CN110119691A - Portrait localization method based on local binary patterns and invariant-moment search - Google Patents

Portrait localization method based on local binary patterns and invariant-moment search Download PDF

Info

Publication number
CN110119691A
CN110119691A (application CN201910318605.8A / CN201910318605A)
Authority
CN
China
Prior art keywords
portrait
image
invariant moment
lbp
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910318605.8A
Other languages
Chinese (zh)
Other versions
CN110119691B (en)
Inventor
谢巍
刘希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910318605.8A priority Critical patent/CN110119691B/en
Publication of CN110119691A publication Critical patent/CN110119691A/en
Application granted granted Critical
Publication of CN110119691B publication Critical patent/CN110119691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a portrait localization method based on local binary patterns (LBP) and invariant-moment search. The method comprises: obtaining, from portrait sample training results, the outer-contour features of the portrait together with standardized LBP feature values and Hu moment invariants — the outer contour of a portrait is approximately elliptical, so the Hu moment invariants are used to capture the geometric features of the image region and yield the contour feature values of the portrait in the image; converting the image containing the portrait to a grayscale image by a color-space transform and then filtering it, removing the interference of color and noise and leaving an image with pronounced texture features; then performing local-binary-pattern (LBP) feature analysis to obtain the LBP feature values of the edges in the image, and combining these LBP feature values with the contour feature values given by the invariant moments to separate the portrait foreground from the background in the image, so as to find the precise position of the portrait in a complex background environment and achieve the purpose of fast search and localization.

Description

Portrait localization method based on local binary patterns and invariant-moment search
Technical field
The present invention relates to several fields of image-processing technology, such as image segmentation and edge detection, and also draws on signal filtering; in general it concerns a portrait localization method based on local binary patterns and invariant-moment search.
Background art
Portrait recognition is a challenging research topic today, and its broad range of applications has attracted wide attention. Portrait recognition technology has countless applications in commerce, the military, information systems, and daily life — for example information-security management, medical care, security systems, artificial intelligence, and criminal investigation. The three most important applications are the following. 1. Identity verification: the portrait data collected in real time is compared with the portrait data stored in the authentication device; if a certain degree of similarity is reached, the verification is judged successful. Such systems are applied in identity cards, passports, driving licenses, airport security, customs entry, and numerous other settings, and can be further extended to fields such as suspect screening. 2. Identity authentication: the collected portrait data is processed and compared against the portraits in a database, which are ranked by similarity to give a judgment and its relative accuracy; this can serve security-control, border-inspection, and similar supervision departments. 3. Video surveillance: a camera continuously photographs, tracks, and positions people; by separating the portrait from the background, the portrait part is obtained and then analyzed against a library of monitored objects, achieving tracking and localization — commonly used in directions such as tracking and event analysis.
So-called image segmentation means dividing an image into several mutually disjoint regions according to features such as gray level, color, spatial texture, and geometry, so that these features show consistency or similarity within the same region and obvious differences between different regions. Put simply, it separates the target from the background in an image to facilitate further processing. Image segmentation is one of the most fundamental and important low-level-vision topics in image processing and computer vision: it is the basic premise of visual analysis and pattern recognition on images, and it is also a classic problem — to date there is neither a universal segmentation method nor an objective criterion for judging whether a segmentation has succeeded.
Regarding portrait recognition with local patterns: external conditions such as illumination and expression add difficulty to the recognition process, and biological analysis indicates that the human visual system itself judges, recognizes, and memorizes object information more readily from the local texture features of a portrait. Portrait recognition methods based on local patterns have therefore been introduced into portrait judgment, and various local feature-extraction approaches have been developed, such as recognition methods based on the Gabor wavelet transform, on local binary patterns (LBP), and on locally generated models.
Summary of the invention
The purpose of the present invention is to solve the above drawbacks in the prior art by providing a portrait localization method based on local binary patterns and invariant-moment search.
The purpose of the present invention can be achieved by adopting the following technical solution:
A portrait localization method based on local binary patterns and invariant-moment search, the portrait localization method comprising the following steps:
S1. Obtain a group of portrait sample images, obtain the regions containing portraits by training, and then standardize the portrait-region location features of all image groups to obtain a feature vector V;
S2. Perform image preprocessing and filtering on a new test image group containing portraits, then carry out local-binary-pattern (LBP) feature analysis to obtain the LBP feature values u_i of the edges in the images, i = 0, 1, 2, …, M, where M is the number of images in the new test group;
S3. Use the Hu moment invariants to capture the geometric features of the LBP feature values u_i, obtaining the contour feature vectors l_i of the images;
S4. Standardize the contour feature vectors to obtain standardized vectors L_i, then measure their Euclidean distance to the feature vector V; the vectors L_k whose distance is below the threshold ε, k = 0, 1, 2, …, N, N < M, represent exactly the regions where the portraits are located.
Further, obtaining the portrait sample training results and the standardized portrait-region location features in step S1 specifically includes:
Establishing the sample set S for training portrait localization;
Further, using the training sample set S, the LBP feature values of the portrait edges in the images are obtained through local-binary-pattern feature analysis, as follows:
It is assumed that the texture distribution in a local region can be described by the joint distribution density of the gray values of the pixels in that region, defined as:

T = t(g_c, g_0, …, g_{P−1})

where g_c is the gray value of the center point of the circular local region, and g_p, p = 0, 1, …, P−1, are the P points spaced at equal intervals around the center. Because it cannot be ensured that all these points fall exactly on integer pixel positions, bilinear interpolation is used to compute the gray value of any point that does not fall on a pixel position. The coordinates of g_p in the neighborhood can then be expressed by the following formulas:

x_p = x_c + R·cos(2πp/P),  y_p = y_c − R·sin(2πp/P)
where (x_c, y_c) are the coordinates of the center point. Without losing any information, the center value g_c can be subtracted from the gray values g_p of the other pixels in the neighborhood, so that the texture T of the local region is represented by the joint distribution of the differences between the peripheral gray values and the center point:

T = t(g_c, g_0 − g_c, …, g_{P−1} − g_c)
If it is further assumed that the differences g_p − g_c (p = 0, 1, …, P−1) are independent of the center value g_c, then:

T ≈ t(g_c) t(g_0 − g_c, …, g_{P−1} − g_c)
In practice these assumptions cannot hold exactly: the range of gray values in a computer image is only 0–255, so values of g_c near the ends of that range necessarily compress the range of the differences, and the above assumption and inference may lead to some information loss. This is not without remedy, however: by permitting a small loss of image information, the local texture in the image can be given invariance to gray-level shifts within the grayscale range. Moreover, t(g_c) merely describes the overall luminance distribution of the image; it is unrelated to the state of the local texture and carries no feature useful for texture analysis, so it can simply be omitted, giving:

T ≈ t(g_0 − g_c, …, g_{P−1} − g_c)
The distribution of the difference function above clearly marks the texture features of each point in the selected region: at most pixels the variation in every direction can be large; at an edge position the value in a certain direction will be large while the values in other directions are small; and in parts of the region where the change is gentle, the differences will be very small, close to 0.
Since the term t(g_c) related to luminance change has been eliminated, the formula now has gray-level shift invariance: adding or subtracting the same value from the gray values of all P+1 pixels in the neighborhood does not change its texture description. However, if the gray values are all multiplied by a common factor, the texture features would change. To keep invariance to such gray-scale scaling as well, only the signs of the differences are used:

T ≈ t(s(g_0 − g_c), …, s(g_{P−1} − g_c))

where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise.
From the above, a P-bit binary number is obtained; weighting each bit by 2^p according to its position and summing yields a unique LBP value for the points in the neighborhood, referred to as a pattern. It represents the texture feature of the neighborhood centered at (x_c, y_c) and is expressed as follows:

LBP(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p

In effect this formula turns the signs of the computed differences into a P-bit binary number, forming a discrete LBP value in the range 0 to 2^P − 1. In this way the differences become one of the LBP patterns, and the gray distribution and texture features of the region can be approximately represented by this LBP value:

T ≈ t(LBP(x_c, y_c))
The LBP operator is therefore robust to monotonic gray-level changes, and as long as the relative positions of the sampled points are preserved, the computed LBP value is invariant. For a circular region of radius R containing P points g_p (p = 0, 1, …, P−1), the operator is denoted LBP_{P,R}; common operators include LBP_{8,1}, LBP_{8,2}, and LBP_{16,2}.
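As an illustration of the LBP_{P,R} operator described above, the following minimal Python sketch computes the LBP value of one pixel; the function name and the simple bounds handling (neighbors assumed to lie inside the image) are illustrative, not part of the invention.

```python
import numpy as np

def lbp_value(img, xc, yc, P=8, R=1.0):
    """LBP code of the pixel at (yc, xc) in a 2-D gray image indexed
    img[y, x]: threshold P circular neighbours (bilinearly interpolated)
    against the centre gray value and sum the sign bits with weights 2**p."""
    gc = img[yc, xc]
    code = 0
    for p in range(P):
        # neighbour coordinates on a circle of radius R around the centre
        x = xc + R * np.cos(2 * np.pi * p / P)
        y = yc - R * np.sin(2 * np.pi * p / P)
        # bilinear interpolation for non-integer positions
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        gp = (img[y0, x0] * (1 - dx) * (1 - dy)
              + img[y0, x0 + 1] * dx * (1 - dy)
              + img[y0 + 1, x0] * (1 - dx) * dy
              + img[y0 + 1, x0 + 1] * dx * dy)
        # s(gp - gc): 1 if the difference is non-negative, else 0
        code += (1 if gp - gc >= 0 else 0) << p
    return code
```

On a uniform patch every difference is 0, so every bit is set and the code is 2^P − 1; a centre brighter than all its neighbours gives code 0.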
Further, using the training sample set, the Hu moment invariants are used to capture the geometric features of the portrait in the image region and obtain the contour feature values of the portrait, as follows:
The Hu invariant moments are a region-based image-shape description method. For a two-dimensional discrete image f(x, y) with discrete pixels, a plane rectangular coordinate system is established with the upper-right corner of the image as the origin, (x, y) denoting the horizontal and vertical coordinates of a pixel. The (p+q)-order moment is defined as follows:

m_pq = Σ_x Σ_y x^p · y^q · f(x, y)

and the corresponding (p+q)-order central moment is:

μ_pq = Σ_x Σ_y (x − x_0)^p · (y − y_0)^q · f(x, y)

where x_0 = m_10/m_00 and y_0 = m_01/m_00 respectively denote the gray centroid of the image in the horizontal and vertical directions, so (x_0, y_0) is the gray centroid of the image, and p, q = 0, 1, 2, ….
When the image is translated, m_pq changes, while μ_pq, although translation invariant, is still sensitive to scale changes, so each central moment needs to be normalized:

y_pq = μ_pq / μ_00^γ

where γ = (p + q + 2)/2, p + q = 2, 3, …. The normalized central moments have translation invariance and also scale invariance.
On this basis, Hu proposed constructing invariant moments from the second- and third-order central moments and selected seven of them as the Hu invariant moments I_1–I_7, with y_pq normalized strictly as above. They are respectively:

I_1 = y_20 + y_02
I_2 = (y_20 − y_02)² + 4·y_11²
I_3 = (y_30 − 3y_12)² + (3y_21 − y_03)²
I_4 = (y_30 + y_12)² + (y_03 + y_21)²
I_5 = (y_30 − 3y_12)(y_30 + y_12)[(y_30 + y_12)² − 3(y_21 + y_03)²] + (3y_21 − y_03)(y_21 + y_03)[3(y_30 + y_12)² − (y_21 + y_03)²]
I_6 = (y_20 − y_02)[(y_30 + y_12)² − (y_03 + y_21)²] + 4y_11(y_30 + y_12)(y_03 + y_21)
I_7 = (3y_21 − y_03)(y_30 + y_12)[(y_30 + y_12)² − 3(y_21 + y_03)²] + (y_30 − 3y_12)(y_03 + y_21)[(y_03 + y_21)² − 3(y_30 + y_12)²]
The seven Hu invariant moments vary over rather wide ranges, so during processing they are first compressed, either by Gaussian normalization or by taking logarithms; the method of log compression is used in the present invention, specifically:

I_k′ = |log|I_k||, k = 1, 2, …, 7
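The moment definitions above can be sketched directly in Python. This is an illustrative NumPy implementation of the raw moments m_pq, the normalized central moments y_pq, the seven invariants I_1–I_7, and the log compression; the function names are hypothetical and no claim is made to match the patented implementation exactly.

```python
import numpy as np

def hu_invariants(f):
    """Seven Hu moment invariants I1..I7 of a 2-D gray image f,
    computed from normalized central moments y_pq as defined above."""
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]

    def m(p, q):                         # raw (p+q)-order moment
        return float(np.sum(xs**p * ys**q * f))

    x0, y0 = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)   # gray centroid

    def y_(p, q):                        # normalized central moment
        mu = float(np.sum((xs - x0)**p * (ys - y0)**q * f))
        return mu / m(0, 0) ** ((p + q + 2) / 2)

    y20, y02, y11 = y_(2, 0), y_(0, 2), y_(1, 1)
    y30, y03, y21, y12 = y_(3, 0), y_(0, 3), y_(2, 1), y_(1, 2)
    I1 = y20 + y02
    I2 = (y20 - y02)**2 + 4 * y11**2
    I3 = (y30 - 3*y12)**2 + (3*y21 - y03)**2
    I4 = (y30 + y12)**2 + (y03 + y21)**2
    I5 = ((y30 - 3*y12)*(y30 + y12)*((y30 + y12)**2 - 3*(y21 + y03)**2)
          + (3*y21 - y03)*(y21 + y03)*(3*(y30 + y12)**2 - (y21 + y03)**2))
    I6 = ((y20 - y02)*((y30 + y12)**2 - (y03 + y21)**2)
          + 4*y11*(y30 + y12)*(y03 + y21))
    I7 = ((3*y21 - y03)*(y30 + y12)*((y30 + y12)**2 - 3*(y21 + y03)**2)
          + (y30 - 3*y12)*(y03 + y21)*((y03 + y21)**2 - 3*(y30 + y12)**2))
    return [I1, I2, I3, I4, I5, I6, I7]

def log_compress(I):
    """Log compression |log|I_k|| to tame the dynamic range (0 maps to 0)."""
    return [abs(np.log(abs(v))) if v != 0 else 0.0 for v in I]
```

Because central moments are taken about the gray centroid, translating the same blob within the image leaves the invariants unchanged; libraries such as OpenCV provide equivalent functionality (`cv2.moments` / `cv2.HuMoments`).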
Further, image preprocessing and filtering are carried out on the test images containing portraits, specifically as follows:
The test images in the present invention are RGB images collected by the same camera. To extract the texture features of the images effectively, the three-channel RGB image is first converted into a grayscale image with values in 0–255, i.e. grayscaled.
The weighted-mean method is used in the present invention:

Gray = 0.299·R + 0.587·G + 0.114·B
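The weighted-mean conversion above can be sketched as follows (illustrative NumPy code; an (H, W, 3) array with channel order R, G, B is assumed):

```python
import numpy as np

def to_gray(rgb):
    """Weighted-mean grayscale conversion: Gray = 0.299R + 0.587G + 0.114B."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```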
Filtering is an effective and simple signal-processing technique for suppressing noise in digital images. Wiener filtering is used in the present invention.
The principle of Wiener filtering is based on the minimum mean-square error criterion, as follows:

e² = E{ [f(x, y) − f̂(x, y)]² }

In the above formula e² is the mean-square error to be minimized, f(x, y) is a pixel of the non-degraded (original) image, and f̂(x, y) is a pixel of the (sought) image that minimizes the mean-square error.
After rearrangement, the Wiener filter used in image restoration is expressed in the frequency domain as follows:

W(u, v) = H*(u, v) / [ |H(u, v)|² + p_n(u, v)/p_f(u, v) ]

In the formula, W(u, v) is the filter in the frequency domain, H(u, v) is the degradation function, |H(u, v)|² = H*(u, v)·H(u, v) where H*(u, v) is the complex conjugate of H(u, v), p_n(u, v) = |N(u, v)|² is the power spectrum of the noise, and p_f(u, v) = |F(u, v)|² is the power spectrum of the non-degraded image; the signal-to-noise ratio is defined by the formula:

SNR(u, v) = p_f(u, v) / p_n(u, v)
The original image is unavailable, so its power spectrum p_f(u, v) cannot be obtained, and the power spectrum of the noise signal is likewise hard to obtain. The noise-to-signal term is therefore usually replaced by a constant c, so the above formula can be written as:

W(u, v) = H*(u, v) / [ |H(u, v)|² + c ]

where c is taken between 0.001 and 0.1.
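A minimal frequency-domain sketch of the constant-c Wiener filter above; the function name and interface are illustrative, and the degradation kernel h is assumed to be given zero-padded to the image size with its impulse at the origin.

```python
import numpy as np

def wiener_deconvolve(g, h, c=0.01):
    """Restore the degraded image g with the Wiener filter
    W(u, v) = H*(u, v) / (|H(u, v)|^2 + c), evaluated via the 2-D FFT."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)                 # transfer function of the kernel
    W = np.conj(H) / (np.abs(H)**2 + c)
    return np.real(np.fft.ifft2(W * G))
```

With a delta (identity) kernel H(u, v) ≡ 1 and c = 0, the filter returns the input unchanged, which provides a quick sanity check.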
The LBP feature values and geometric features of the portrait region are standardized to obtain the contour feature vector V of the portrait, as follows:

x′ = (x − μ) / σ

where μ is the mean, σ is the standard deviation, and x′ is the standardized portrait feature value. From the values x′ of the multiple images in the training sample, the standardized contour feature vector V is obtained.
Further, the same local-binary-pattern and invariant-moment analysis is applied to the test images to obtain the test-image features L_i;
Further, the Euclidean distance between L_i and the feature vector V is measured; the vectors L_k below the threshold ε, k = 0, 1, 2, …, N, N < M, represent exactly the regions where the portraits are located. The Euclidean distance between two n-dimensional vectors a(x_11, x_12, …, x_1n) and b(x_21, x_22, …, x_2n) is:

d(a, b) = √( Σ_{k=1}^{n} (x_1k − x_2k)² )

Using this n-dimensional distance, d_i, the distance between L_i and V, is computed; if d_i/d_V is less than the threshold ε = 0.1, the region represented by the feature vector L_i is exactly the region where a portrait is located, and portrait localization is achieved.
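The standardization and distance test above can be sketched as follows. The source leaves d_V unspecified, so this sketch reads d_i/d_V as the distance to V divided by the norm of V — an assumption, marked in the comments; the function names are illustrative.

```python
import numpy as np

def standardize(x):
    """x' = (x - mu) / sigma over the components of a feature vector."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def match_regions(Ls, V, eps=0.1):
    """Return the indices k whose feature vector L_k satisfies
    d(L_k, V) / ||V|| < eps (one reading of the d_i/d_V test above)."""
    V = np.asarray(V, dtype=float)
    dV = np.linalg.norm(V)             # assumed meaning of d_V
    return [k for k, L in enumerate(Ls)
            if np.linalg.norm(np.asarray(L, dtype=float) - V) / dV < eps]
```

An exact match has distance 0 and is always accepted; a far-away vector is rejected by the relative threshold.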
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention exploits local features. The key difference between global and local features is the spatial extent covered during feature extraction: global features are extracted from the information of the whole image, whereas local features are obtained from local regions of the image. The present invention takes into account the differences in pixel values, color, and texture information between different regions of the image, and local features fully demonstrate their distinctive descriptive power. Representing the whole image by features that are stable and highly discriminative gives the local feature descriptors stronger stability and robustness.
2. The present invention extracts image features through the seven invariant moments computed from low-order moments, which describe the contour information of an image well and are inherently rotation invariant; through the normalization of the low-order moments, rotation, scale, and translation invariance are obtained, so the position of the portrait contour can be identified effectively, achieving the goals of low computation and high speed.
Description of the drawings
Fig. 1 is the flow chart of portrait localization based on local binary patterns and invariant-moment search.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Embodiment
This embodiment discloses a portrait localization method based on local binary patterns and invariant-moment search. Its purpose is to learn images containing portrait contours through LBP and invariant-moment features and acquire the features of the human contour, so as to quickly locate the portrait in a complex image. It specifically includes the following steps:
T1. Obtain the portrait sample training results and the standardized portrait-region location features, specifically including:
T11. Establish the sample set S for training portrait localization;
T12. Using the portrait-contour parts in the training sample set, obtain a group of feature values Q of the portrait edges in the images through the analysis of local-binary-pattern features and Hu invariant moments;
T13. Standardize Q to obtain the feature vector V of the portrait contour.
T2. Perform image preprocessing and filtering on the new test image group containing portraits. As described above, the RGB test images collected by the camera are converted to 0–255 grayscale images by the weighted-mean method Gray = 0.299·R + 0.587·G + 0.114·B, and are then denoised with the Wiener filter W(u, v) = H*(u, v)/[|H(u, v)|² + c], with the constant c taken between 0.001 and 0.1.
T3. Carry out local-binary-pattern (LBP) feature analysis, exactly as described above, to obtain the LBP feature values u_i of the edges in the images, i = 0, 1, 2, …, M, where M is the number of images in the new test group: for each pixel, the signs of the differences between the P circular neighbors at radius R (bilinearly interpolated where necessary) and the center gray value are weighted by 2^p and summed, LBP(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, which approximately represents the gray distribution and texture features of the neighborhood.
T4. Use the Hu moment invariants to capture the geometric features of the LBP feature values u_i, i = 0, 1, 2, …, M, obtaining the contour feature vectors l_i of the images, i = 0, 1, 2, …, M: the seven Hu invariant moments I_1–I_7 are computed from the normalized second- and third-order central moments y_pq exactly as defined above, and are then log-compressed to reduce their dynamic range.
T5. Standardize the contour feature vectors to obtain the standardized vectors L_i, i = 0, 1, 2, …, M.
T6. Measure the Euclidean distance to the feature vector V; the vectors L_k below the threshold ε, k = 0, 1, 2, …, N, N < M, represent exactly the regions where the portraits are located: using the n-dimensional Euclidean distance, d_i, the distance between L_i and V, is computed, and if d_i/d_V is less than the threshold ε = 0.1, the region represented by the feature vector L_i is exactly the region where a portrait is located, achieving portrait localization.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by it; any other changes, modifications, substitutions, combinations, and simplifications made without departing from the spirit and principles of the present invention shall be equivalent substitutions and are included within the scope of protection of the present invention.

Claims (6)

1. A portrait localization method based on local binary pattern and invariant moment search, characterized in that the portrait localization method comprises the following steps:
S1. Obtaining a group of portrait sample images, obtaining the portrait-containing regions in the images through training, and then standardizing the portrait region localization features of all the image groups to obtain a group of feature vectors V;
S2. Performing image preprocessing and filtering on a new test image group containing portraits, and then carrying out LBP feature analysis to obtain the LBP feature values ui, i = 0, 1, 2, ..., M, of the edges in the images, where M is the number of images in the new test image group;
S3. Using Hu's invariant moments to capture the geometric features of the LBP feature values ui, obtaining the contour feature vectors li of the images;
S4. Standardizing the boundary contour feature vectors to obtain standardized vectors Li, and then performing Euclidean distance measurement against the feature vector V; the regions represented by the vectors Lk, k = 0, 1, 2, ..., N, N < M, whose distance is less than the threshold ε are exactly the regions where the portrait is located.
2. The portrait localization method based on local binary pattern and invariant moment search according to claim 1, characterized in that the step S1 specifically comprises:
establishing a sample set S for training portrait localization;
analysing the portrait contour parts in said training sample set by local binary pattern features and Hu's invariant moments to obtain a group of feature values Q of the portrait edges in the images;
standardizing Q to obtain the feature vector V of the portrait contour.
3. The portrait localization method based on local binary pattern and invariant moment search according to claim 2, characterized in that, in establishing the sample set S for training portrait localization, LBP feature analysis is performed using said training sample set S to obtain the LBP feature values of the portrait edges in the images, specifically as follows:
Assume that the texture distribution in a local region is the joint distribution density of the gray values of the pixels in the local region, defined as:
T = t(gc, g0, ..., gP−1)
where gc is the gray value of the centre point of the circular local region and gp, p = 0, 1, ..., P−1, are the P evenly spaced points in the corresponding region besides the centre point; for sample points that do not fall on a pixel position, the gray value is calculated using the bilinear interpolation algorithm. The coordinates of gp in the neighbourhood of the image are expressed by the following formulas:
xp = xc + R·cos(2πp/P)
yp = yc − R·sin(2πp/P)
where (xc, yc) denotes the coordinates of the centre point. Without losing any information, the gray value gc of the centre point can be subtracted from the gray values gp of the other pixels in the neighbourhood, so the texture T of the local region is expressed by the centre point C together with the differences of the peripheral gray values:
T = t(gc, g0 − gc, ..., gP−1 − gc)
Further assuming that the differences gp − gc, p = 0, 1, ..., P−1, between the periphery gp and the centre point gc are independent of the centre point gc, there is:
T ≈ t(gc) t(g0 − gc, ..., gP−1 − gc)
The brightness distribution t(gc) of the image is directly omitted here, giving:
T ≈ t(g0 − gc, ..., gP−1 − gc)
To keep the description invariant to gray-scale changes, only the signs of the differences are used:
T ≈ t(s(g0 − gc), ..., s(gP−1 − gc))
where s(x) = 1 if x ≥ 0 and s(x) = 0 if x < 0.
An 8-bit binary number is thus obtained (for P = 8); weighting each point by 2^p according to its position and summing yields the unique LBP value associated with the points in the neighbourhood, referred to as its pattern, which represents the texture feature of the neighbourhood centred at (xc, yc), expressed as follows:
LBP(xc, yc) = Σ_{p=0}^{P−1} s(gp − gc)·2^p
The gray distribution and texture features of the region are approximated by this LBP value:
T ≈ t(LBP(xc, yc))
For a circular region of radius R containing P points gp, p = 0, 1, ..., P−1, the operator is denoted LBP_{P,R}.
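A minimal pure-Python sketch of the circular LBP operator described in claim 3; the function name `lbp_value`, the defaults P = 8 and R = 1, and the index clamping at image borders are illustrative assumptions:

```python
import math

def lbp_value(img, xc, yc, P=8, R=1.0):
    """LBP_{P,R} code at centre (xc, yc): threshold P evenly spaced points
    on a circle of radius R against the centre gray value gc, then weight
    the sign bits s(gp - gc) by 2^p."""
    H, W = len(img), len(img[0])

    def bilinear(x, y):
        # gray value at an off-grid sample point via bilinear interpolation
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        dx, dy = x - x0, y - y0
        return ((1 - dx) * (1 - dy) * img[y0][x0] + dx * (1 - dy) * img[y0][x1]
                + (1 - dx) * dy * img[y1][x0] + dx * dy * img[y1][x1])

    gc = img[yc][xc]
    code = 0
    for p in range(P):
        # neighbour coordinates: xp = xc + R cos(2*pi*p/P), yp = yc - R sin(2*pi*p/P)
        gp = bilinear(xc + R * math.cos(2 * math.pi * p / P),
                      yc - R * math.sin(2 * math.pi * p / P))
        code += (1 if gp - gc >= 0 else 0) << p  # s(gp - gc) * 2^p
    return code
```

For a dark centre pixel surrounded by brighter neighbours every sign bit is 1, giving the maximal code 2^P − 1; a bright centre on a darker background gives 0.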
4. The portrait localization method based on local binary pattern and invariant moment search according to claim 2, characterized in that the portrait contour parts in said training sample set are analysed by local binary pattern features and Hu's invariant moments to obtain a group of feature values Q of the portrait edges in the images, specifically as follows:
Hu's invariant moments are a region-based image shape description method; for a two-dimensional discrete image f(x, y), the (p+q)-th order moment is defined as follows:
mpq = Σx Σy x^p y^q f(x, y)
The corresponding (p+q)-th order central moment is:
μpq = Σx Σy (x − x0)^p (y − y0)^q f(x, y)
where x0 and y0 respectively denote the grayscale centre of gravity of the image in the horizontal and vertical directions, so (x0, y0) is the grayscale centre of gravity of the image, with p, q = 0, 1, 2, ...;
Each central moment is normalized:
ypq = μpq / μ00^γ
where γ = (p + q + 2)/2, p + q = 2, 3, ...; the normalized central moments are invariant to both translation and scale;
Invariant moments are constructed from the second- and third-order central moments, and seven of them are selected as the Hu invariant moments. The pixels in a digital image are discrete; a plane rectangular coordinate system is established with the upper-right corner of the image as the origin, (x, y) denoting the horizontal and vertical coordinates of a pixel, and I1 to I7 being the seven Hu invariant moments, respectively:
I1 = y20 + y02
I2 = (y20 − y02)^2 + 4y11^2
I3 = (y30 − 3y12)^2 + (3y21 − y03)^2
I4 = (y30 + y12)^2 + (y03 + y21)^2
I5 = (y30 − 3y12)(y30 + y12)[(y30 + y12)^2 − 3(y21 + y03)^2] + (3y21 − y03)(y21 + y03)[3(y30 + y12)^2 − (y21 + y03)^2]
I6 = (y20 − y02)[(y30 + y12)^2 − (y03 + y21)^2] + 4y11(y30 + y12)(y21 + y03)
I7 = (3y21 − y03)(y30 + y12)[(y30 + y12)^2 − 3(y21 + y03)^2] + (y30 − 3y12)(y03 + y21)[(y03 + y21)^2 − 3(y30 + y12)^2]
The method of log compression is then applied, specifically:
Ik' = lg|Ik|, k = 1, 2, ..., 7.
5. The portrait localization method based on local binary pattern and invariant moment search according to claim 2, characterized in that, in analysing the portrait contour parts in said training sample set by local binary pattern features and Hu's invariant moments to obtain a group of feature values Q of the portrait edges in the images, x' is the standardized portrait feature value, specifically:
x' = (x − μ) / σ
where μ is the mean value and σ is the standard deviation;
after this Z-score standardization, all data cluster near 0 with variance 1, and the standardized contour feature vector V is obtained from the x' values of the multiple images in the training sample.
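The Z-score standardization of claim 5 reduces to a few lines; the function name `z_score` is an illustrative assumption:

```python
def z_score(values):
    """Z-score standardization: x' = (x - mu) / sigma, so the standardized
    feature values cluster near 0 with unit variance."""
    n = len(values)
    mu = sum(values) / n                                  # mean
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5  # std deviation
    return [(v - mu) / sigma for v in values]
```

The output always has zero mean and unit variance, which is what makes feature vectors from different images directly comparable in the Euclidean distance test.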
6. The portrait localization method based on local binary pattern and invariant moment search according to claim 1, characterized in that the image preprocessing and filtering of the test images containing portraits in step S2 is specifically as follows:
converting the three-channel RGB image into a grayscale image with values in 0–255 using the weighted mean method;
and filtering using a Wiener filter.
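A sketch of the grayscale conversion in claim 6; the 0.299/0.587/0.114 luminance weights are an assumption, since the claim only says "weighted mean method":

```python
def rgb_to_gray(rgb):
    """Weighted-mean grayscale conversion: each (R, G, B) pixel is mapped
    to a 0-255 gray value using the common luminance weights (assumed)."""
    return [[min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
             for (r, g, b) in row] for row in rgb]
```

The subsequent Wiener filtering step is not reimplemented here; with SciPy available it could be applied as `scipy.signal.wiener(gray)`.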
CN201910318605.8A 2019-04-19 2019-04-19 Portrait positioning method based on local two-dimensional mode and invariant moment search Active CN110119691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910318605.8A CN110119691B (en) 2019-04-19 2019-04-19 Portrait positioning method based on local two-dimensional mode and invariant moment search


Publications (2)

Publication Number Publication Date
CN110119691A true CN110119691A (en) 2019-08-13
CN110119691B CN110119691B (en) 2021-07-20

Family

ID=67521163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910318605.8A Active CN110119691B (en) 2019-04-19 2019-04-19 Portrait positioning method based on local two-dimensional mode and invariant moment search

Country Status (1)

Country Link
CN (1) CN110119691B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268539A (en) * 2014-10-17 2015-01-07 中国科学技术大学 High-performance human face recognition method and system
CN104318219A (en) * 2014-10-31 2015-01-28 上海交通大学 Face recognition method based on combination of local features and global features
CN104463085A (en) * 2013-09-23 2015-03-25 深圳市元轩科技发展有限公司 Face recognition method based on local binary pattern and KFDA
CN105069408A (en) * 2015-07-24 2015-11-18 上海依图网络科技有限公司 Video portrait tracking method based on human face identification in complex scenario
CN105117688A (en) * 2015-07-29 2015-12-02 重庆电子工程职业学院 Face identification method based on texture feature fusion and SVM
CN105426889A (en) * 2015-11-13 2016-03-23 浙江大学 PCA mixed feature fusion based gas-liquid two-phase flow type identification method
CN105631451A (en) * 2016-01-07 2016-06-01 同济大学 Plant leave identification method based on android system
CN106056064A (en) * 2016-05-26 2016-10-26 汉王科技股份有限公司 Face recognition method and face recognition device
CN106897590A (en) * 2015-12-17 2017-06-27 阿里巴巴集团控股有限公司 The method of calibration and device of figure information
CN107578007A (en) * 2017-09-01 2018-01-12 杭州电子科技大学 A kind of deep learning face identification method based on multi-feature fusion
CN107679509A (en) * 2017-10-19 2018-02-09 广东工业大学 A kind of small ring algae recognition methods and device
CN108038487A (en) * 2017-11-22 2018-05-15 湖北工业大学 Plant leaf blade discriminating conduct based on image segmentation with Fusion Features
CN108460420A (en) * 2018-03-13 2018-08-28 江苏实达迪美数据处理有限公司 A method of classify to certificate image
CN109063566A (en) * 2018-07-02 2018-12-21 天津煋鸟科技有限公司 A kind of optical detecting method for human testing
CN109522924A (en) * 2018-09-28 2019-03-26 浙江农林大学 A kind of broad-leaf forest wood recognition method based on single photo
CN109598681A (en) * 2018-11-01 2019-04-09 兰州理工大学 The reference-free quality evaluation method of image after a kind of symmetrical Tangka repairs


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QING CHEN ET AL.: "A Comparative Study of Fourier Descriptors and Hu's Seven Moment Invariants for Image Recognition", 《IEEE》 *
LIANG JIAXIN: "Research on Vehicle-Type Feature Extraction Based on Frontal Vehicle Images", 《China Master's Theses Full-text Database, Engineering Science and Technology II》 *
HUANG FEIFEI: "Research on Face Recognition Based on LBP", 《China Master's Theses Full-text Database, Information Science and Technology》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant