CN106446872A - Detection and recognition method of human face in video under low-light conditions - Google Patents


Info

Publication number
CN106446872A
CN106446872A (application CN201610972195.5A)
Authority
CN
China
Prior art keywords
image
face
person
feature
atlas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610972195.5A
Other languages
Chinese (zh)
Inventor
张斯尧
刘向
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Yuan Xin Electro-Optical Technology Inc (us) 62 Martin Road Concord Massachusetts 017
Original Assignee
Hunan Yuan Xin Electro-Optical Technology Inc (us) 62 Martin Road Concord Massachusetts 017
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Yuan Xin Electro-Optical Technology Inc (us) 62 Martin Road Concord Massachusetts 017
Priority to CN201610972195.5A
Publication of CN106446872A
Pending legal status (current)

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of computer vision, and in particular to a method for detecting and recognizing human faces in video under low-light conditions. Aimed at the specific problem of recognizing faces under low illumination, the system is designed to provide a video system with a face recognition function under low-light conditions. The algorithm consists of three steps: 1. preprocessing the low-light face video image based on fuzzy theory; 2. training a classifier based on the Adaboost face detection algorithm; 3. recognizing faces in video based on feature fusion, finally yielding the face recognition result. In practice, the method can be embedded in an FPGA for use in video surveillance systems or cameras with a face recognition function. The method has good stability, a high recognition rate, and fast computation; it effectively recognizes facial features under low-light conditions; it is applicable to various systems, such as night-vision surveillance and identity verification under low illumination; and it has strong practicality.

Description

A video face detection and recognition method under low illumination
Technical field
The present invention relates to the field of computer vision, and in particular to a video face detection and recognition method under low illumination.
Background technology
In recent years, face recognition technology has attracted wide attention from researchers in computer vision, pattern recognition, and related areas, and has developed extremely rapidly. The technology has a wide range of applications: it can be used not only in biometric-recognition fields such as criminal identification, driver's-license and passport inspection, and immigration management, but also in information-security fields such as Windows authentication, database management, document management, video conferencing, and video surveillance. When we recognize a person, whether from an earlier or a later appearance, the human eye can identify him with ease; this is a strength of the human visual system. Computer vision attempts to simulate this visual capability from optical images, completing the automatic analysis of image information resources and thereby realizing machine intelligence.
Computer vision has achieved good results in recent years, but current machine recognition ability still falls far short of human vision. This is partly because humans share an almost identical facial structure and nearly uniform facial texture, which forces recognition to rely on the weak feature differences among faces and undoubtedly greatly increases the difficulty of face recognition. In particular, when sampling conditions such as illumination, pose, and expression change, the collected images vary greatly; changes in hairstyle, age, and accessories, as well as noise and occlusion, further complicate the problem. Meanwhile, in real life under low-light conditions (such as at night), the low scene illumination (weak optical signal) leads to low visibility: the observed scene signal is very faint, image quality is low, and objects are blurred and unclear. After operations such as storage, conversion, and transmission, the quality of a low-illumination image degrades even further, making face recognition and detection more difficult. Therefore, studying how to effectively process facial video images under low illumination, and developing systems with a face recognition function under low-light conditions to reduce the impact of weak optical signals on face recognition, has important value for certain specialized industries.
Based on the above, the present invention proposes a video face recognition method under low illumination based on Adaboost and feature fusion. The method has good stability, a high recognition rate, and fast computation, and can effectively detect and identify facial features under low-illumination conditions. It is applicable to systems such as night-vision surveillance and identity verification under low illumination, and has good practicality.
Content of the invention
The technical problem to be solved by the present invention is as follows: existing face recognition systems under low illumination commonly suffer from inaccurate localization and a low recognition rate. To improve the accuracy of the face detection and recognition system and to meet its real-time requirements, a video face detection and recognition method under low illumination is proposed.
The technical scheme specifically includes the following steps:
S1, low-illumination face image preprocessing based on fuzzy theory;
S1.1, HSV color-space transformation: the acquired RGB image is transformed into the HSV color space;
S1.2, nonlinear transformation: a smooth mapping curve is applied so that the grayscale variation of the processed image is smoother;
S1.3, image enhancement based on fuzzy theory: a membership function transforms the image from the spatial domain into the fuzzy domain, enhancement is then performed on the fuzzy feature domain, and the contrast of the image is increased;
S2, training of the detector in the Adaboost-based face detection algorithm: a pre-prepared training set of positive and negative samples is used to train several different weak classifiers, the several different weak classifiers are then combined into one strong classifier, and finally the strong classifier searches the region of interest for the face target; the positive and negative samples refer, respectively, to pictures containing only a single face and non-face pictures containing other complex backgrounds;
S3, video face recognition based on feature fusion.
As a further improvement of the technical scheme of the present invention, step S1.2 includes: the nonlinear transformation adopts a logarithmic transformation, in which the gray value of each output pixel is a logarithmic function of the corresponding input pixel. The logarithmic function adopted by the logarithmic transformation is:

f' = lg(V·d + 1)

where f' is the image after logarithmic transformation, V is the luminance component of the HSV color space, and d is a user-defined parameter.
As a further improvement of the technical scheme of the present invention, step S1.3 specifically includes the following steps:
S1.3.1, transformation from the spatial domain to the fuzzy domain: a sinusoidal membership function maps the image from the spatial domain to the fuzzy domain, yielding the fuzzy feature domain;
The sinusoidal membership function is defined as follows:
where f(i, j) is the gray level of pixel (i, j) in the image to be processed, u_ij is the membership degree of the gray level x_ij of pixel (i, j) relative to f_max, f_max and f_min are the maximum and minimum gray levels of the image to be processed, and k is an adjustable parameter defined as follows:
k = mean(f) / (f_max - f_min)
where mean(f) is the average gray value of the image;
A new fuzzy feature domain is finally obtained by transforming with the above sinusoidal membership function and the formula for k;
S1.3.2, fuzzy-domain image enhancement;
A new fuzzy enhancement operator is defined:
where T serves as the crossover point of the image enhancement; as the number of iterations increases, the image contrast strengthens accordingly;
S1.3.3, inverse transformation of the fuzzy image;
From the new fuzzy feature domain, a single inverse transformation maps the image from the fuzzy domain back to gray space, finally yielding the enhanced image:
where G^(-1) is the inverse transformation and f'(i, j) is the gray level of pixel (i, j) in the enhanced image;
S1.3.4, filtering and denoising of the output image f'(i, j): a bilateral filter performs further filtering and noise reduction on the output image, and finally a high-quality, clear low-illumination image is output.
As a further improvement of the technical scheme of the present invention, step S2 specifically comprises rectangular-feature computation, weak-classifier training, strong-classifier training, and cascade-classifier construction;
The rectangular-feature computation uses rectangular features as the face feature vector: two or more congruent adjacent rectangles form a feature template containing black and white rectangles, with the upper-left corner white and the black and white rectangles then alternating in turn; the feature value of a feature template is obtained by subtracting the sum of the black-rectangle pixels from the sum of the white-rectangle pixels, and the feature values are computed using an integral image;
The cascade classifier is constructed in a cascaded manner, i.e., the classifier is a waterfall-type cascade of classifiers.
As a further improvement of the technical scheme of the present invention, step S3 comprises the following steps:
S3.1: Discrete cosine transform (Discrete Cosine Transform, DCT) feature generation:
At the positions of the right eye and the mouth of the face image, a grid covers each corresponding region and is split into non-overlapping sub-blocks. A discrete cosine transform is applied to each sub-block, and the resulting DCT coefficients are sorted by zig-zag scanning. Since the low-frequency DCT coefficients reflect the image contour and the high-frequency part reflects image detail, the 1st coefficient after sorting is discarded and the next several coefficients are chosen, in order, as the block feature;
S3.2: SIFT feature generation:
At the positions of the nose and the forehead of the face image, a grid covers each corresponding region and is split into non-overlapping sub-blocks, and a corresponding local SIFT feature is generated for each block;
S3.3: Establishing the K-NN model:
Define person P_i. P_i: X = [I_1,i, I_2,i, …, I_k,i] is the face atlas of person P_i in one shot, which contains K_i faces in total. I_k,i is one face picture of atlas X, represented by 5 corresponding local feature vectors; concatenating these 5 local feature vectors forms one feature description vector x for I_k,i. The classification result of the single face picture I_k,i is obtained by the following process;
The k nearest neighbors S_i (i = 1, 2, …, k) of the test vector x are found, with distance scores s_i = d(x, S_i). The scores are normalized to [0, 1] using min-max normalization:

s'_i = (s_i - s_min) / (s_max - s_min)

where s_max is the score of the point farthest from the test vector x among the k nearest points and s_min is the score of the point nearest to the test vector x among the k nearest points;
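The min-max normalization of the nearest-neighbor distance scores can be sketched as follows (a minimal illustration in Python; the function name and the plain-list representation are choices of this sketch, not part of the patent):

```python
def min_max_normalize(scores):
    """Map distance scores s_i to [0, 1] via min-max normalization."""
    s_min, s_max = min(scores), max(scores)
    return [(s - s_min) / (s_max - s_min) for s in scores]

# Distances from a test vector x to its k = 4 nearest neighbors
normalized = min_max_normalize([3.0, 1.0, 2.0, 5.0])
```

The nearest neighbor maps to 0 and the farthest to 1, so the normalized scores are comparable across atlases of different sizes.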
Since the number of face pictures in each person's atlas varies across video shots, to reduce the effect that the atlas size has on recognition, score selection employs the majority vote rule (Majority Vote Rule):
where R is the set of reference persons, w is the weight of each frame image, P is the person of each frame, Z is a specific frame in the whole video image, and i, j, k are the corresponding transformation parameters;
The classification results of the single face pictures I_k,i obtained by the above process are fused using the sum rule (Sum-rule), establishing the K-NN model of person P_i;
S3.4 Target recognition:
Each frame is assigned a weight using a distance-to-model (DTM) and distance-to-second-closest (DT2ND) weighting scheme, i.e., for each frame ω = ω_DTM × ω_DT2ND;
For DTM, the weights of the frames I_k,i (i = 1, 2, …, k) corresponding to the nearest class c are:
where μ is the expectation of the normal distribution of the face pixels in the frame image, f is the pixel value of an arbitrary point in the frame image, σ is the standard deviation of the normal distribution, and e is the natural constant;
For DT2ND, the weights of I_k,i (i = 1, 2, …, k) are:
where λ is a statistical exponential parameter of the face pixels in the frame image, namely the number of times the frame's pixels fall within the range of the face image in that frame;
S3.5 Manifold enhancement:
Manifold learning re-ranks the K-NN classification results. The face atlas of person P_i is abstracted into a manifold M, and the similarity between persons P_i and P_j is measured by the manifold-to-manifold distance (Manifold to Manifold Distance, MMD). According to the MMD, a set R of reference-person atlases that are similar to the test person P_i and similar to one another is selected, and finally the K-NN results are re-ranked based on the average distance to R. The selection of the reference set R iterates over the following equation, each round selecting the person closest to both the test person P_i and the reference set R, i.e., selecting the person (atlas X) such that:
where T is the test atlas, the referenced set is that of the current reference persons (atlases), D(T, X) is the MMD between two atlases, and α is a weight; X is added to the set, and the iteration stops when the set reaches the expected cardinality or the distance D exceeds a threshold;
The MMD is calculated as follows:
where C_i and C_j' are local linear subspaces of the manifolds, and

d(C_i, C_j') = (1 - α)·d_E(C_i, C_j) + α·d_V(C_i, C_j)

where d_E(C_i, C_j) describes the appearance similarity between the persons of the two atlases and d_V(C_i, C_j) describes the similarity between the variation models of the two atlases.
Compared with the prior art, the invention has the following advantages:
1. The present invention transforms the image from RGB space into HSV space. HSV space is not only better suited than RGB space for describing human color perception, but also effectively separates hue, saturation, and brightness, making hue approximately orthogonal to color saturation and brightness, which greatly facilitates the subsequent enhancement of the image;
2. The present invention improves the brightness of the low-illumination image through a nonlinear transformation, so that the brightness of regions that appear darker than the rest of the image due to insufficient illumination is lifted quickly, improving the visual effect of the low-illumination image;
3. The present invention introduces fuzzy theory: a membership function transforms the image from the spatial domain into the fuzzy domain, enhancement is performed on the fuzzy feature plane, and the contrast of the image is increased;
4. The face detector trained by the present invention has millisecond-level response and can fully meet the application demand of fast real-time video face detection. In terms of performance, although face detection results are affected by factors such as illumination, pose, and occlusion, in practice good detection results are obtained when the image background is simple, with an extremely low false-alarm rate and no missed detections.
Description of the drawings
Fig. 1 is the overall algorithm flowchart of this embodiment;
Fig. 2 is the flowchart of the Adaboost classifier training and detection process of this embodiment;
Fig. 3 is the low-illumination image preprocessing flowchart of this embodiment;
Fig. 4 shows the rectangular-feature counts in other sub-windows of this embodiment;
Fig. 5 shows the 5 simplest rectangular feature templates of this embodiment;
Fig. 6 is the cascade-classifier detection schematic of this embodiment.
Specific embodiment:
Taking the video face recognition method based on Adaboost and feature fusion as an example, the present invention is described in further detail with reference to the accompanying drawings; the method specifically includes the following steps.
S1: Low-illumination face image preprocessing based on fuzzy theory.
In the present invention, image preprocessing transforms the image color space from RGB, whose color channels are closely correlated, into HSV space, and uses a nonlinear function to lift the brightness of the low-illumination image.
S1.1 HSV color-space transformation.
Images captured by current imaging devices are mostly RGB images, in which various colors are obtained by weighted superposition of the three color components red (R), green (G), and blue (B). RGB is easily affected by illumination changes, and the three primary-color components are highly correlated: changing the color information of one channel often affects the information of the other channels, so applying nonlinear processing directly to each color component of the image distorts the image's colors. HSV space is not only better suited than RGB space for describing human color perception, but also effectively separates hue, saturation, and brightness, making hue approximately orthogonal to color saturation and brightness, which greatly facilitates the subsequent enhancement of the image.
During illumination compensation, the RGB image is transformed into HSV space and the luminance component V is enhanced while the hue H and saturation S are kept unchanged; the resulting luminance component V is then combined with the hue H and saturation component S, and finally an inverse transformation produces the new image. The conversion expression from RGB space to HSV space is as follows:
V = max(R, G, B) (3)
where R, G, B are the values of the normalized RGB space. The H component ranges over [0, 360), and the value ranges of the S and V components are (0, 1] and [0, 1], respectively.
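As an illustration of the RGB-to-HSV conversion with V = max(R, G, B), a sketch using Python's standard colorsys module (note that colorsys works on components in [0, 1] and returns H scaled to [0, 1), so it is rescaled to degrees here):

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """RGB components in [0, 1] -> (H in degrees, S, V), with V = max(R, G, B)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0, s, v

h, s, v = rgb_to_hsv_degrees(0.2, 0.4, 0.6)  # a mid-blue pixel
```

For this pixel V = 0.6, matching the formula above; only V is then enhanced while H and S are carried through unchanged.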
Let i be the integer quotient of H divided by 60 and f the fractional remainder of that division. Let P = V(1 - S), Q = V(1 - S·f), T = V[1 - S(1 - f)]; the conversion expression from HSV space back to RGB space is then as follows:
S1.2 Nonlinear transformation.
The present invention improves the brightness of the low-illumination image through a nonlinear transformation, so that the brightness of regions that appear darker than the rest of the image due to insufficient illumination is lifted quickly, improving the visual effect of the low-illumination image. The nonlinear transformation applies a smooth mapping curve so that the grayscale variation of the processed image is smoother. Considering that the process by which the human eye receives an image signal and the brain forms an image contains an approximately logarithmic link, the commonly used nonlinear transformation is the logarithmic transformation.
In the logarithmic transformation, the gray value of each output pixel is a logarithmic function of the corresponding input pixel, so the grayscale relation between the output image g(x, y) and the input image f(x, y) is:
g(x, y) = log[f(x, y)]
The logarithmic transformation compresses the contrast of the higher-gray regions of the original image while expanding the lower gray values. In general, to make the dynamic range of the transformation more flexible, i.e., to change the rate of change or the starting position of the curve, some user-defined parameters are added:
g(x, y) = a + ln(f(x, y) + 1) / (b·ln c)
where a, b, c are adjustable parameters that can be tuned manually. The term f(x, y) + 1 avoids taking the logarithm of 0, ensuring the argument is greater than or equal to 1. When f(x, y) = 0, ln(f(x, y) + 1) = 0 and g = a, so a is the intercept on the y-axis and determines the starting position of the transformation function, while the two parameters b and c determine its rate of change.
The logarithmic function is generally applied to dark images to expand low-gray regions. The logarithmic function used in the present invention is:
f' = lg(V·d + 1)
where f' is the image after logarithmic transformation, V is the luminance component of the HSV color space, and d is a user-defined parameter; adding 1 avoids taking the logarithm of 0 when V equals 0. The brightness of the image can be lifted to different degrees by adjusting the parameter d. Experiments confirm that increasing d raises the brightness of the low-illumination image and thereby improves its visual effect. For a low-illumination image, the brightness must be lifted substantially, so d should take a larger value; but an excessive d makes the image too bright and may lose a lot of image information. In general, a value of d between 10 and 100 is best; for a specific image, a more suitable value can be obtained through repeated experiments.
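A minimal sketch of the logarithmic luminance lift f' = lg(V·d + 1) (d = 50 is an assumed midpoint of the 10 to 100 range above, and the division by lg(d + 1), which rescales the result back into [0, 1], is an assumption of this sketch, not stated in the text):

```python
import math

def log_brighten(v, d=50):
    """Lift luminance V in [0, 1] via f' = lg(V*d + 1).

    d = 50 is an assumed midpoint of the suggested 10..100 range; dividing
    by lg(d + 1) (an assumption of this sketch) keeps the output in [0, 1].
    """
    return math.log10(v * d + 1) / math.log10(d + 1)

bright = log_brighten(0.1)  # a dark pixel is lifted substantially
```

Dark values rise quickly while bright values are compressed, which is exactly the contrast-reducing side effect that motivates the fuzzy-domain enhancement in S1.3.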
The nonlinear transformation significantly improves the image brightness but reduces the image contrast, so the image does not reach the desired visual effect. Fuzzy theory is therefore introduced: a membership function transforms the image from the spatial domain into the fuzzy domain, and enhancement is performed on the fuzzy feature domain to increase the contrast of the image.
S1.3: Image enhancement based on fuzzy theory: a membership function transforms the image from the spatial domain into the fuzzy domain, enhancement is then performed on the fuzzy feature domain, and the contrast of the image is increased.
S1.3.1 Transformation from the spatial domain to the fuzzy domain.
A sinusoidal membership function maps the image from the spatial domain to the fuzzy domain, yielding the fuzzy feature domain. The sinusoidal membership function is defined as follows:
where f(i, j) is the gray level of pixel (i, j) in the image to be processed, u_ij is the membership degree of the gray level x_ij of pixel (i, j) relative to f_max, f_max is the maximum gray level of the image to be processed, and f_min is correspondingly its minimum gray level. k is a user-defined parameter; adjusting k changes the value of u_ij, producing different fuzzy feature domains for different images and thus adapting to their enhancement requirements. The value of k is defined as follows:
k = mean(f) / (f_max - f_min)
where mean(f) is the average gray value of the image. This makes k intrinsically dependent on the image: k changes with images of different brightness, which increases the flexibility of the algorithm and adapts the enhancement to images of different luminance.
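Since the exact membership formula is not reproduced here, the following sketch uses an assumed sinusoidal form, sin(pi/2 · (f - f_min)/(f_max - f_min))^k, together with the image-dependent k defined above, purely to illustrate the mapping into the fuzzy domain:

```python
import math

def fuzzy_domain(pixels):
    """Map gray levels to membership degrees u in [0, 1].

    k = mean(f) / (f_max - f_min) as defined above; the sinusoidal form
    used below is an assumed stand-in for the patent's membership function.
    """
    f_max, f_min = max(pixels), min(pixels)
    k = (sum(pixels) / len(pixels)) / (f_max - f_min)
    return [math.sin(math.pi / 2 * (p - f_min) / (f_max - f_min)) ** k
            for p in pixels]

u = fuzzy_domain([30, 60, 90, 120, 200])
```

The darkest pixel maps to membership 0 and the brightest to 1, and a brighter image yields a larger k, flattening the curve for already-bright inputs.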
S1.3.2 Fuzzy-domain image enhancement.
The membership function u_ij is transformed using the above sinusoidal membership function and the formula for k to finally obtain a new fuzzy feature domain.
A new fuzzy enhancement operator is defined:
As the number of iterations increases, the image contrast strengthens accordingly.
Here T serves as the crossover point of the image enhancement; its value differs for images of different brightness, and in the present invention T takes the average of the image gray levels.
S1.3.3 Inverse transformation of the fuzzy image.
From the new fuzzy feature domain, a single inverse transformation maps the image from the fuzzy domain back to gray space, finally yielding the enhanced image:
where G^(-1) is the inverse transformation and f'(i, j) is the gray level of pixel (i, j) in the enhanced image. After the inverse fuzzy transformation, the image to be processed is filtered and then output.
S1.3.4: Filtering and denoising of the image: denoising is performed with a bilateral filter.
An existing general-purpose bilateral filter performs further filtering and noise reduction on the output image, and finally a high-quality, clear low-illumination image is output. The processed low-illumination image is then input to the trainer for face detection and recognition.
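The edge-preserving behavior of a bilateral filter, which weights each neighbor by both spatial proximity and gray-level similarity, can be sketched in pure Python as follows (in practice an existing implementation such as OpenCV's cv2.bilateralFilter would be used; the parameter values here are illustrative):

```python
import math

def bilateral_filter(img, sigma_s=1.0, sigma_r=25.0, radius=1):
    """Edge-preserving smoothing of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial closeness weight
                        g_s = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # gray-level similarity weight (suppresses cross-edge mixing)
                        g_r = math.exp(-((img[ny][nx] - img[y][x]) ** 2)
                                       / (2 * sigma_r ** 2))
                        acc += g_s * g_r * img[ny][nx]
                        wsum += g_s * g_r
            out[y][x] = acc / wsum
    return out

flat = bilateral_filter([[100.0] * 4 for _ in range(4)])  # uniform image unchanged
```

Unlike a plain Gaussian blur, pixels on opposite sides of a strong edge receive near-zero similarity weight, so noise is smoothed while facial contours survive.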
S2. Training of the detector in the Adaboost-based face detection algorithm.
The core idea of Haar-Adaboost is to train several different weak classifiers using a pre-prepared training set of positive and negative samples, then combine these weak classifiers into one strong classifier, and finally use the resulting strong classifier to search the region of interest for the face target. Positive and negative samples refer, respectively, to pictures containing only a single face and non-face pictures containing other complex backgrounds; all sample images must be normalized during classifier training. Fig. 2 shows the Haar-Adaboost classifier training and face detection process. The training process comprises the core links of rectangular-feature computation, weak-classifier training, strong-classifier training, and cascade-classifier construction.
Feature selection and feature-value computation are the two key factors affecting the efficiency of Haar-Adaboost face detection. This algorithm uses rectangular features as the face feature vector. The drawback is that the number of rectangular features is enormous: a 24 × 24 detection region already contains more than 160,000 rectangular features, and if certain derived rectangular features are adopted, the number to be handled rises to the order of tens of millions. Selecting a few effective rectangular features from such a huge number via the Adaboost strategy consumes substantial computing resources. Composing feature templates from two or more congruent adjacent rectangles reduces the computational complexity: each template contains black and white rectangles, with the upper-left corner white and the black and white rectangles then alternating in turn, and the feature value of a template is obtained by subtracting the sum of the black-rectangle pixels from the sum of the white-rectangle pixels. Fig. 4 illustrates the rectangular-feature counts in other sub-windows.
The Adaboost algorithm selects the 5 simplest rectangular feature templates, shown in Fig. 5, for training. Although training with these features is not fast, they offer very high detection efficiency. After the features are selected, the feature values are computed with an integral image. The integral image allows every feature value to be computed in constant time after a single pass over the image, greatly accelerating training and detection. The integral image is similar in principle to integration: a simple computation over each pixel of an image yields its "integral image", whose advantage is that it can compute different features at multiple scales in the same amount of time, and this is exactly the key to its detection-speed improvement.
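The integral image that makes these constant-time rectangle sums possible can be sketched as follows (function and variable names are illustrative):

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): 4 lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
# A two-rectangle Haar-like feature: white column minus black column
haar = rect_sum(ii, 0, 0, 1, 3) - rect_sum(ii, 1, 0, 1, 3)
```

After the single O(hw) pass that builds ii, any rectangle sum, and hence any rectangular feature value, costs only four table lookups regardless of its size.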
Face targets often have a certain number of rectangular features whose responses differ significantly between positive and negative samples, so the classifier can be constructed in a cascaded manner. In the waterfall-type cascade classifier used by Adaboost, a window to be detected enters the next layer only if it passes the strong classifier of the current layer; simple judgments in the first few layers thus rapidly eliminate a large number of candidate windows, greatly reducing the average detection cost. Fig. 6 shows the cascade-classifier detection schematic.
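The waterfall rejection logic of the cascade can be sketched as a chain of stages, each a (score function, threshold) pair, where a window is accepted only if it passes every stage (a schematic toy illustration, not the trained detector itself; the stage functions and thresholds are invented for the example):

```python
def cascade_classify(window, stages):
    """Return True iff the window passes every stage's strong classifier."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # rejected early; later, costlier stages never run
    return True

# Toy stages over a flat list of pixel values (thresholds illustrative)
stages = [
    (lambda w: sum(w) / len(w), 50),  # stage 1: cheap mean-brightness test
    (lambda w: max(w) - min(w), 30),  # stage 2: stricter contrast test
]
accepted = cascade_classify([60, 80, 100, 120], stages)  # passes both stages
rejected = cascade_classify([10, 12, 11, 13], stages)    # rejected at stage 1
```

Because most non-face windows fail an early cheap stage, the expensive later stages run on only a tiny fraction of windows, which is what keeps the average per-window cost low.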
The face detector trained by the above steps has millisecond-level response and can fully meet the application demand of fast real-time video face detection. In terms of performance, although face detection results are affected by factors such as illumination, pose, and occlusion, in practice good detection results are obtained when the image background is simple, with an extremely low false-alarm rate and no missed detections.
S3. Video face recognition based on feature fusion.
Video face recognition operates on the face atlases obtained from the video by the Adaboost-based detection and localization. Taking persons P_i and P_j as examples, with the shot as the unit, they are denoted:
P_i: X = [I_1,i, I_2,i, …, I_k,i]
where I_k,i is a face picture of P_i and the shot contains K_i faces. Likewise:
P_j: X = [I_1,j, I_2,j, …, I_k,j]
where I_k,j is a face picture of P_j and the shot contains K_j faces.
The face recognition problem in the video environment is described as follows: given several shots containing persons P_i and P_j, a shot (or the face atlas X corresponding to a shot) is input, and the system identifies person P_i.
Taking any face picture I_{k,i} as an example, the facial feature points are extracted with any existing efficient facial feature-point extraction algorithm, the face is aligned, and the face is divided by the feature-point regions into five parts: left eye, right eye, forehead (between the eyebrows), nose and mouth. In the algorithm of the invention, the eye and mouth regions are significantly affected by expression, pose and so on and vary sharply, but their contour features are salient and little disturbed by expression, so they are described with DCT features; the forehead (between the eyebrows) and nose regions are less affected by expression and their rigid features are distinct, so the algorithm of the invention describes them with SIFT features. The details are as follows:
S3.1: DCT feature generation.
The discrete cosine transform (Discrete Cosine Transform, DCT) has good compression and representation properties and is widely used in signal processing, for example in image compression. The two-dimensional DCT of an N × N image f(x, y) is defined as:

C(u, v) = α(u) α(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

where x, y are the horizontal and vertical coordinates of each pixel of the N × N image, and for u, v = 0, 1, …, N−1:

α(u) = √(1/N) for u = 0, and α(u) = √(2/N) for u ≠ 0.

The two-dimensional inverse DCT is defined as:

f(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} α(u) α(v) C(u, v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]
The algorithm of the invention covers the right-eye and mouth regions of the face image with a grid and splits the grid into non-overlapping sub-blocks. A discrete cosine transform (DCT) is applied to each sub-block, and the resulting DCT coefficients are ordered by zig-zag scanning. Since the low-frequency DCT coefficients reflect the image contour and the high-frequency coefficients reflect image detail, the first coefficient after ordering is discarded (reducing the influence of illumination) and the next several coefficients are taken, in order, as the block feature.
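A minimal numpy sketch of this block-DCT feature follows; the block size, the number of retained coefficients and the helper names are illustrative assumptions, since the patent does not fix them:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix M, so that C = M @ block @ M.T
    is the 2-D DCT of an n x n block."""
    j = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    M[0, :] = np.sqrt(1.0 / n)
    return M

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_feature(block, n_coeffs=9):
    """2-D DCT the sub-block, order coefficients by zig-zag scan, drop
    the first (DC) coefficient to reduce illumination influence, and
    keep the next n_coeffs coefficients as the block feature."""
    n = block.shape[0]
    M = dct_matrix(n)
    C = M @ block @ M.T
    zz = np.array([C[r, c] for r, c in zigzag_indices(n)])
    return zz[1:1 + n_coeffs]
```

Dropping the zig-zag-first (DC) coefficient removes the block's mean intensity, which is exactly the component most affected by overall illumination.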
S3.2: SIFT feature generation.
After image preprocessing, scale-space extremum detection is performed first. It has been proved that the Gaussian convolution kernel is the only kernel that can realize scale transformation, and that the Gaussian kernel is the only linear kernel. The two-dimensional Gaussian function is defined as:

G(x, y, σ) = (1 / 2πσ²) e^{−(x² + y²) / 2σ²}

where σ is the scale parameter (the standard deviation of the Gaussian normal distribution).

For a two-dimensional image I(x, y), its scale-space representation L(x, y, σ) at different scales is obtained by convolving I(x, y) with the Gaussian kernel G(x, y, σ):

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)

where (x, y) is the pixel position in the image, σ is the scale-space factor, and L(x, y, σ) is the scale space of the image.
A Gaussian pyramid and a DOG (Difference of Gaussian) pyramid then need to be built, and extremum detection is carried out to preliminarily determine the positions and scales of the feature points. The DOG operator is defined as the difference of Gaussian kernels at two adjacent scales:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ)

When detecting scale-space extrema, a given pixel is compared with its surrounding neighbours at the same scale and with the neighbours at the corresponding positions of the adjacent scales, so as to guarantee that local extrema are detected in both scale space and the two-dimensional image space. A three-dimensional quadratic function is then fitted around each local extremum to locate the feature-point position precisely.
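The construction of L(x, y, σ) and the DOG response can be sketched with a separable Gaussian blur in numpy; the kernel radius and the scale ratio k = 1.6 are conventional choices, not values fixed by the patent:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel sampled on integer offsets."""
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def gaussian_blur(img, sigma):
    """Separable convolution with G(x, y, sigma): blur rows, then columns."""
    g = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, rows)

def dog(img, sigma, k=1.6):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

A DOG pyramid stacks such responses over a ladder of σ values; candidate key points are then the pixels that are extrema among their 26 neighbours in the 3 × 3 × 3 scale-space neighbourhood, as described above.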
Using the gradient-direction distribution of the pixels in a key point's neighbourhood, a direction parameter is assigned to each key point, giving the operator rotational invariance. The gradient magnitude m(x, y) and direction θ(x, y) at (x, y) are:

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]

θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]

where the scale used for L is the scale at which each key point lies. At the nose and forehead regions, this algorithm covers the corresponding area with a grid, splits the grid into non-overlapping sub-blocks, and generates the corresponding local SIFT feature for each block.
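The magnitude and orientation formulas translate directly into code; this numpy sketch assumes L is the Gaussian-smoothed image at the key point's own scale, sampled with central differences:

```python
import numpy as np

def keypoint_gradient(L, x, y):
    """m(x, y) and theta(x, y) from central differences of the
    scale-space image L at the key point's own scale."""
    dx = L[x + 1, y] - L[x - 1, y]
    dy = L[x, y + 1] - L[x, y - 1]
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```

In a full SIFT implementation these per-pixel gradients are accumulated into orientation histograms over the key point's neighbourhood to assign its dominant direction.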
S3.3: Modelling and identification: building the K-NN model.
Taking the identification of person P_i as an example, P_i: X = [I_{1,i}, I_{2,i}, …, I_{K_i,i}] is the face atlas of person P_i within one shot, the shot containing K_i faces; I_{k,i} is one face picture of atlas X and is expressed by the corresponding five local feature vectors. These five local feature vectors are concatenated to form one feature description vector x for I_{k,i}. The classification result of the individual face picture I_{k,i} is then obtained by the following procedure.
Let S_i, i = 1, 2, …, k be the k nearest neighbours of the test vector x, with scores s_i = d(x, S_i). The scores are normalized to [0, 1] using min-max normalization:

s̃_i = (s_max − s_i) / (s_max − s_min)

where s_max is the distance of the point farthest from the test vector x among the k nearest points, and s_min is the distance of the point nearest to the test vector x among the k nearest points.
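A short numpy sketch of this per-frame scoring step follows. Mapping the nearest neighbour to score 1 and the farthest to score 0 is an assumption, since the patent only states that the scores are min-max normalized to [0, 1]:

```python
import numpy as np

def minmax_scores(x, neighbours):
    """Distances from test vector x to its k nearest neighbours,
    min-max normalized to [0, 1]; the closest neighbour receives 1
    and the farthest 0 (an assumed orientation of the scale)."""
    s = np.array([np.linalg.norm(x - n) for n in neighbours])
    return (s.max() - s) / (s.max() - s.min())
```

The normalization makes scores comparable across frames whose absolute feature distances differ, which matters for the fusion step that follows.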
Because the number of face pictures of each person varies from shot to shot, majority voting (Majority Vote Rule) is used in the selection of the score, in order to reduce the influence of the atlas size on identification:

assign Z → ω_j, if Σ_{i=1}^{R} Δ_{ji} = max_k Σ_{i=1}^{R} Δ_{ki}

where Δ_{ji} takes the value 1 when frame i is assigned to class ω_j and 0 otherwise; R is the reference person (atlas) set, ω is the weight of each frame image, P is the person of each frame, Z is a specific frame of the whole video image, and i, j, k are the corresponding index parameters.
Using the sum rule (Sum rule), the classification results of the individual face pictures I_{k,i} obtained by the above procedure are fused to build the K-NN model of person P_i; during model building, manifold distance is introduced to re-rank and enhance the nearest neighbours, and the identification of person identity P_i is finally realized with K-NN. Specifically, the candidate matching persons of P_i are first ranked with the Sum-rule formula, i.e. by the weighted sum of the normalized per-frame scores.

The manifold distance is then reintroduced: the nearest candidate matching persons are re-ranked and enhanced according to the manifold distance between atlases, and recognition is finally completed based on the K-NN model.
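Since the patent omits the explicit Sum-rule formula, the sketch below takes it to be a weighted sum of the per-frame normalized scores (with per-frame weights as in S3.4); the data layout is an illustrative assumption:

```python
import numpy as np

def sum_rule_rank(frame_scores, frame_weights):
    """Fuse per-frame classification scores with the sum rule:
    frame_scores[f][p] is frame f's normalized score for candidate
    person p; candidates are ranked by the weighted column sums."""
    S = np.asarray(frame_scores, dtype=float)
    w = np.asarray(frame_weights, dtype=float)[:, None]
    fused = (w * S).sum(axis=0)
    return fused, np.argsort(-fused)   # best-scoring candidate first
```

The returned ranking is what the manifold-distance step of S3.5 subsequently re-orders.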
S3.4 Target identification.
Considering that not every frame in a video environment is suitable for face recognition, and that low resolution, heavy occlusion and misalignment can all degrade frame quality and hence recognition accuracy, this algorithm uses a distance-to-model (DTM) and distance-to-second-closest (DT2ND) weighting scheme, assigning each frame the weight ω = ω_DTM × ω_DT2ND.
For DTM, with c the nearest class corresponding to frame I_{k,i}, the weights for frames f_i, i = 1, 2, …, k are:

ω_DTM(f_i) = 1, if d(f_i, c) < μ; ω_DTM(f_i) = e^{−(d(f_i, c) − μ)² / (2σ²)}, otherwise

where μ is the expectation of the normal distribution of face pixels in the frame image, f is the pixel value at an arbitrary point in the frame image, σ is the standard deviation of the normal distribution, and e is the natural constant.
For DT2ND, the weights for frames I_{k,i}, i = 1, 2, …, k are:

ω_DT2ND(f_i) = ε(Δ(f_i)) = 1 − e^{−λΔ(f_i)}

where λ is the statistical exponential parameter of the face pixels in a frame image, namely the number of times the frame's pixels fall within the face-image range of a frame.
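The two weights combine multiplicatively per frame; a direct transcription follows, with all parameter values illustrative:

```python
import numpy as np

def frame_weight(d_to_model, mu, sigma, delta, lam):
    """Per-frame weight omega = omega_DTM * omega_DT2ND:
    omega_DTM is 1 for frames closer than mu to the nearest class and
    decays as a Gaussian beyond it; omega_DT2ND = 1 - exp(-lam * delta)
    grows with the margin delta to the second-closest class."""
    if d_to_model < mu:
        w_dtm = 1.0
    else:
        w_dtm = np.exp(-(d_to_model - mu) ** 2 / (2 * sigma ** 2))
    w_dt2nd = 1.0 - np.exp(-lam * delta)
    return w_dtm * w_dt2nd
```

Frames that sit close to their nearest class and far from the runner-up thus receive weights near 1, while ambiguous or poor-quality frames are down-weighted in the fusion.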
S3.5 Manifold enhancement.
Relying merely on local features loses the global information of the face, such as its structural information; at the same time, traditional linear-space processing methods cannot effectively mine the nonlinear structure and intrinsic information of the data. This algorithm therefore uses manifold learning to re-rank and enhance the K-NN classification results: the face atlas of person P_i is abstracted as a manifold M; the similarity of persons P_i and P_j is measured with the manifold distance (Manifold to Manifold Distance, MMD); a set R of reference persons (atlases) that are similar to the test person P_i and similar to each other is selected according to the MMD; and the K-NN results are finally re-ranked based on the average distance to R.
The reference set R is selected by iterating the following equation, each round selecting a person close both to the test person P_i and to the reference set R; that is, the person (atlas X) is selected that minimizes the value of D:

D = d(T, X) + α · (1/|R̃|) Σ_i d(R_i, X)

where T is the test task (atlas); R̃ is the current set of reference persons (atlases); d(T, X) is the MMD between two atlases; and α is a weight. X is added to R̃, and the iteration stops when R̃ reaches the expected cardinality or the distance D exceeds a threshold. The formula by which this algorithm computes the MMD is as follows:
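The greedy selection of R can be sketched as follows, given a precomputed matrix of pairwise MMD values; the stopping parameters are illustrative:

```python
import numpy as np

def select_reference_set(test_idx, mmd, alpha, max_size, d_threshold):
    """Greedy selection of the reference set R: each round adds the
    atlas X minimizing D = d(T, X) + alpha * mean_i d(R_i, X); stops
    when R reaches max_size or the best D exceeds d_threshold."""
    n = mmd.shape[0]
    candidates = set(range(n)) - {test_idx}
    R = []
    while candidates and len(R) < max_size:
        best_x, best_d = None, np.inf
        for x in candidates:
            d = mmd[test_idx, x]
            if R:  # closeness to the already-selected references
                d += alpha * np.mean([mmd[r, x] for r in R])
            if d < best_d:
                best_x, best_d = x, d
        if best_d > d_threshold:
            break
        R.append(best_x)
        candidates.remove(best_x)
    return R
```

Each round balances similarity to the test atlas against mutual similarity inside R, so the set grows toward a coherent neighbourhood of the test person rather than a collection of isolated near-matches.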
d(M_1, M_2) = min_{C_i ∈ M_1} d(C_i, M_2) = min_{C_i ∈ M_1} min_{C_j' ∈ M_2} d(C_i, C_j')

where C_i, C_j' are local linear subspaces of the manifolds, and

d(C_i, C_j') = (1 − α) d_E(C_i, C_j') + α · d_V(C_i, C_j')

where d_E(C_i, C_j') describes the appearance similarity between the two atlas persons, and d_V(C_i, C_j') describes the similarity between the two atlas variation models.
The method proposed in the present invention can in practice be embedded in an FPGA implementation and applied in cameras or camera surveillance systems with a real-time low-illumination face recognition function.
Those skilled in the art will appreciate that the scope of the present invention is not restricted to the examples discussed above, and that changes and modifications may be made to them without departing from the scope of the present invention as defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the description, such illustration and description are explanatory or schematic only, and not restrictive; the present invention is not limited to the disclosed embodiments.

By studying the drawings, the description and the claims, those skilled in the art can, when practising the invention, understand and realize variations of the disclosed embodiments. In the claims, the term "comprising" does not exclude other steps or elements, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope of the invention.

Claims (5)

1. A method for detecting and recognizing human faces in video under low-light conditions, characterized by comprising the following steps:
S1, low-illumination face image preprocessing based on fuzzy theory;
S1.1, HSV colour-space transformation: the acquired RGB image is transformed into the HSV colour space;
S1.2, nonlinear transformation: a smooth mapping curve is applied so that the grey-level variation of the processed image is smoother;
S1.3, image enhancement based on fuzzy theory: the image is transformed from the spatial domain into the fuzzy domain using a membership function, and enhancement is then performed on the fuzzy feature domain to increase the contrast of the image;
S2, training of the detector in the Adaboost-based face detection algorithm: several different weak classifiers are trained with a pre-prepared positive and negative training sample set, the several different weak classifiers are then combined into one strong classifier, and the strong classifier finally searches the region of interest for face targets, the positive and negative samples referring respectively to pictures containing only one face and to non-face pictures containing other complex backgrounds;
S3, video face recognition based on feature fusion.
2. The method for detecting and recognizing human faces in video under low-light conditions according to claim 1, characterized in that step S1.2 comprises: the nonlinear transformation adopts a logarithmic transformation, in which the grey value of an output pixel is in logarithmic relation to that of the input pixel; the logarithmic function adopted by the logarithmic transformation is:

f′ = lg(Vd + 1)

where f′ is the image after the logarithmic transformation, V is the luminance component of the HSV colour space, and d is a user-defined parameter.
3. The method for detecting and recognizing human faces in video under low-light conditions according to claim 1, characterized in that step S1.3 specifically comprises the following steps:
S1.3.1, transformation from the spatial domain to the fuzzy domain: the image is mapped from the spatial domain to the fuzzy domain with a sinusoidal membership function, giving the fuzzy feature domain;
the sinusoidal membership function is defined as:

u_{ij} = [sin( (f(i, j) − f_min) / (f_max − f_min) · π/2 )]^k

where f(i, j) is the grey level of pixel (i, j) in the image to be processed, u_{ij} is the degree of membership of the grey level x_{ij} of pixel (i, j) relative to f_max, f_max is the maximum grey level of the image to be processed, f_min is its minimum grey level, and k is an adjustable parameter whose value is defined as:

k = mean(f) / (f_max − f_min)

where mean(f) is the average grey value of the image;
a transformation with the sinusoidal membership function and the k-value formula finally yields a new fuzzy feature domain;
S1.3.2, fuzzy-domain image enhancement;
a new fuzzy enhancement operator is defined:

u′_{ij} = 2(u_{ij})², for 0 ≤ u_{ij} ≤ T; u′_{ij} = 1 − 2(1 − u_{ij})², for T < u_{ij} ≤ 1

where T serves as the crossover point of the image enhancement; as the number of iterations increases, the image contrast increases accordingly;
S1.3.3, inverse transformation of the fuzzified image;
the new fuzzy feature domain is mapped back from the fuzzy domain to grey-level space through an inverse transformation, finally yielding the enhanced image:

f′(i, j) = G⁻¹(u′_{ij}) = f_min + (f_max − f_min) · (2/π) · arcsin( (u′_{ij})^{1/k} )

where G⁻¹ is the inverse transformation of the image and f′(i, j) is the grey level of pixel (i, j) of the enhanced image;
S1.3.4, filtering and denoising of the output image f′(i, j): the output image is further filtered and denoised with a bilateral filter, and a high-quality, clear low-illumination image is finally output.
4. The method for detecting and recognizing human faces in video under low-light conditions according to claim 1, characterized in that step S2 specifically comprises rectangular-feature calculation, weak-classifier training, strong-classifier training and cascade-classifier construction;
wherein the rectangular-feature calculation uses rectangular features as the face feature vectors: two or more congruent rectangles are adjoined to form a feature template containing black and white rectangles, the upper-left corner being white and the black and white rectangles then alternating in turn; the feature value of the feature template is obtained by subtracting the sum of the pixels of the black rectangles from the sum of the pixels of the white rectangles, and the feature values are computed with the integral image;
the cascade classifier is constructed in a cascaded fashion, the cascade classifier being a waterfall-style cascade classifier.
5. The method for detecting and recognizing human faces in video under low-light conditions according to claim 1, characterized in that step S3 comprises the following steps:
S3.1: discrete cosine transform (Discrete Cosine Transform, DCT) feature generation:
the right-eye and mouth regions of the face image are each covered with a grid and the grid is split into non-overlapping sub-blocks; a discrete cosine transform is applied to each sub-block and the resulting DCT coefficients are ordered by zig-zag scanning; since the low-frequency DCT coefficients reflect the image contour and the high-frequency coefficients reflect image detail, the first coefficient after ordering is discarded and the next several coefficients are taken, in order, as the block feature;
S3.2: SIFT feature generation:
the nose and forehead regions of the face image are each covered with a grid and the grid is split into non-overlapping sub-blocks; the corresponding local SIFT feature is generated for each block;
S3.3: building the K-NN model:
define person P_i, with P_i: X = [I_{1,i}, I_{2,i}, …, I_{K_i,i}] the face atlas of person P_i in one shot, the shot containing K_i faces; I_{k,i} is one face picture of atlas X, expressed by the corresponding five local feature vectors; the five local feature vectors are concatenated to form one feature description vector x for I_{k,i}, and the classification result of the individual face picture I_{k,i} is obtained by the following procedure;
let S_i, i = 1, 2, …, k be the k nearest neighbours of the test vector x, with scores s_i = d(x, S_i); the scores are normalized to [0, 1] with min-max normalization:

s̃_i = (s_max − s_i) / (s_max − s_min)

where s_max is the distance of the point farthest from the test vector x among the k nearest points, and s_min is the distance of the point nearest to the test vector x among the k nearest points;
because the number of face pictures of each person varies from shot to shot, majority voting (Majority Vote Rule) is used in the selection of the score, in order to reduce the influence of the atlas size on identification:

assign Z → ω_j, if Σ_{i=1}^{R} Δ_{ji} = max_k Σ_{i=1}^{R} Δ_{ki}

where R is the reference person set, ω is the weight of each frame image, P is the person of each frame, Z is a specific frame of the whole video image, and i, j, k are the corresponding index parameters;
using the sum rule (Sum-rule), the classification results of the individual face pictures I_{k,i} obtained by the above procedure are fused to build the K-NN model of person P_i;
S3.4, target identification:
a distance-to-model (DTM) and distance-to-second-closest (DT2ND) weighting scheme is used, each frame being assigned the weight ω = ω_DTM × ω_DT2ND;
for DTM, with c the nearest class corresponding to frame I_{k,i}, the weights for frames f_i, i = 1, 2, …, k are:

ω_DTM(f_i) = 1, if d(f_i, c) < μ; ω_DTM(f_i) = e^{−(d(f_i, c) − μ)² / (2σ²)}, otherwise

where μ is the expectation of the normal distribution of face pixels in the frame image, f is the pixel value at an arbitrary point in the frame image, σ is the standard deviation of the normal distribution, and e is the natural constant;
for DT2ND, the weights for frames I_{k,i}, i = 1, 2, …, k are:

ω_DT2ND(f_i) = ε(Δ(f_i)) = 1 − e^{−λΔ(f_i)}

where λ is the statistical exponential parameter of the face pixels in a frame image, namely the number of times the frame's pixels fall within the face-image range of a frame;
S3.5, manifold enhancement:
the K-NN classification results are re-ranked and enhanced with manifold learning: the face atlas of person P_i is abstracted as a manifold M; the similarity of persons P_i and P_j is measured with the manifold distance (Manifold to Manifold Distance (MMD)); a set R of reference person atlases that are similar to the test person P_i and similar to each other is selected according to the MMD; and the K-NN results are finally re-ranked based on the average distance to R; the reference set R is selected by iterating the following equation, each round selecting a person close both to the test person P_i and to the reference set R, i.e. selecting the person atlas X that minimizes:

D = d(T, X) + α · (1/|R̃|) Σ_i d(R_i, X)

where T is the test task (atlas), R̃ is the current set of reference persons (atlases), d(T, X) is the MMD between two atlases, and α is a weight; X is added to R̃, and the iteration stops when R̃ reaches the expected cardinality or the distance D exceeds a threshold;
the MMD is computed as:

d(M_1, M_2) = min_{C_i ∈ M_1} d(C_i, M_2) = min_{C_i ∈ M_1} min_{C_j' ∈ M_2} d(C_i, C_j')

where C_i, C_j' are local linear subspaces of the manifolds, and

d(C_i, C_j') = (1 − α) d_E(C_i, C_j') + α · d_V(C_i, C_j')

where d_E(C_i, C_j') describes the appearance similarity between the two atlas persons, and d_V(C_i, C_j') describes the similarity between the two atlas variation models.
CN201610972195.5A 2016-11-07 2016-11-07 Detection and recognition method of human face in video under low-light conditions Pending CN106446872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610972195.5A CN106446872A (en) 2016-11-07 2016-11-07 Detection and recognition method of human face in video under low-light conditions


Publications (1)

Publication Number Publication Date
CN106446872A true CN106446872A (en) 2017-02-22

Family

ID=58180723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610972195.5A Pending CN106446872A (en) 2016-11-07 2016-11-07 Detection and recognition method of human face in video under low-light conditions

Country Status (1)

Country Link
CN (1) CN106446872A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292848A (en) * 2017-06-29 2017-10-24 华中科技大学鄂州工业技术研究院 A kind of low-light compensation method and system based on logarithmic transformation
CN107633251A (en) * 2017-09-28 2018-01-26 深圳市晟达机械设计有限公司 A kind of vehicle identification system based on image enhaucament
CN107704509A (en) * 2017-08-31 2018-02-16 北京联合大学 A kind of method for reordering for combining stability region and deep learning
CN108319938A (en) * 2017-12-31 2018-07-24 奥瞳***科技有限公司 High quality training data preparation system for high-performance face identification system
CN108563999A (en) * 2018-03-19 2018-09-21 特斯联(北京)科技有限公司 A kind of piece identity's recognition methods and device towards low quality video image
CN108629333A (en) * 2018-05-25 2018-10-09 厦门市美亚柏科信息股份有限公司 A kind of face image processing process of low-light (level), device, equipment and readable medium
CN108664980A (en) * 2018-05-14 2018-10-16 昆明理工大学 A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation
CN109087268A (en) * 2018-08-17 2018-12-25 凌云光技术集团有限责任公司 Image enchancing method under a kind of low light environment
CN109447910A (en) * 2018-10-09 2019-03-08 湖南源信光电科技股份有限公司 A kind of low-luminance color image enchancing method based on fuzzy theory
CN109584423A (en) * 2018-12-13 2019-04-05 佛山单常科技有限公司 A kind of intelligent unlocking system
CN109658627A (en) * 2018-12-13 2019-04-19 深圳桓轩科技有限公司 A kind of Intelligent logistics pickup system based on block chain
CN109766857A (en) * 2019-01-16 2019-05-17 嘉兴学院 A kind of three-dimensional face identification method based on semantic alignment multizone template fusion
CN109918971A (en) * 2017-12-12 2019-06-21 深圳光启合众科技有限公司 Number detection method and device in monitor video
CN109961004A (en) * 2019-01-24 2019-07-02 深圳市梦网百科信息技术有限公司 A kind of polarization light source method for detecting human face and system
CN110516685A (en) * 2019-05-31 2019-11-29 沈阳工业大学 Lenticular opacities degree detecting method based on convolutional neural networks
CN110532939A (en) * 2019-08-27 2019-12-03 南京审计大学 A kind of recognition of face classification method based on fuzzy three-dimensional 2DFDA
CN110782442A (en) * 2019-10-23 2020-02-11 国网陕西省电力公司宝鸡供电公司 Image artificial fuzzy detection method based on multi-domain coupling
CN111382666A (en) * 2018-12-31 2020-07-07 三星电子株式会社 Device and method with user authentication
CN111863232A (en) * 2020-08-06 2020-10-30 罗春华 Remote disease intelligent diagnosis system based on block chain and medical image
CN111931671A (en) * 2020-08-17 2020-11-13 青岛北斗天地科技有限公司 Face recognition method for illumination compensation in underground coal mine adverse light environment
CN112232307A (en) * 2020-11-20 2021-01-15 四川轻化工大学 Method for detecting wearing of safety helmet in night vision environment
CN112233029A (en) * 2020-10-15 2021-01-15 国网电子商务有限公司 Business license image processing method and device
CN112446247A (en) * 2019-08-30 2021-03-05 北京大学 Low-illumination face detection method based on multi-feature fusion and low-illumination face detection network
CN113128511A (en) * 2021-03-31 2021-07-16 武汉钢铁有限公司 Coke tissue identification method and device
CN113689324A (en) * 2021-07-06 2021-11-23 清华大学 Automatic adding and deleting method and device for portrait object based on two classification labels
CN114578807A (en) * 2022-01-05 2022-06-03 北京华如科技股份有限公司 Active target detection and obstacle avoidance method for unmanned target vehicle radar vision fusion

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116756A (en) * 2013-01-23 2013-05-22 北京工商大学 Face detecting and tracking method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
付争方,朱虹: "基于模糊理论的低照度彩色图像增强算法", 《传感器与微***》 *
左登宇: "基于 Adaboost 算法的人脸检测研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
黄存东 等: "基于特征融合和流形增强的视频人脸识别", 《人工智能及识别技术》 *


Similar Documents

Publication Publication Date Title
CN106446872A (en) Detection and recognition method of human face in video under low-light conditions
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN103942577B (en) Based on the personal identification method for establishing sample database and composite character certainly in video monitoring
CN101359365B (en) Iris positioning method based on maximum between-class variance and gray scale information
CN112215180B (en) Living body detection method and device
CN103400110B (en) Abnormal face detecting method before ATM cash dispenser
CN110516576A (en) Near-infrared living body faces recognition methods based on deep neural network
CN109858439A (en) A kind of biopsy method and device based on face
CN106845328B (en) A kind of Intelligent human-face recognition methods and system based on dual camera
CN101739546A (en) Image cross reconstruction-based single-sample registered image face recognition method
CN105956578A (en) Face verification method based on identity document information
CN109685713B (en) Cosmetic simulation control method, device, computer equipment and storage medium
CN105426843B (en) The single-lens lower vena metacarpea of one kind and palmprint image collecting device and image enhancement and dividing method
CN101339607A (en) Human face recognition method and system, human face recognition model training method and system
de Souza et al. On the learning of deep local features for robust face spoofing detection
CN106778474A (en) 3D human body recognition methods and equipment
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN110059546A (en) Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN106485222A (en) A kind of method for detecting human face being layered based on the colour of skin
CN106529494A (en) Human face recognition method based on multi-camera model
CN108629262A (en) Iris identification method and related device
CN108416291A (en) Face datection recognition methods, device and system
CN107798279A (en) Face living body detection method and device
CN109255319A (en) For the recognition of face payment information method for anti-counterfeit of still photo

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170222