Specific embodiments
The invention is further described below with reference to the following examples.
Referring to Fig. 1, a supermarket anti-theft monitoring system comprises cameras 1, a processor 2 and a display terminal 3. The cameras 1 are arranged at each point to be monitored; they capture facial images of the crowd at the monitored point, shoot real-time video, and transmit the facial images and the video signal to the processor 2 in real time. The processor 2 is used to process and recognize the facial images transmitted by the cameras 1 and to judge whether a face in a facial image corresponds to a face in the blacklist database of the processor 2; if it corresponds, the face is determined to be a recognition target, and its corresponding identity information is transmitted to the display terminal 3. The display terminal 3 is arranged at the monitoring backstage and is used to receive the identity information of the recognition target sent by the processor 2 and to display it in real time.
Beneficial effects: (1) The supermarket anti-theft monitoring system of the present invention, based on face recognition, installs cameras at the entrance/exit gates and again in key areas to capture the faces of the crowd entering and leaving. By performing face recognition on the captured images, facial feature data are obtained and compared with the facial data of thieves imported into the blacklist database, so that thieves can be screened out quickly and effectively, and their theft behavior can be prevented and deterred in a timely manner. Compared with traditional supermarket monitoring, the system improves the efficiency and accuracy of thief identification and realizes intelligent anti-theft protection of supermarkets and other commercial buildings; the intelligent face recognition of suspicious persons greatly improves the anti-theft level and efficiency of commercial premises.
(2) An alarm message generation function is added to the processor; the alarm message is output to the display terminal, and a pop-up alarm is shown on the display interface, so that the thief information identified from the monitoring data can be found more quickly, which also serves to deter the thief.
(3) The processor can also simultaneously analyze the video signals transmitted by all cameras, obtain the real-time position and movement trajectory data of the recognition target, realize automatic trajectory tracking, and output the results to the display terminal. After a thief is identified, the display terminal displays the thief's video picture and movement trajectory in real time, further improving efficiency.
Preferably, referring to Fig. 2, the processor 2 comprises a face analysis module 4, a face feature comparison module 5, a blacklist database 6, an alarm module 7 and a video analysis tracking module 8;
The face analysis module 4 is used to analyze the facial images transmitted by the cameras 1 and obtain the face feature vector of each facial image;
The blacklist database 6 is used to store the face feature vectors of theft offenders;
The face feature comparison module 5 is used to compare the extracted face feature vector with the face feature vectors prestored in the blacklist database 6 and to judge whether the face corresponds to a face prestored in the blacklist database 6; if it corresponds, the face is determined to be a recognition target, and its corresponding identity information is transmitted to the display terminal;
The alarm module 7 is used to generate an alarm message after the recognition target is obtained and to output it to the display terminal 3;
The video analysis tracking module 8 is used to analyze the video signals transmitted by all cameras, obtain the real-time position and movement trajectory data of the recognition target, and output them to the display terminal.
Preferably, referring to Fig. 3, the face analysis module 4 comprises a facial image preprocessing submodule 9 and a facial image feature extraction submodule 10. The facial image preprocessing submodule 9 is used to preprocess the facial images transmitted by the cameras 1; the facial image feature extraction submodule 10 is used to extract the face feature vector of the facial image from the preprocessed facial image.
Preferably, the facial image preprocessing submodule 9 comprises a grayscale unit 11, a denoising unit 12 and an enhancement unit 13.
The grayscale unit 11 is used to perform grayscale processing on the facial images transmitted by the cameras 1; the denoising unit 12 is used to remove random noise from the grayscaled facial image; the enhancement unit 13 is used to perform enhancement processing on the denoised facial image.
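The grayscale step performed by unit 11 can be illustrated with a short sketch. The original does not specify a conversion rule; the common ITU-R BT.601 luminance weights are assumed here.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to gray
    levels using the common ITU-R BT.601 luminance weights.  The weight
    choice is an assumption; the patent does not specify one."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

# Usage: a 1x2 image with a red and a green pixel.
gray = to_grayscale([[(255, 0, 0), (0, 255, 0)]])
```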
Preferably, the removal of random noise from the grayscaled facial image is specifically:
(1) performing a K-layer wavelet decomposition on the grayscaled facial image using the wavelet transform to obtain a set of wavelet coefficients;
(2) performing threshold processing on the wavelet coefficients of each decomposition layer using a threshold processing function, where the threshold processing function for the k-th layer wavelet coefficients is as follows:
where b′_{j,k} is the j-th wavelet coefficient of the k-th layer after denoising, b_{j,k} is the j-th wavelet coefficient of the k-th layer before denoising, λ_{1,k} is the set lower threshold of the k-th layer wavelet coefficients, λ_{2,k} is the set upper threshold of the k-th layer wavelet coefficients, with λ_{2,k} = ζλ_{1,k}, ζ being a proportionality coefficient satisfying 0 < ζ < 1; a and η are shape factors, and sgn(r) is the sign function, which takes 1 when r is positive and −1 when r is negative;
(3) reconstructing the denoised wavelet coefficients using the wavelet transform to obtain the denoised facial image.
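The three steps above can be sketched for a one-dimensional signal and a single Haar decomposition level. The patent's exact threshold processing function (with shape factors a and η) is given in its formula and is not reproduced here; the `semisoft_threshold` below is an assumed stand-in that follows the described behavior: coefficients below the lower threshold are zeroed, coefficients above the upper threshold are kept, and those in between are attenuated continuously.

```python
def haar_decompose(signal):
    """One level of the orthonormal Haar wavelet transform
    (signal length must be even)."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def semisoft_threshold(coeff, lo, hi):
    """Assumed stand-in for the patented threshold function: zero below the
    lower threshold, unchanged above the upper threshold, linearly
    attenuated (and continuous) in between."""
    mag = abs(coeff)
    if mag <= lo:
        return 0.0
    if mag >= hi:
        return coeff
    sign = 1.0 if coeff > 0 else -1.0
    return sign * hi * (mag - lo) / (hi - lo)

def denoise(signal, lo=0.5, hi=2.0):
    """Steps (1)-(3): decompose, threshold the detail coefficients,
    reconstruct."""
    approx, detail = haar_decompose(signal)
    detail = [semisoft_threshold(d, lo, hi) for d in detail]
    return haar_reconstruct(approx, detail)
```

With `lo=0` the thresholding is the identity, so `denoise` reduces to perfect reconstruction; with a larger `lo`, small detail coefficients (the assumed noise) are suppressed.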
Beneficial effects: threshold processing is performed on the wavelet coefficients of each decomposition layer using the threshold processing function, and this algorithm can adaptively remove the random noise in the facial image according to the wavelet coefficients of each decomposition layer. In the threshold processing function, a and η are shape factors that control the shape of the function in each interval, i.e., the degree of attenuation. According to the relation between |b_{j,k}| and λ_{1,k}, λ_{2,k}, a different branch of the threshold function is selected for denoising, so that the random noise in the facial image is removed effectively while the useful information in the facial image is retained. At the same time, the threshold processing function is continuous at λ_{1,k} and λ_{2,k}, which effectively avoids additional oscillation in the denoised facial image; near the thresholds the function has a smooth transition band, so that the denoised facial image is closer to the true image, which is conducive to the subsequent accurate identification of the captured persons and improves the recognition accuracy of the supermarket anti-theft system.
In one realizable embodiment, the captured facial image is denoised using the threshold method by setting a fixed upper threshold and a fixed lower threshold.
In a more preferable embodiment, the upper threshold of each decomposition layer is solved, and the denoising of the facial image is thereby realized. The upper threshold can be calculated using the following formula:
where λ_{2,k} is the upper threshold of the k-th layer wavelet coefficients, c_{j,k} is the j-th wavelet coefficient of the k-th layer, J_k is the number of k-th layer wavelet coefficients, c̄ is the average of all wavelet coefficients, and ξ_1, ξ_2 are weight coefficients satisfying ξ_1 + ξ_2 = 1.
Beneficial effects: when solving the upper threshold of each decomposition layer, the average of all wavelet coefficients and the mean of the squares of the k-th layer wavelet coefficients are computed, and the upper threshold of the k-th layer wavelet coefficients is solved from them. This method can adaptively determine the upper and lower thresholds of each layer according to the situation of that decomposition layer, and then select different thresholds per layer to realize denoising. It avoids the situation where a fixed threshold leaves noise wavelet coefficients retained, so that much noise would remain in the denoised facial image, and it also avoids treating useful wavelet coefficients as noise, which would make the denoised target too smooth and lose detail information; selecting different thresholds for denoising also improves the accuracy of the denoising.
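The patent's per-layer threshold formula is given in its drawings and is not reproduced here. As a sketch only, one plausible form consistent with the quantities described above (the root of the mean of the squared k-th layer coefficients, the global coefficient mean, and weights summing to one) is assumed below; the specific combination is an assumption, not the patented formula.

```python
def upper_threshold(layer_coeffs, all_coeffs, xi1=0.6, xi2=0.4):
    """Assumed form of the per-layer upper threshold: a weighted blend of
    the root-mean-square of the layer's coefficients and the magnitude of
    the global coefficient mean, with xi1 + xi2 == 1 as described."""
    assert abs(xi1 + xi2 - 1.0) < 1e-12
    rms = (sum(c * c for c in layer_coeffs) / len(layer_coeffs)) ** 0.5
    mean_all = sum(all_coeffs) / len(all_coeffs)
    return xi1 * rms + xi2 * abs(mean_all)

# Usage: a separate threshold is obtained for each decomposition layer.
layers = [[3.0, -4.0], [0.5, 0.5, -0.5, 0.5]]
flat = [c for layer in layers for c in layer]
thresholds = [upper_threshold(layer, flat) for layer in layers]
```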
Preferably, the enhancement processing of the denoised facial image is specifically:
(1) transforming the denoised facial image from the spatial domain into the fuzzy domain using a customized membership function, and calculating the membership degree of every pixel in the denoised facial image, where the customized membership function is:
where μ_{xy} is the membership degree of the pixel at coordinates (x, y), p_{xy} is the gray value of the pixel at coordinates (x, y) in the denoised facial image, p_T is a preset threshold, and P is the maximum gray value in the denoised facial image;
(2) in the fuzzy domain, modifying the obtained membership degree of each pixel by a nonlinear transformation to obtain the modified membership degree of each pixel, where the customized nonlinear transformation formula is:
where μ′_{xy} is the modified membership degree of the pixel at coordinates (x, y), μ_{xy} is the membership degree of the pixel at coordinates (x, y), and μ_T is the membership degree corresponding to p_T, which can be calculated from the membership function of step (1);
(3) converting the modified membership degree of each pixel back into the gray value of the corresponding pixel to obtain the defuzzified enhanced facial image, where the formula converting the modified membership degree μ′_{xy} of the pixel at coordinates (x, y) into its gray value p′_{xy} is:
where p′_{xy} is the gray value of the pixel at coordinates (x, y) after the inverse transformation.
All pixels in the fuzzy domain are traversed, and the set of all pixels after the inverse transformation constitutes the enhanced facial image.
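The three-step fuzzy-domain enhancement can be sketched as follows. The patent's customized membership and transformation formulas are in its drawings; here a simple linear membership p/P and the classic intensification operator split at μ_T are assumed in their place, which reproduce the described behavior (gray levels below the threshold are weakened, those above are strengthened).

```python
def enhance(gray_image, p_threshold, p_max=255):
    """Fuzzy-domain enhancement sketch.  The membership function (p / p_max)
    and the intensification operator around mu_t are assumptions standing in
    for the patent's customized formulas."""
    mu_t = p_threshold / p_max
    out = []
    for row in gray_image:
        new_row = []
        for p in row:
            mu = p / p_max                      # (1) spatial -> fuzzy domain
            if mu <= mu_t:                      # (2) weaken darker pixels,
                mu2 = mu * mu / mu_t            #     strengthen brighter ones
            else:
                mu2 = 1 - (1 - mu) ** 2 / (1 - mu_t)
            new_row.append(round(mu2 * p_max))  # (3) fuzzy -> gray value
        out.append(new_row)
    return out

# Usage: a dark pixel is pushed darker, a bright pixel brighter.
enhanced = enhance([[64, 192]], p_threshold=128)
```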
Beneficial effects: the denoised facial image is transformed from the spatial domain into the fuzzy domain using the customized membership function, so that in the fuzzy domain the gray value of each pixel is mapped into the interval [0, 1]. By setting the threshold p_T, the denoised facial image is divided into a region of higher gray levels and a region of lower gray levels, and the membership degrees of the pixels in the two regions are obtained with different branches of the membership function. In this way the lower gray levels are weakened, making the gray levels of the corresponding pixels lower, while the higher gray levels are strengthened, making the gray levels of the corresponding pixels higher, thereby achieving image enhancement. By completing the enhancement of the denoised facial image in the fuzzy domain, the denoised facial image is effectively enhanced: while the whole enhanced facial image becomes brighter, the detail features of the facial image are well retained, which is conducive to the subsequent feature extraction and recognition, facilitates the accurate identification of the captured facial images, effectively screens out the identity of thieves, and thus effectively prevents and deters theft.
Preferably, comparing the extracted face feature vector with the face feature vectors prestored in the blacklist database, judging whether the face corresponds to a face prestored in the blacklist database and, if it corresponds, determining the face to be a recognition target and transmitting its corresponding identity information to the display terminal, is specifically: the extracted face feature vector, i.e., the face feature vector of the facial image transmitted by the camera, is compared with each face feature vector prestored in the blacklist database; if the two vectors satisfy the similarity condition given by the following formula, the face is judged to be a recognition target, and its corresponding identity information is transmitted to the display terminal, where δ is a customized similarity coefficient.
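The comparison step can be sketched as follows. The patent's similarity condition is given by its formula; the Euclidean-distance rule below (match when the distance does not exceed δ) is an assumed stand-in, and the identity labels are hypothetical.

```python
def match_blacklist(features, blacklist, delta):
    """Assumed comparison rule: the extracted feature vector matches a
    stored one when their Euclidean distance does not exceed the
    similarity coefficient delta."""
    for identity, stored in blacklist.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, stored)) ** 0.5
        if dist <= delta:
            return identity   # recognition target: report identity info
    return None               # no blacklist face corresponds

# Usage: a captured feature vector close to a stored one is reported.
hit = match_blacklist((1.0, 2.0), {"thief-01": (1.1, 2.0)}, delta=0.2)
```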
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and are not a limitation of its protection scope. Although the present invention has been explained in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent replacements may be made to the technical solutions of the present invention without departing from the essence and scope of the technical solutions of the present invention.