CN102750546B - Face shielding detection method based on structured error code - Google Patents
- Publication number: CN102750546B (application CN201210187427.8A / CN201210187427A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention provides a face occlusion detection method based on structured error coding that achieves high detection accuracy and good feasibility, and is suited to cases where the image dimension is low or the occluded area is large. The method comprises the following concrete steps: step 1, stretching the face image data to be detected and the training sample data into column vectors; step 2, defining an error support and initializing it; step 3, under the minimum-CD-error criterion, using the error support to compute the sparse coding and reconstruction error of the face image data to be detected on a dictionary formed from the training sample data; step 4, estimating the error support from the reconstruction error; step 5, building an aspect graph that describes the error support, and re-estimating the error support from the aspect graph and the reconstruction error; step 6, iterating steps 3-5 to obtain a reconstruction error sequence and an error support sequence; and step 7, selecting the optimal error support and obtaining, from it, the set of occluded pixels in the face image under detection.
Description
Technical field
The invention belongs to the field of image processing, and in particular to the field of face recognition.
Background technology
With the rapid development of information technology, face recognition has been widely applied in everyday life, for example in bank ATM monitoring of customers and in customs or checkpoint monitoring of passengers. In practical face image processing, occlusion of the face image (by glasses, masks, scarves, and so on) occurs frequently, and such occlusion is a major obstacle to face recognition and face synthesis. How to detect facial occlusion quickly and automatically, and how to reconstruct the occluded region of the face image, has therefore become a research hotspot in face image processing in recent years.
Existing occlusion detection techniques are mainly based on processing the reconstruction error: the occluded image is first reconstructed from training samples to obtain an occlusion-free reconstruction, the error between the two is computed, and the occluded region is judged from the magnitude of that error. Error-analysis-based occlusion detection can be divided into dictionary-based sparse coding methods, error-metric methods, error-distribution methods, and error-structure methods. When handling contiguous occlusions, these methods share a common problem: once the image dimension falls below a critical value, or the occluded area exceeds a certain percentage, detection accuracy drops sharply rather than degrading gracefully. To address this problem, the present invention proposes a new face occlusion detection method based on structured error coding.
Summary of the invention
The invention provides a face occlusion detection method based on structured error coding that has high detection accuracy and good feasibility, and is suited to processing images of low dimension or with large occluded areas.
To solve the above technical problem, the technical solution adopted by the invention is a face occlusion detection method based on structured error coding, comprising the following steps:
Step 1: stretch the face image data to be detected and the training sample data into column vectors;
Step 2: define an error support and initialize it;
Step 3: under the minimum-CD-error criterion, use the error support to compute the sparse coding and reconstruction error of the face image data to be detected on the dictionary formed from the training sample data;
Step 4: estimate the error support from the reconstruction error;
Step 5: build an aspect graph describing the error support, and re-estimate the error support from the aspect graph and the reconstruction error;
Step 6: iterate steps 3-5 to obtain a reconstruction error sequence and an error support sequence;
Step 7: choose the optimal error support, and from it obtain the set of occluded pixels in the face image under detection.
Further, in step 1 the face image to be detected and each training sample, given as m × n image data matrices, are stretched into column vectors of dimension M = m × n, where m and n are respectively the number of rows and columns of the image data.
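As a concrete illustration of this stretching step, here is a minimal sketch in Python/NumPy. The column-major stacking order is an assumption based on Fig. 2; row-major stacking would work symmetrically.

```python
import numpy as np

# Minimal sketch of step 1: stretch an m x n image matrix into an
# M = m*n column vector. Column-major stacking (order="F") is an
# assumption based on Fig. 2.
def stretch_to_column(image):
    m, n = image.shape
    return image.reshape(m * n, 1, order="F")

img = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
col = stretch_to_column(img)       # columns stacked: 0, 3, 1, 4, 2, 5
```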
Further, the error support in step 2 is s ∈ {−1, 1}^M, where s_i = −1 means pixel i is not occluded and s_i = 1 means it is occluded; initializing the error support means setting s_i = −1 for i = 1, ..., M. Here {−1, 1}^M denotes the set of M-dimensional column vectors whose elements are taken from the set {−1, 1}.
Further, the dictionary in step 3 is formed by arranging the stretched training-sample column vectors side by side as its columns.
Further, the CD error in step 3 measures the error between any two vectors a, b ∈ R^M of the same dimension and is defined element-wise as CD(a_i, b_i) = 1 − exp(−|log a_i − log b_i| / σ), where σ is an empirical constant with recommended value 1; i is the index into a and b, a_i and b_i denote the i-th elements of a and b, M is the dimension of the stretched column vectors, and R^M denotes the set of M-dimensional vectors over the real field.
Further, the sparse coding and reconstruction error of the face image to be detected on the dictionary formed from the training samples are computed in step 3 as

(x, e) = argmin Σ_{i : s_i = −1} e_i,  s.t.  e_i = CD(y_i, (Dx)_i),  x ≥ 0,

where x is the non-negative sparse coding, e is the reconstruction error, and D is the dictionary formed from the training samples; "s.t." means that (x, e) must satisfy the stated constraints together with x ≥ 0, and e_i denotes the CD error between y_i and (Dx)_i.
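The coding step can be sketched with a simpler stand-in criterion. The patent minimizes the CD error over the pixels the support marks as unoccluded; the sketch below substitutes plain non-negative least squares on those rows, which has the same structure (support-restricted coding with x ≥ 0) but is not the patented CD objective.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged stand-in for step 3: restrict the fit to the rows the support
# marks as unoccluded (s_i = -1) and solve a non-negative least-squares
# problem there, then measure the per-pixel reconstruction error.
def code_on_support(y, D, s):
    keep = (s == -1)                 # rows currently believed unoccluded
    x, _ = nnls(D[keep], y[keep])    # non-negative coefficients
    e = np.abs(y - D @ x)            # per-pixel reconstruction error
    return x, e

rng = np.random.default_rng(0)
D = rng.random((20, 3))
y = D @ np.array([1.0, 2.0, 0.0])    # y lies in the cone D x, x >= 0
s = -np.ones(20, dtype=int)          # all pixels initially unoccluded
x, e = code_on_support(y, D, s)
```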
Further, the concrete steps of the error-support estimation in step 4 are: on the first iteration (t = 1), apply two-class mean clustering (K-means with K = 2) to the reconstruction error e to obtain the error support s, and initialize the threshold τ^(1) = max{e_i | s_i = −1}; otherwise (t > 1), apply threshold clustering to the reconstruction error e to obtain the error support, setting s_i = 1 if e_i exceeds the threshold τ^(t) and s_i = −1 otherwise, where τ^(t) is derived from τ^(1) using the empirical parameters T and κ, and t is the iteration number. The suggested values are T = 5 and κ = 0.3.
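The first-iteration estimate above can be sketched as a plain one-dimensional two-class k-means, written out directly so no clustering library is needed. High-error pixels get s_i = 1 (occluded), and τ is initialized as max{e_i | s_i = −1}.

```python
import numpy as np

# Sketch of step 4's first iteration: two-class (K = 2) k-means on the
# reconstruction errors. The high-error cluster is marked occluded.
def estimate_support_kmeans(e, iters=100):
    e = np.asarray(e, dtype=float)
    lo, hi = e.min(), e.max()          # initial centroids
    s = -np.ones(len(e), dtype=int)
    for _ in range(iters):
        s = np.where(np.abs(e - hi) < np.abs(e - lo), 1, -1)
        new_lo, new_hi = e[s == -1].mean(), e[s == 1].mean()
        if new_lo == lo and new_hi == hi:
            break
        lo, hi = new_lo, new_hi
    tau = e[s == -1].max()             # tau^(1) = max{e_i | s_i = -1}
    return s, tau

s, tau = estimate_support_kmeans([0.10, 0.12, 0.90, 0.85, 0.08])
```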
Further, the concrete steps of step 5 are:

Step 5.1: build the aspect graph G = (V, E, B) describing the error support s, where: V = {1, 2, ..., M} is the set of vertices of G, with each vertex v_i carrying the class label s_i; E = {(i, j) | i, j ∈ V, ||c_i − c_j||_2 = 1} is the set of edges of G, where c_i = [c_{i1}, c_{i2}]^T and c_j = [c_{j1}, c_{j2}]^T are the pixel coordinates of vertices v_i and v_j; and B = {B_k | k = −1, 1} is the set of boundaries of the subgraphs of G, where B_k = (V_k, E_k), with k ranging over the class labels {−1, 1} and the boundary vertices and edges taken from the subgraph of the corresponding class.

Step 5.2: let s' = s and re-estimate the error support s from the aspect graph G and the reconstruction error e, by minimizing an energy with a data term on e, a smoothing term weighted by λ_E, and a boundary term weighted by λ_B. Here λ_E is the smoothing parameter, with recommended value 2, and λ_B is the boundary parameter, with recommended value 0.5. This optimization conforms to an Ising model and can be solved by graph cuts.
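The edge set E of step 5.1 connects pixels whose grid coordinates are at Euclidean distance exactly 1, i.e. 4-neighbours. A small sketch (the row-major vertex indexing is an illustrative assumption):

```python
# Sketch of the aspect-graph edge set in step 5.1: vertices are pixel
# indices of an m x n grid (row-major here, an assumption), and edges
# join 4-neighbours, i.e. pairs with ||c_i - c_j||_2 = 1.
def grid_edges(m, n):
    edges = []
    for r in range(m):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:
                edges.append((i, i + 1))   # horizontal neighbour
            if r + 1 < m:
                edges.append((i, i + n))   # vertical neighbour
    return edges

E = grid_edges(2, 2)                       # 4 edges on a 2x2 grid
```

An m × n grid has m(n − 1) horizontal plus (m − 1)n vertical edges, which the sketch reproduces.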
Further, the reconstruction error sequence of step 6 is E = {e^(t) | t = 1, 2, ..., 2T − 1}, where e^(t) is the reconstruction error produced by the t-th iteration of steps 3-5; the error support sequence is S = {s^(t) | t = 1, 2, ..., 2T − 1}, where s^(t) is the error support produced by the t-th iteration of steps 3-5.
Further, the concrete steps of step 7 are:

Step 7.1: for each iteration t, compute the error energy of the unoccluded part and the regularization coefficient of the occlusion boundary from e^(t) and s^(t), where V = {1, 2, ..., M};

Step 7.2: for all t = 1, ..., 2T − 1, standardize the error-energy sequence and the boundary-regularity sequence onto the interval [0, 1];

Step 7.3: apply boundary regularization to the standardized error energy, combining it with the boundary term;

Step 7.4: choose the optimal error support as the one whose regularized error energy is minimal;

Step 7.5: from the optimal error support, obtain the set of occluded pixels of the image under detection, {i | s_i = 1, i = 1, ..., M}.
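The standardization of step 7.2 can be sketched as plain min-max scaling, which maps each per-iteration score sequence onto [0, 1] before the combination in step 7.3:

```python
import numpy as np

# Sketch of step 7.2's standardisation: min-max scale the per-iteration
# quality scores onto [0, 1] so error energy and boundary regularity
# become comparable.
def normalize01(values):
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    if hi == lo:                   # constant sequence: map to zeros
        return np.zeros_like(v)
    return (v - lo) / (hi - lo)

q = normalize01([2.0, 4.0, 3.0])   # -> [0.0, 1.0, 0.5]
```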
Compared with the prior art, the beneficial effect is: with the method of the invention, detection accuracy no longer drops sharply when the image dimension falls below a critical value or the occluded area exceeds a certain percentage; both the accuracy and the range of applicability of face occlusion detection are improved, which is of significant practical value.
Brief description of the drawings
Fig. 1 is a flow diagram of the detection method of the invention;
Fig. 2 is a schematic diagram of stretching an image matrix into a column vector;
Fig. 3 shows an image to be detected (a) from the AR database and the training image set (b);
Fig. 4 is a schematic diagram of the sparse coding of the image to be detected on the dictionary formed from the training sample data;
Fig. 5 shows the reconstructed image (a) and the reconstruction error (b) of the image to be detected on the dictionary formed from the training sample data;
Fig. 6 shows the error support (b) estimated from the reconstruction error (a) by two-class mean clustering;
Fig. 7 shows the aspect graph (b) built from the error support (a), where black marks unoccluded pixels, grey marks occluded pixels, and white in the aspect graph (b) marks the boundary of the occluded region;
Fig. 8 shows the error support (b) estimated from the reconstruction error (a) by threshold clustering;
Fig. 9 shows the error support (a) estimated by threshold clustering and the further re-estimated error support (b);
Fig. 10 shows the results of the successive iterations for the image to be detected and the training set of Fig. 3, where (a) is the reconstructed image sequence of the image to be detected; (b) the reconstruction error sequence; (c) the error support sequence; (d) the threshold sequence; (e) the quality evaluation sequence;
Fig. 11 shows the detection results of the invention over the successive iterations for various occlusions, where (a) is scarf occlusion, (b) sunglasses occlusion, (c) monkey-image occlusion, and (d) apple-image occlusion; (a)-(b) are real occlusions from the AR database; (c)-(d) are synthetic occlusions whose original face images come from the Extended Yale B database.
Embodiment
The invention is described further below with reference to the drawings and embodiments.
As shown in Fig. 1, the specific implementation steps of the invention can be expressed as follows:
1. Stretch the face image data to be detected, A ∈ R^{m×n}, into a column vector y ∈ R^M, where M = m × n.

2. Take N occlusion-free face images C_i (i = 1, ..., N) from K people, where N_k is the number of face images of the k-th person and N = N_1 + ... + N_K, as training samples. With M = m × n, stretch each C_i into a one-dimensional column data vector d_i ∈ R^M (i = 1, ..., N) and form the dictionary D = [d_1, ..., d_N] ∈ R^{M×N}.

3. Initialize the error support s ∈ {−1, 1}^M (s_i = −1 means y_i is not occluded, s_i = 1 means y_i is occluded): s^(0) with s_i = −1 for all i.

4. Initialize the iteration counter: t = 0.

5. Increment the iteration counter: t = t + 1.

6. Under the minimum-CD-error criterion, compute by formula (1) the sparse coding x^(t) of the face image y to be detected on the dictionary D and the reconstruction error e^(t).
7. If this is the first iteration (t = 1), apply two-class mean clustering to the reconstruction error e^(1) to estimate the error support s^(1), initialize the threshold τ^(1) by formula (2), and skip to step 10 below; otherwise (t > 1), go to step 8.

8. Estimate the threshold τ^(t) by formula (3). Here T and κ are empirical parameters with recommended values T = 5 and κ = 0.3.

9. Estimate the error support s^(t) from the reconstruction error e^(t) and the threshold τ^(t) by formula (4).
10. Build the aspect graph G_B = (V, E, B) describing the error support s^(t), where: V = {1, 2, ..., M} is the set of vertices of G_B, with each vertex v_i carrying the class label s_i^(t); E = {(i, j) | i, j ∈ V, ||c_i − c_j||_2 = 1} is the set of edges of G_B, where c_i = [c_{i1}, c_{i2}]^T is the coordinate of vertex v_i; B = {B_k | k = −1, 1} is the set of boundaries of the subgraphs of G_B, where B_k = (V_k, E_k).

11. Let s' = s^(t) and re-estimate the error support s^(t) from the aspect graph G_B and the reconstruction error e^(t) by formula (5), where λ_E is the smoothing parameter (set by the user, recommended value 2) and λ_B is the boundary parameter (set by the user, recommended value 0.5). This optimization conforms to an Ising model and can be solved by graph cuts.
12. Compute the error energy of the unoccluded part by formula (6) and the regularization coefficient of the occlusion boundary by formula (7).

13. Iterate steps 5-12 above until the maximum iteration count 2T − 1 is reached.
14. For all t = 1, ..., 2T − 1, standardize the error-energy sequence and the boundary-regularity sequence onto the interval [0, 1] by formula (8), then apply boundary regularization to the standardized error energy by formula (9).

15. Choose the optimal error support as the one whose regularized error energy is minimal, and from it obtain the set of all occluded pixels.
For better explanation, a concrete example is given below:
1. From the AR database, take a scarf-occluded sample of the second person as the image y to be detected; its dimension is 112 × 92, and it is stretched into a 10304 × 1 column vector (10304 = 112 × 92) in the manner shown in Fig. 2. Take 32 occlusion-free samples of the first 4 people (8 samples each) from the AR database as the training sample set, as shown in Fig. 3; each sample has dimension 112 × 92 and is stretched into a 10304 × 1 column vector in the manner shown in Fig. 2, and the stretched training sample set is organized into the dictionary D, whose dimension is then 10304 × 32.
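The dictionary assembly described here can be sketched as follows; random arrays stand in for the AR-database samples, which are not reproduced in this text.

```python
import numpy as np

# Sketch of the example's dictionary assembly: 32 stretched training
# vectors (112 x 92 -> 10304 x 1) placed side by side as the columns
# of D. Random data stands in for the AR samples (an assumption).
samples = [np.random.rand(112, 92) for _ in range(32)]
D = np.column_stack([im.reshape(-1, order="F") for im in samples])
print(D.shape)   # (10304, 32), matching the dimension stated above
```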
2. Initialize the error support s^(0) (dimension 10304 × 1) with all entries −1, and set the parameters: λ_E = 2, λ_B = 0.5, T = 5, κ = 0.3.
3. Let t = 1. Substitute the image y to be detected, the initial error support s^(0), and the training sample set D into formula (1), and compute the sparse coding x^(1) of y on D and the reconstruction error e^(1), as shown in Figs. 4 and 5. Apply two-class mean clustering to the reconstruction error e^(1) to estimate the error support s^(1) = K(e^(1)); as shown in Fig. 6, black marks unoccluded pixels (s_i = −1) and grey marks occluded pixels (s_i = 1). Initialize the threshold by formula (2): τ^(1) = 0.4886.
4. Build the aspect graph G_B = (V, E, B) characterizing the error support s^(1); as shown in Fig. 7, black marks unoccluded pixels, grey marks occluded pixels, and white marks the boundary B_1 of the occluded region. Let s' = s^(1), and substitute s^(1), e^(1), and G_B into formula (5) to re-estimate s^(1).
5. Compute, by formulas (6) and (7), the error energy of the unoccluded part and the regularization coefficient of the occlusion boundary.
6. Let t = 2. Substitute the image y to be detected, the error support s^(1), and the training sample set D into formula (1), and compute the non-negative sparse coding x^(2) of y on D and the reconstruction error e^(2); estimate the threshold by formula (3): τ^(2) = 0.4031.
7. Substitute the reconstruction error e^(2) and the threshold τ^(2) into formula (4) to estimate the error support s^(2); as shown in Fig. 8, the error support is estimated from the reconstruction error by threshold clustering.
8. Build the aspect graph G_B = (V, E, B) describing the error support s^(2); let s' = s^(2), and substitute s^(2), e^(2), and G_B into formula (5) to re-estimate s^(2). Fig. 9 shows the error support estimated by threshold clustering and the further re-estimated error support; black marks unoccluded pixels and grey marks occluded pixels.

9. Compute, by formulas (6) and (7), the error energy of the unoccluded part and the regularity of the occlusion boundary.
10. After 2T − 1 = 9 iterations, the error-energy and boundary-regularity sequences of all iterations are obtained, as listed in Table I.

11. Standardize the two sequences by formula (8), as listed in Table II.

12. Apply boundary regularization to the error energy by formula (9), with λ_B = 1.2726, as shown in Table III.
13. Choose the optimal error support (see the last row of Fig. 10(b)), namely the one whose regularized error energy is minimal.

14. From the optimal error support, obtain the set of all occluded pixels, shown as the grey part in the last-row picture of Fig. 10(b).

Fig. 10 shows the results of the successive iterations for the image to be detected and the training set in Fig. 3, where (a) is the reconstructed image sequence of the image to be detected; (b) the reconstruction error sequence; (c) the error support sequence, with black for unoccluded and grey for occluded pixels; (d) the threshold sequence; (e) the quality evaluation sequence, whose minimum assessed value 0.3101 (framed by a box) corresponds to the error support in the last row of (c). That error support is also the best: compared with the error supports s^(t) (t = 1, 2, ..., 8) estimated by the other iterations, this estimate is optimal, so the set of occluded pixels obtained from it is closest to the ground truth and the effect is best.
Fig. 11 shows the detection results of the invention over the successive iterations for various occlusions, where (a) is scarf occlusion, (b) sunglasses occlusion, (c) monkey-image occlusion, and (d) apple-image occlusion; (a)-(b) are real occlusions from the AR database; (c)-(d) are synthetic occlusions whose original face images come from the Extended Yale B database. The rows labelled s are the error support sequences, with black for unoccluded pixels, grey for occluded pixels, and white for the edge of the occluded region; the rows labelled τ are the threshold sequences; the rows labelled c are the quality evaluation sequences, whose minimum value is framed by a box, and the error support corresponding to that minimum is the detection result. The optimal detection result of each example occurs at a different iteration number, so the optimal result must be chosen by quality evaluation.
Claims (1)
1. A face occlusion detection method based on structured error coding, characterized in that it comprises the following steps:
Step 1: stretch the face image data to be detected and the training sample data into column vectors;
Step 2: define an error support and initialize it;
Step 3: under the minimum-CD-error criterion, use the error support to compute the sparse coding and reconstruction error of the face image data to be detected on the dictionary formed from the training sample data;
Step 4: estimate the error support from the reconstruction error;
Step 5: build an aspect graph describing the error support, and re-estimate the error support from the aspect graph and the reconstruction error;
Step 6: iterate steps 3-5 to obtain a reconstruction error sequence and an error support sequence;
Step 7: choose the optimal error support, and from it obtain the set of occluded pixels in the face image under detection;
in said step 1, the face image to be detected and each training sample, given as m × n image data matrices, are stretched into column vectors of dimension M = m × n;
the error support in said step 2 is s ∈ {−1, 1}^M, where s_i = −1 means pixel i is not occluded and s_i = 1 means it is occluded; initializing the error support means setting s_i = −1 for i = 1, ..., M, where i is the index into s and s_i denotes the i-th element of s;
the dictionary formed from the training samples in said step 3 is obtained by arranging the stretched training-sample column vectors side by side as its columns;
the CD error in said step 3 measures the error between any two vectors a, b ∈ R^M of the same dimension and is defined element-wise as CD(a_i, b_i) = 1 − exp(−|log a_i − log b_i| / σ), where σ is an empirical constant, i is the index into a and b, a_i and b_i denote the i-th elements of a and b, M is the dimension of the stretched column vectors, and R^M denotes the set of M-dimensional vectors over the real field;
the sparse coding and reconstruction error of the face image to be detected on the dictionary formed from the training samples are computed in said step 3 as the minimum-CD-error solution (x, e), where x is the non-negative sparse coding, e is the reconstruction error, D is the dictionary formed from the training samples, and y is the face image to be detected; "s.t." means that (x, e) must satisfy the stated constraints together with x ≥ 0, and e_i denotes the CD error between y_i and (Dx)_i;
the concrete steps of the error-support estimation in said step 4 are: on the first iteration (t = 1), apply two-class mean clustering to the reconstruction error e to obtain the error support s, and initialize the threshold τ^(1) = max{e_i | s_i = −1}; otherwise (t > 1), apply threshold clustering to the reconstruction error e to obtain the error support, where the threshold τ^(t) is derived from τ^(1) using the empirical parameters T and κ, and t is the iteration number;
the concrete steps of said step 5 are:
Step 5.1: build the aspect graph G = (V, E, B) describing the error support s, where: V = {1, 2, ..., M} is the set of vertices of G, with each vertex v_i carrying the class label s_i; E = {(i, j) | i, j ∈ V, ||c_i − c_j||_2 = 1} is the set of edges of G, where c_i = [c_{i1}, c_{i2}]^T and c_j = [c_{j1}, c_{j2}]^T are the coordinates of vertices v_i and v_j; B = {B_k | k = −1, 1} is the set of boundaries of the subgraphs of G, where B_k = (V_k, E_k), with k ranging over the class labels {−1, 1};
Step 5.2: let s' = s and re-estimate the error support s from the aspect graph G and the reconstruction error e, where λ_E is the smoothing parameter and λ_B is the boundary parameter;
the reconstruction error sequence of said step 6 is E = {e^(t) | t = 1, 2, ..., 2T − 1}, where e^(t) is the reconstruction error produced by the t-th iteration of steps 3-5; the error support sequence is S = {s^(t) | t = 1, 2, ..., 2T − 1}, where s^(t) is the error support produced by the t-th iteration of steps 3-5;
the concrete steps of said step 7 are:
Step 7.1: for each iteration, compute the error energy of the unoccluded part and the regularization coefficient of the occlusion boundary, where V = {1, 2, ..., M};
Step 7.2: for all t = 1, ..., 2T − 1, standardize the error-energy sequence and the boundary-regularity sequence onto the interval [0, 1];
Step 7.3: apply boundary regularization to the standardized error energy;
Step 7.4: choose the optimal error support as the one whose regularized error energy is minimal;
Step 7.5: from the optimal error support, obtain the set of occluded pixels of the image under detection, {i | s_i = 1, i = 1, ..., M}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210187427.8A CN102750546B (en) | 2012-06-07 | 2012-06-07 | Face shielding detection method based on structured error code |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102750546A CN102750546A (en) | 2012-10-24 |
CN102750546B true CN102750546B (en) | 2014-10-29 |
Family
ID=47030711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210187427.8A Expired - Fee Related CN102750546B (en) | 2012-06-07 | 2012-06-07 | Face shielding detection method based on structured error code |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102750546B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103544683B (en) * | 2013-10-12 | 2016-04-20 | 南京理工大学 | A kind of night vision image of view-based access control model cortex highlights contour extraction method |
CN104915639B (en) * | 2015-05-19 | 2018-01-09 | 浙江工业大学 | Face identification method based on combined error coding |
CN105787432B (en) * | 2016-01-15 | 2019-02-05 | 浙江工业大学 | Face occlusion detection method based on structure perception |
CN108805179B (en) * | 2018-05-24 | 2022-03-29 | 华南理工大学 | Face local constraint coding based calibration and recognition method |
CN109711283B (en) * | 2018-12-10 | 2022-11-15 | 广东工业大学 | Occlusion expression recognition method combining double dictionaries and error matrix |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1811456A1 (en) * | 2004-11-12 | 2007-07-25 | Omron Corporation | Face feature point detector and feature point detector |
CN102270308A (en) * | 2011-07-21 | 2011-12-07 | 武汉大学 | Facial feature location method based on five sense organs related AAM (Active Appearance Model) |
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C14 | Grant of patent or utility model |
GR01 | Patent grant |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20141029; Termination date: 20180607