CN102063613A - People counting method and device based on head recognition - Google Patents


Info

Publication number
CN102063613A
CN102063613A · application CN201010607822A
Authority
CN
China
Prior art keywords
movement locus
point
head detection
detection zone
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010607822
Other languages
Chinese (zh)
Other versions
CN102063613B (en)
Inventor
游磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netposa Technologies Ltd
Original Assignee
Beijing Zanb Science & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zanb Science & Technology Co Ltd filed Critical Beijing Zanb Science & Technology Co Ltd
Priority to CN 201010607822 priority Critical patent/CN102063613B/en
Publication of CN102063613A publication Critical patent/CN102063613A/en
Application granted granted Critical
Publication of CN102063613B publication Critical patent/CN102063613B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a people counting method and device based on head recognition. The method comprises the following steps: first, establishing and updating a background image from the video frame images; extracting head detection regions from the video frame images; tracking the head detection regions to obtain their motion trajectories; optimizing the motion trajectories of the head detection regions; and finally, obtaining the people count from the optimized trajectories. The invention can count people accurately.

Description

People counting method and device based on head recognition
Technical field
The present invention relates to image processing and video surveillance, and in particular to a people counting method and device.
Background technology
Background
People counting is an important application of intelligent surveillance, with important uses in a wide variety of scenes. In crowded locations such as subways and railway stations, timely statistics on the number and distribution of passengers improve the efficiency of distribution services and resource management, enable scientific and efficient scheduling, and provide an effective safety guarantee. In commercial areas such as shopping malls and supermarkets, people counting makes it possible to understand customer behaviour indirectly, evaluate a mall's service facilities, optimize the allocation of human resources, and even give early warning of safety hazards such as overcrowding. Because the technique is based on image processing, it requires no external intervention and has the advantages of simple operation and strong independence. People counting therefore has a wide range of applications and significant practical value.
The Chinese patent application with publication number CN1687955A discloses a method for counting people at an entrance, realized with a traditional target detection and tracking algorithm. The Chinese patent application with publication number CN101325690A discloses a method and system for detecting pedestrian flow and crowd gathering in surveillance video streams; the system counts the crowd from the tracking results of multiple moving objects. The U.S. patent application with publication number US2007/0003141A1 discloses a method and system for automatic crowd counting based on geometric features. The Chinese patent application with publication number CN101872414A discloses a pedestrian flow counting method and system based on head detection, which detects heads with a classifier, tracks head target trajectories, and performs smoothness analysis on the trajectories to filter out false targets and obtain an accurate pedestrian flow.
In summary, there is at present an urgent need for a method and device that can count people accurately.
Summary of the invention
In view of this, the main purpose of the present invention is to count people accurately.
To achieve the above object, according to a first aspect of the present invention, a people counting method based on head recognition is provided, the method comprising:
a first step of establishing and updating a background image from the video frame images;
a second step of extracting head detection regions from the video frame images;
a third step of tracking the head detection regions to obtain their motion trajectories;
a fourth step of optimizing the motion trajectories of the head detection regions; and
a fifth step of obtaining the people count from the optimized trajectories.
In the first step, let I_k denote the k-th frame image (k an integer) and B_k the k-th background image, where the initial background is B_1 = I_1. The background update formula appears in the original only as an embedded image and is not reproduced here; x and y denote a pixel's horizontal and vertical coordinates.
The second step comprises the following steps:
Step 1021: subtract the background image from the current frame image to obtain a foreground image;
Step 1022: perform edge detection on the foreground image to obtain edge points;
Step 1023: perform Hough voting on each edge point to obtain a voting matrix;
Step 1024: find the points of the voting matrix that exceed a first threshold T1, and generate a triple for each such point;
Step 1025: compute the gradient inner-product sum of each triple;
Step 1026: find the local maxima of the gradient inner-product sums; these local maxima are the head detection points;
Step 1027: obtain the head detection regions from the head detection points.
In step 1023, a Hough vote is cast around each edge point (the voting formulas appear in the original only as embedded images), and the accumulated votes form a three-dimensional voting matrix. Here vn ∈ [9, 36] is an integer, R is a standard radius with the head size set according to the image, and Δ is the head variation range, which may be chosen as R/3.
In step 1024, the points of the voting matrix exceeding the first threshold T1 are found and a triple is generated for each such point; the expressions and the range of T1 appear in the original only as embedded images.
In step 1025, an ellipse centred on the point, with the triple's radius, is first sampled; the gradient of each sampled point in the image and the corresponding normal vector are computed, and the gradient inner-product sum is then computed from the gradients and normal vectors (the formulas appear in the original only as embedded images).
The fourth step comprises:
Step 1041: delete motion trajectories tracked into non-foreground regions, i.e. if a tracked trajectory point in the current frame image falls in a non-foreground region, delete that trajectory;
Step 1042: delete clearly static trajectories, i.e. compute the average displacement along the trajectory between the current frame image and the preceding N frame images; if this average displacement is below a third threshold T3, delete the trajectory, where N and T3 may be set according to the application;
Step 1043: delete trajectories that do not satisfy motion consistency;
Step 1044: merge trajectories that coincide or intersect.
Step 1043 operates as follows: compute the trajectory's velocity and direction; if the motion vector from the previous frame to the current frame image and the motion vector from the current frame to the next frame image satisfy a consistency condition (given in the original only as embedded images), the trajectory satisfies motion consistency and is kept; otherwise it does not satisfy motion consistency and is deleted.
Step 1044: compute the Euclidean distance between the trace points of the two motion trajectories. If the Euclidean distance is below a fourth threshold T4 (T4 ∈ [5, 15], an integer), compute a similarity measure for each of the two trajectories from the grey level of its head detection region in the current frame image and the head-region feature updated in the current frame (the formulas, which involve the average grey levels of the head detection region in the preceding and following frame images, appear in the original only as embedded images). If the similarity condition (also given as an embedded image) holds, the second trajectory is deleted; otherwise the first trajectory is deleted.
The fifth step comprises either or both of the following:
Step 1051: count, from the optimized trajectories, the people crossing a specified cross-section, i.e. when a trajectory intersects the specified cross-section it is counted, and the number of intersecting trajectories is the people count;
Step 1052: count, from the optimized trajectories, the people inside a specified region, i.e. count the trajectories whose length within the region exceeds a fifth threshold T5 (the range of T5 appears in the original only as an embedded image); this number is the people count.
According to another aspect of the present invention, a people counting device based on head recognition is provided, the device comprising:
a background establishing and updating unit, which establishes and updates a background image from the video frame images;
a head detection region extraction unit, which extracts head detection regions from the video frame images;
a motion trajectory acquisition unit, which tracks the head detection regions to obtain their motion trajectories;
a motion trajectory optimization unit, which optimizes the motion trajectories of the head detection regions; and
a people counting unit, which obtains the people count from the optimized trajectories.
The head detection region extraction unit comprises:
a foreground image acquisition module, which subtracts the background image from the current frame image to obtain a foreground image;
an edge point acquisition module, which performs edge detection on the foreground image to obtain edge points;
a voting matrix acquisition module, which performs Hough voting on each edge point to obtain a voting matrix;
a triple acquisition module, which finds the points of the voting matrix exceeding a first threshold T1 and generates a triple for each such point;
a gradient inner-product sum computation module, which computes the gradient inner-product sum of each triple;
a head detection point acquisition module, which finds the local maxima of the gradient inner-product sums, these local maxima being the head detection points;
a head detection region acquisition module, which obtains the head detection regions from the head detection points.
The motion trajectory optimization unit comprises:
a non-foreground-region trajectory filtering module, which deletes motion trajectories tracked into non-foreground regions;
an obviously-static trajectory filtering module, which deletes clearly static trajectories;
a motion-inconsistency trajectory filtering module, which deletes trajectories that do not satisfy motion consistency;
a coinciding-or-intersecting trajectory merging module, which merges trajectories that coincide or intersect.
Compared with the prior art, the people counting method and device based on head recognition of the present invention can count people accurately.
Compared with common people counting methods based on head recognition, the present method and device can filter out false targets and therefore count people accurately. Compared with the pedestrian flow counting method based on head detection of publication CN101872414A, the present invention detects heads using a Hough voting matrix and the gradient inner products of triples, and filters out false targets with a non-foreground-region trajectory filtering module, an obviously-static trajectory filtering module, a motion-inconsistency trajectory filtering module, and a coinciding-or-intersecting trajectory merging module, thereby obtaining an accurate count.
Brief description of the drawings
Fig. 1 is a flow chart of the people counting method based on head recognition according to the present invention.
Fig. 2 is a flow chart of the second step according to the present invention.
Fig. 3 is a structural diagram of the ADM operator according to the present invention.
Fig. 4 is a flow chart of the fourth step according to the present invention.
Fig. 5 is a structural diagram of the people counting device based on head recognition according to the present invention.
Fig. 6 is a structural diagram of the head detection region extraction unit 2 according to the present invention.
Fig. 7 is a structural diagram of the motion trajectory optimization unit 4 according to the present invention.
Detailed description
To enable the examiner to further understand the structure, features, and other objects of the present invention, the preferred embodiments are now described in detail with reference to the drawings. The illustrated embodiments only illustrate the technical solution of the present invention and do not limit it.
Fig. 1 shows the flow chart of the people counting method based on head recognition according to the present invention. As shown in Fig. 1, the method comprises:
a first step 101 of establishing and updating a background image from the video frame images;
a second step 102 of extracting head detection regions from the video frame images;
a third step 103 of tracking the head detection regions to obtain their motion trajectories;
a fourth step 104 of optimizing the motion trajectories of the head detection regions; and
a fifth step 105 of obtaining the people count from the optimized trajectories.
The first step:
Preferably, in the first step 101, I_k denotes the k-th frame image (k an integer) and B_k the k-th background image, with initial value B_1 = I_1. The background update formula appears in the original only as an embedded image; x and y denote a pixel's horizontal and vertical coordinates.
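The update formula itself survives only as an embedded image in this extraction. As a rough sketch of how such a per-pixel background update is typically realized, the following assumes the standard running-average form, with `alpha` a hypothetical learning rate not specified in the source:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """One running-average background update step.

    The patent gives its update formula only as an embedded image; the
    running-average form and the learning rate `alpha` are assumptions.
    """
    return (1.0 - alpha) * background + alpha * frame

# Per the patent, the initial background is the first frame.
frames = [np.full((4, 4), v, dtype=float) for v in (10, 12, 14)]
bg = frames[0].copy()
for f in frames[1:]:
    bg = update_background(bg, f)
```

With `alpha=0.05` the background drifts slowly toward the current frame, so brief foreground motion leaves little trace in `bg`.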
The second step:
Fig. 2 shows the flow chart of the second step 102 according to the present invention. As shown in Fig. 2, preferably, the second step 102 comprises the following steps:
Step 1021: subtract the background image from the current frame image to obtain a foreground image;
Step 1022: perform edge detection on the foreground image to obtain edge points;
Step 1023: perform Hough voting on each edge point to obtain a voting matrix;
Step 1024: find the points of the voting matrix that exceed a first threshold T1, and generate a triple for each such point;
Step 1025: compute the gradient inner-product sum of each triple;
Step 1026: find the local maxima of the gradient inner-product sums; these local maxima are the head detection points;
Step 1027: obtain the head detection regions from the head detection points.
Preferably, step 1022 first applies smoothing filtering to the foreground image and then extracts edge points with the ADM operator (Fig. 3 shows the structure of the ADM operator; for details see: Fahad Alzahrani, Tom Chen, "A Real-Time High Performance Edge Detector for Computer Vision Applications", Proceedings of the Asia and South Pacific Design Automation Conference, 1997, pp. 671-672). The edge strength of each foreground point is computed (the expressions appear in the original only as embedded images); foreground points whose edge strength exceeds a second threshold T2 are candidate points. Non-maximum suppression is then performed along each candidate point's gradient direction: the candidate's edge strength is compared with the values of its two neighbours along that direction, and if it exceeds both, the point is an edge point. The second threshold T2 ∈ [10, 40] is an integer.
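The ADM operator is only referenced, not reproduced, in this text. As a sketch of the threshold-then-suppress logic described above, the following substitutes a plain gradient-magnitude edge strength for the ADM operator (an assumption), followed by the described non-maximum suppression along the quantized gradient direction:

```python
import numpy as np

def edge_points(foreground, t2=20):
    """Edge extraction in the spirit of step 1022.

    The ADM edge strength is replaced here by gradient magnitude (an
    assumption); T2 plays the role of the patent's second threshold.
    """
    img = foreground.astype(float)
    gy, gx = np.gradient(img)          # derivatives along rows, columns
    strength = np.hypot(gx, gy)
    points = []
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = strength[y, x]
            if s <= t2:
                continue
            # Quantize the gradient direction into 4 bins and pick the
            # two neighbours lying along that direction.
            ang = np.arctan2(gy[y, x], gx[y, x]) % np.pi
            if ang < np.pi / 8 or ang >= 7 * np.pi / 8:
                n1, n2 = strength[y, x - 1], strength[y, x + 1]
            elif ang < 3 * np.pi / 8:
                n1, n2 = strength[y - 1, x - 1], strength[y + 1, x + 1]
            elif ang < 5 * np.pi / 8:
                n1, n2 = strength[y - 1, x], strength[y + 1, x]
            else:
                n1, n2 = strength[y - 1, x + 1], strength[y + 1, x - 1]
            if s > n1 and s > n2:      # non-maximum suppression
                points.append((x, y))
    return points
```

On a one-pixel bright vertical line the suppression keeps the two flanking columns where the gradient peaks and rejects everything else.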
Preferably, in step 1023, a Hough vote is cast around each edge point (the voting formulas appear in the original only as embedded images), and the accumulated votes form a three-dimensional voting matrix. Here vn ∈ [9, 36] is an integer, R is a standard radius with the head size set according to the image, and Δ is the head variation range, which may be chosen as R/3.
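The voting formulas are lost to the extraction, but a three-dimensional (x, y, radius) accumulator built from edge points is the standard circular-Hough construction. The sketch below is generic gradient-directed circle voting under assumed conventions (each edge point votes one radius away along its gradient, on both sides, over a small set of candidate radii around R):

```python
import numpy as np

def hough_circle_votes(edge_pts, gradients, radii, shape):
    """Accumulate circle-centre votes into a 3-D (x, y, radius) matrix.

    A sketch of step 1023 under assumptions: the patent's exact voting
    formulas are embedded images; here each edge point votes for centres
    one candidate radius away along its gradient direction.
    """
    votes = np.zeros(shape + (len(radii),), dtype=int)
    for (x, y), (gx, gy) in zip(edge_pts, gradients):
        norm = np.hypot(gx, gy)
        if norm == 0:
            continue
        ux, uy = gx / norm, gy / norm
        for ri, r in enumerate(radii):
            for s in (+1, -1):         # centre may lie on either side
                cx = int(round(x + s * r * ux))
                cy = int(round(y + s * r * uy))
                if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                    votes[cx, cy, ri] += 1
    return votes
```

Cells of the accumulator that exceed the threshold T1 would then yield the (x, y, r) triples of step 1024.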
Preferably, in step 1024, the points of the voting matrix exceeding the first threshold T1 are found and a triple is generated for each such point; the expressions and the range of T1 appear in the original only as embedded images.
Preferably, in step 1025, an ellipse centred on the point, with the triple's radius, is first sampled; the gradient of each sampled point in the image and the corresponding normal vector are computed, and the gradient inner-product sum is then computed from the gradients and normal vectors (the formulas appear in the original only as embedded images).
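The inner-product formula itself is an embedded image, but the described ingredients (elliptical samples, image gradients, ellipse normals) admit a direct sketch. Here the sign convention and the number of samples are assumptions; on a bright head-like blob the image gradient at the rim points inward, so the sum against the outward normal is strongly negative there:

```python
import numpy as np

def gradient_inner_product_sum(image, cx, cy, a, b, n_samples=36):
    """Sum of <image gradient, outward ellipse normal> over ellipse samples.

    A sketch of step 1025; the patent's exact formula is an embedded
    image, so the normal orientation and sample count are assumptions.
    """
    gy, gx = np.gradient(image.astype(float))
    total = 0.0
    for t in np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False):
        px, py = cx + a * np.cos(t), cy + b * np.sin(t)
        # Outward unit normal of the ellipse at parameter t.
        nx, ny = np.cos(t) / a, np.sin(t) / b
        norm = np.hypot(nx, ny)
        nx, ny = nx / norm, ny / norm
        xi, yi = int(round(px)), int(round(py))
        if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
            total += gx[yi, xi] * nx + gy[yi, xi] * ny
    return total
```

Candidates whose (suitably signed) sum is a local extremum among neighbouring triples would serve as head detection points in step 1026.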
The third step:
Preferably, the third step 103 adopts PCA particle-filter tracking (see: David A. Ross, Jongwoo Lim, Ruei-Sung Lin, Ming-Hsuan Yang, "Incremental Learning for Robust Visual Tracking", IJCV), with the following steps:
Step 1031: extract the features of the head detection region and initialize the feature space A by PCA (n being the initial dimension of the feature space, n ∈ [5, 15] an integer), then perform a singular value decomposition of A (the decomposition appears in the original only as an embedded image).
Step 1032: project every particle into the feature space and compute its weight; the particle with the largest weight is the true tracking result. The weight formula appears in the original only as an embedded image, with k ∈ [1, Max] an integer and Max the maximum number of particles.
Step 1033: update the feature space with the features corresponding to the new tracking result: with m the number of features required to update the feature space (preferably m = n/2), orthogonalize the new features (orth denotes orthogonalization), form the matrix R, and perform a singular value decomposition of R; the updated feature space is given by the resulting decomposition (the expressions appear in the original only as embedded images).
Step 1034: connect all tracking results to form the motion trajectory of the head detection region.
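The particle-weight formula of step 1032 is an embedded image, but in the cited incremental-PCA tracking approach particles are typically scored by how well the learned subspace reconstructs their appearance. The following sketch assumes that concrete form (weight = exp of negative squared reconstruction error, with `sigma` a hypothetical scale):

```python
import numpy as np

def particle_weights(particles, mean, basis, sigma=1.0):
    """Weight particles by PCA-subspace reconstruction error.

    A sketch of step 1032; the exact weight formula in the patent is an
    embedded image, so this exponential form is an assumption in the
    spirit of Ross et al.'s incremental-PCA tracker.
    """
    weights = []
    for p in particles:
        d = np.asarray(p, dtype=float) - mean
        proj = basis @ (basis.T @ d)   # projection onto the subspace
        residual = d - proj
        weights.append(np.exp(-np.dot(residual, residual) / sigma ** 2))
    w = np.asarray(weights)
    return w / w.sum()                 # normalized; argmax = tracking result
```

A particle whose appearance lies inside the subspace reconstructs perfectly and receives the largest weight.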
The fourth step:
Fig. 4 shows the flow chart of the fourth step according to the present invention. As shown in Fig. 4, preferably, the fourth step 104 proceeds as follows:
Step 1041: delete motion trajectories tracked into non-foreground regions;
Step 1042: delete clearly static trajectories;
Step 1043: delete trajectories that do not satisfy motion consistency;
Step 1044: merge trajectories that coincide or intersect.
Preferably, in step 1041, if a tracked trajectory point in the current frame image falls in a non-foreground region, the trajectory is deleted.
Preferably, in step 1042, the average displacement along the trajectory between the current frame image and the preceding N frame images is computed; if this average displacement is below a third threshold T3, the trajectory is deleted. N and T3 may be set according to the application.
Preferably, step 1043 operates as follows: compute the trajectory's velocity and direction; if the motion vector from the previous frame to the current frame image and the motion vector from the current frame to the next frame image satisfy a consistency condition (given in the original only as embedded images), the trajectory satisfies motion consistency and is kept; otherwise it does not satisfy motion consistency and is deleted.
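The consistency condition survives only as embedded images. One natural concrete reading, sketched below under that assumption, is to require consecutive motion vectors to point in broadly the same direction (cosine of the turning angle above a hypothetical threshold) at every interior trajectory point:

```python
import numpy as np

def is_motion_consistent(track, cos_threshold=0.5):
    """Check motion consistency of a trajectory (sketch of step 1043).

    The patent's exact condition is an embedded image; the cosine test
    and its threshold are assumptions.
    """
    pts = np.asarray(track, dtype=float)
    for i in range(1, len(pts) - 1):
        v1 = pts[i] - pts[i - 1]       # previous frame -> current frame
        v2 = pts[i + 1] - pts[i]       # current frame -> next frame
        n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
        if n1 == 0 or n2 == 0:
            continue
        if np.dot(v1, v2) / (n1 * n2) < cos_threshold:
            return False               # sharp reversal: inconsistent
    return True
```

A trajectory that jitters back and forth (typical of a false detection) fails the test, while a smoothly walking head passes.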
Preferably, in step 1044, the Euclidean distance between the trace points of the two motion trajectories is computed. If the Euclidean distance is below a fourth threshold T4 (T4 ∈ [5, 15], an integer), a similarity measure is computed for each of the two trajectories from the grey level of its head detection region in the current frame image and the head-region feature updated in the current frame (the formulas, which involve the average grey levels of the head detection region in the preceding and following frame images, appear in the original only as embedded images). If the similarity condition (also given as an embedded image) holds, the second trajectory is deleted; otherwise the first trajectory is deleted.
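The similarity formulas are embedded images, but the described decision (distance gate, then keep the track whose head-region grey level better matches the updated feature) can be sketched with assumed concrete measures. The distance threshold plays the role of T4, and the scalar appearance comparison is a placeholder for the patent's grey-level similarity:

```python
import numpy as np

def merge_tracks(track_a, track_b, appearance_a, appearance_b, template,
                 dist_threshold=10):
    """Decide which of two near-coincident tracks to keep (step 1044 sketch).

    `appearance_*` and `template` stand in for the patent's head-region
    grey levels and updated feature; the scalar comparison is an
    assumption, as the real formulas are embedded images.
    """
    pa = np.asarray(track_a[-1], dtype=float)
    pb = np.asarray(track_b[-1], dtype=float)
    if np.linalg.norm(pa - pb) >= dist_threshold:
        return "keep both"             # tracks are not coincident
    # Higher similarity = appearance closer to the updated template.
    sim_a = -abs(float(appearance_a) - float(template))
    sim_b = -abs(float(appearance_b) - float(template))
    return "keep first" if sim_a >= sim_b else "keep second"
```

Only tracks whose current points fall within the T4-style gate compete; the loser of the appearance comparison is deleted, matching the either/or deletion in the text.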
The fifth step:
Preferably, the fifth step 105 comprises either or both of the following:
Step 1051: count, from the optimized trajectories, the people crossing a specified cross-section;
Step 1052: count, from the optimized trajectories, the people inside a specified region.
Preferably, in step 1051, when a trajectory intersects the specified cross-section it is counted; the number of intersecting trajectories is the people count.
Preferably, in step 1052, the trajectories whose length within the specified region exceeds a fifth threshold T5 are counted (the range of T5 appears in the original only as an embedded image); this number is the people count.
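Step 1051 reduces to a line-crossing test per trajectory. The sketch below assumes the specified cross-section is a horizontal line y = line_y (the patent does not fix its geometry); a trajectory is counted once if any consecutive segment straddles the line:

```python
def count_crossings(tracks, line_y):
    """Count trajectories crossing the line y = line_y (step 1051 sketch).

    The horizontal-line cross-section is an assumption; the patent only
    specifies "a specified cross-section".
    """
    count = 0
    for track in tracks:
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            # Segment straddles the line if the endpoints lie on
            # opposite sides (or one endpoint sits on it while moving).
            if (y0 - line_y) * (y1 - line_y) <= 0 and y0 != y1:
                count += 1
                break                  # each trajectory counts once
    return count
```

Counting each trajectory at most once means a person lingering near the line is not double-counted.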
Fig. 5 shows the structural diagram of the people counting device based on head recognition according to the present invention. As shown in Fig. 5, the device comprises:
a background establishing and updating unit 1, which establishes and updates a background image from the video frame images;
a head detection region extraction unit 2, which extracts head detection regions from the video frame images;
a motion trajectory acquisition unit 3, which tracks the head detection regions to obtain their motion trajectories;
a motion trajectory optimization unit 4, which optimizes the motion trajectories; and
a people counting unit 5, which obtains the people count from the optimized trajectories.
Fig. 6 shows the structural diagram of the head detection region extraction unit 2 according to the present invention. As shown in Fig. 6, the head detection region extraction unit 2 comprises:
a foreground image acquisition module 21, which subtracts the background image from the current frame image to obtain a foreground image;
an edge point acquisition module 22, which performs edge detection on the foreground image to obtain edge points;
a voting matrix acquisition module 23, which performs Hough voting on each edge point to obtain a voting matrix;
a triple acquisition module 24, which finds the points of the voting matrix exceeding a first threshold T1 and generates a triple for each such point;
a gradient inner-product sum computation module 25, which computes the gradient inner-product sum of each triple;
a head detection point acquisition module 26, which finds the local maxima of the gradient inner-product sums, these local maxima being the head detection points;
a head detection region acquisition module 27, which obtains the head detection regions from the head detection points.
Fig. 7 shows the structural diagram of the motion trajectory optimization unit 4 according to the present invention. As shown in Fig. 7, the motion trajectory optimization unit 4 comprises:
a non-foreground-region trajectory filtering module 41, which deletes motion trajectories tracked into non-foreground regions;
an obviously-static trajectory filtering module 42, which deletes clearly static trajectories;
a motion-inconsistency trajectory filtering module 43, which deletes trajectories that do not satisfy motion consistency;
a coinciding-or-intersecting trajectory merging module 44, which merges trajectories that coincide or intersect.
Compared with common people counting methods based on head recognition, the people counting method and device based on head recognition of the present invention can filter out false targets and count people accurately. Compared with the pedestrian flow counting method based on head detection of publication CN101872414A, the present invention detects heads using a Hough voting matrix and the gradient inner products of triples, and filters out false targets with a non-foreground-region trajectory filtering module, an obviously-static trajectory filtering module, a motion-inconsistency trajectory filtering module, and a coinciding-or-intersecting trajectory merging module, thereby obtaining an accurate count.
It should be stated that the above summary and embodiments are intended to demonstrate the practical application of the technical solution provided by the present invention and should not be construed as limiting its scope of protection. Those skilled in the art may make various modifications, equivalent replacements, and improvements within the spirit and principle of the present invention. The scope of protection of the present invention is defined by the appended claims.

Claims (13)

1. A people counting method based on head recognition, characterized in that the method comprises:
a first step of establishing and updating a background image from the video frame images;
a second step of extracting head detection regions from the video frame images;
a third step of tracking the head detection regions to obtain their motion trajectories;
a fourth step of optimizing the motion trajectories of the head detection regions; and
a fifth step of obtaining the people count from the optimized trajectories.
2. the method for claim 1 is characterized in that, described first step: suppose
Figure 2010106078228100001DEST_PATH_IMAGE002
Represent that k(k is an integer) two field picture, Represent that (wherein the initial value of background image is k frame background image
Figure 2010106078228100001DEST_PATH_IMAGE006
), the more new formula of background image is as follows:
Figure 2010106078228100001DEST_PATH_IMAGE008
Wherein, the horizontal ordinate and the ordinate of x, y difference remarked pixel point.
3. the method for claim 1 is characterized in that, described second step may further comprise the steps:
Step 1021 is done the poor foreground image that obtains with current frame image and background image;
Step 1022 is carried out rim detection to foreground image and is obtained marginal point;
Step 1023 is carried out the hough ballot to each marginal point and is obtained the ballot matrix;
Step 1024 is obtained in the ballot matrix greater than the point of first threshold T1, and generates the ternary number of this point;
Step 1025, calculate the ternary number the gradient inner product and;
Step 1026, according to gradient inner product and obtain the local maximum point, this local maximum point is the head detection point;
Step 1027 is obtained the head detection zone according to the head detection point.
4. The method as claimed in claim 3, wherein in step 1023 a Hough vote is cast around each edge point (the voting formulas appear in the original only as embedded images) and the accumulated votes form a three-dimensional voting matrix, where vn ∈ [9, 36] is an integer, R is a standard radius, and Δ is the head variation range.
5. The method as claimed in claim 3, wherein step 1024 finds the points of the voting matrix exceeding the first threshold T1 and generates a triple for each such point; the expressions and the range of T1 appear in the original only as embedded images.
6. The method of claim 3, wherein step 1025 operates as follows: first, taking the point (a, b) as the centre of a circle and r as the radius, points p_i are sampled on the ellipse; the gradient g_i of each sampled point in the image and the corresponding normal vector n_i are calculated; the gradient inner product sum is then computed from the gradients and normal vectors as S = Σ_i ⟨g_i, n_i⟩.
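A sketch of the gradient inner product sum of step 1025. The central-difference gradient and the circular (rather than elliptical) sampling are assumed discretisations, since the claim's exact formulas survive only as image references in the source:

```python
import math

def gradient_inner_product_sum(image, a, b, r, vn=18):
    """Step 1025: sum of <image gradient, outward normal> over points
    sampled on the circle (a, b, r).

    A head-like blob centred at (a, b) with radius about r yields a
    large sum, because its edge gradients align with the normals.
    """
    def grad(x, y):
        # Central-difference gradient (assumed discretisation).
        gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
        gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
        return gx, gy

    total = 0.0
    for k in range(vn):
        theta = 2.0 * math.pi * k / vn
        nx, ny = math.cos(theta), math.sin(theta)   # outward unit normal
        gx, gy = grad(int(round(a + r * nx)), int(round(b + r * ny)))
        total += gx * nx + gy * ny
    return total
```

Step 1026's local maxima of this sum over the candidate ternary numbers are then the head detection points.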
7. The method of claim 1, characterized in that the fourth step comprises:
Step 1041: deleting motion trajectories tracked into a non-foreground detection area, i.e. if the tracked trajectory point in the current frame image belongs to a non-foreground detection area, that trajectory is deleted;
Step 1042: deleting obviously static motion trajectories, i.e. calculating the average displacement of the trajectory between the current frame image and the previous N frame images; if this average displacement is less than a third threshold T3, the trajectory is deleted, wherein N and T3 can be set according to practical application requirements;
Step 1043: deleting motion trajectories that do not satisfy motion consistency;
Step 1044: merging motion trajectories that coincide or intersect.
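The static-trajectory test of step 1042 can be sketched as follows; interpreting "average displacement" as the net displacement over the last N frames divided by N is an assumption, and N and T3 are application-dependent, as the claim notes:

```python
import math

def is_static(track, N=10, T3=2.0):
    """Step 1042: a track is 'obviously static' when its average
    per-frame displacement over the last N frames falls below T3.

    `track` is one (x, y) position per frame.
    """
    if len(track) <= N:
        return False                     # too short to judge
    (x0, y0), (x1, y1) = track[-1 - N], track[-1]
    return math.hypot(x1 - x0, y1 - y0) / N < T3
```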
8. The method of claim 7, characterized in that step 1043 operates as follows: the speed v and direction θ of the motion trajectory are calculated; if the included angle between V1 and V2 is below a preset consistency threshold (wherein V1 denotes the motion vector from the previous frame to the current frame image, and V2 denotes the motion vector from the current frame to the next frame image), the trajectory is considered to satisfy motion consistency and is kept; otherwise it is considered not to satisfy motion consistency and is deleted.
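The consistency check of step 1043 can be sketched with a cosine criterion on consecutive motion vectors; the cosine form and `cos_thresh` are assumptions, since the claim's exact formula survives only as an image reference:

```python
import math

def motion_consistent(track, cos_thresh=0.5):
    """Step 1043 sketch: a track satisfies motion consistency when
    consecutive motion vectors V1 (previous -> current frame) and
    V2 (current -> next frame) stay roughly aligned.
    """
    for i in range(1, len(track) - 1):
        v1 = (track[i][0] - track[i - 1][0], track[i][1] - track[i - 1][1])
        v2 = (track[i + 1][0] - track[i][0], track[i + 1][1] - track[i][1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue                     # no displacement, no direction
        if (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2) < cos_thresh:
            return False
    return True
```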
9. The method of claim 7, characterized in that step 1044 operates as follows: the Euclidean distance between the trajectory points of two motion trajectories is calculated; if the Euclidean distance is less than a fourth threshold T4 (T4 ∈ [5, 15] and is an integer), the similarity measures S1 and S2 of the two trajectories are calculated, wherein S1 is computed from g1, the gray value of the head detection area of the first trajectory in the current frame image, and F, the head detection area feature updated at the current frame; S2 is computed from g2, the gray value of the head detection area of the second trajectory in the current frame image, and F; F is updated from the average gray values of the head detection area in the preceding and following image frames. If S1 > S2, the second trajectory is deleted; otherwise the first trajectory is deleted.
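The merge decision of step 1044 can be sketched as follows; the absolute-difference similarity is an assumption, since the claim's similarity formulas survive only as image references:

```python
def track_to_delete(gray1, gray2, feature):
    """Step 1044 sketch: compare each track's head-region gray value
    with the updated head detection area feature and return the index
    (1 or 2) of the LESS similar track, which is the one to delete.

    `gray1`, `gray2`: head-region gray values of the two tracks in the
    current frame; `feature`: the running head detection area feature F.
    """
    sim1 = -abs(gray1 - feature)         # higher = more similar
    sim2 = -abs(gray2 - feature)
    return 2 if sim1 > sim2 else 1
```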
10. The method of claim 1, characterized in that the fifth step comprises either or both of the following steps:
Step 1051: counting people at a specified cross-section according to the optimized motion trajectories, i.e. when a motion trajectory intersects the specified cross-section, the intersecting motion trajectories are counted, and their number is the counted number of people;
Step 1052: counting people in a specified area according to the optimized motion trajectories, i.e. counting the motion trajectories within the specified area whose trajectory length exceeds a fifth threshold T5; this number is the counted number of people.
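The cross-section count of step 1051 can be sketched as follows; modelling the specified cross-section as a horizontal line y = y_line is an illustrative assumption:

```python
def count_crossings(tracks, y_line):
    """Step 1051 sketch: count optimized tracks whose motion crosses
    the specified cross-section, here the horizontal line y = y_line.

    `tracks` is a list of trajectories, each a list of (x, y) points.
    """
    count = 0
    for track in tracks:
        ys = [y for _, y in track]
        if min(ys) < y_line <= max(ys):   # track spans the line
            count += 1
    return count
```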
11. A low-density crowd counting device, characterized in that the device comprises:
a background establishing and updating unit, which establishes and updates a background image according to video frame images;
a head detection area extraction unit, which extracts head detection areas from the video frame images;
a motion trajectory acquisition unit, which tracks the head detection areas to obtain the motion trajectories of the head detection areas;
a motion trajectory optimization unit, which optimizes the motion trajectories of the head detection areas; and
a people counting unit, which obtains the number of people according to the optimized motion trajectories.
12. The device of claim 11, characterized in that the head detection area extraction unit comprises:
a foreground image acquisition module, which subtracts the background image from the current frame image to obtain a foreground image;
an edge point acquisition module, which performs edge detection on the foreground image to obtain edge points;
a voting matrix acquisition module, which performs a Hough vote for each edge point to obtain a voting matrix;
a ternary number acquisition module, which obtains the points in the voting matrix greater than the first threshold T1 and generates the ternary numbers of these points;
a gradient inner product sum calculation module, which calculates the gradient inner product sum of each ternary number;
a head detection point acquisition module, which obtains local maximum points from the gradient inner product sums, these local maximum points being the head detection points;
a head detection area acquisition module, which obtains the head detection areas from the head detection points.
13. The device of claim 11, characterized in that the motion trajectory optimization unit comprises:
a non-foreground-area trajectory filtering module, which deletes motion trajectories tracked into a non-foreground detection area;
an obviously-static trajectory filtering module, which deletes obviously static motion trajectories;
a motion-inconsistency trajectory filtering module, which deletes motion trajectories that do not satisfy motion consistency;
a coinciding or intersecting trajectory merging module, which merges motion trajectories that coincide or intersect.
CN 201010607822 2010-12-28 2010-12-28 People counting method and device based on head recognition Active CN102063613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010607822 CN102063613B (en) 2010-12-28 2010-12-28 People counting method and device based on head recognition


Publications (2)

Publication Number Publication Date
CN102063613A true CN102063613A (en) 2011-05-18
CN102063613B CN102063613B (en) 2012-12-05





Legal Events

Date Code Title Description
C06 / PB01: Publication
C10 / SE01: Entry into substantive examination / Entry into force of request for substantive examination
C14 / GR01: Grant of patent or utility model / Patent grant
ASS / C41 / TR01: Succession or assignment of patent right / Transfer of patent right
Owner name: NETPOSA TECHNOLOGIES, LTD.
Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.
Effective date of registration: 20150716
Address after: 100102, Beijing, Chaoyang District, Tong Tung Street, No. 1, Wangjing SOHO tower, two, C, 26 floor
Patentee after: NETPOSA TECHNOLOGIES, Ltd.
Address before: 100048 Beijing city Haidian District Road No. 9 Building No. four Room 501 International Building subject
Patentee before: Beijing ZANB Technology Co.,Ltd.
PP01: Preservation of patent right
Effective date of registration: 20220726
Granted publication date: 20121205