CN102136060A - Method for detecting population density - Google Patents

Method for detecting population density

Info

Publication number
CN102136060A
Authority
CN
China
Prior art keywords
population density
digital video
detection method
density detection
density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100512256A
Other languages
Chinese (zh)
Inventor
赵春水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SUZHOU VISION WISE COMMUNICATION TECHNOLOGY Co Ltd
Original Assignee
SUZHOU VISION WISE COMMUNICATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SUZHOU VISION WISE COMMUNICATION TECHNOLOGY Co Ltd filed Critical SUZHOU VISION WISE COMMUNICATION TECHNOLOGY Co Ltd
Priority to CN2011100512256A
Publication of CN102136060A
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting population density. The method is implemented with a digital camera serving as the sensor and a digital signal processing chip, and comprises the following steps: (1) continuously capturing a fixed monitored area with the digital camera serving as the sensor to form a digital video; (2) extracting corner features from the digital video obtained in step (1) with a gradient-based method; (3) extracting texture features from the digital video obtained in step (1) with the scale-invariant local ternary pattern (SILTP); and (4) estimating the population density of the fixed area by applying the regression relation between the population density and the combination of the corner features obtained in step (2) and the texture features obtained in step (3). The method overcomes the density estimation errors that existing video-analysis-based density detection methods suffer from viewing-angle differences, overly dense groups, illumination changes and the like, and has the advantages of high accuracy and strong robustness.

Description

Method for detecting population density
Technical field
The invention relates to population density detection methods, and in particular to a population density detection method based on video analysis.
Background art
Group (vehicle/crowd) density early warning, including vehicle congestion warning and crowd gathering warning, is of great significance for keeping public transport flowing and for public safety precautions. Manual estimation cannot support applications at scale, so on-line intelligent density detection based on video analysis has become a current research focus. Existing methods for detecting group density by intelligent video analysis fall mainly into target feature analysis methods and density estimation methods.
Target feature analysis methods count individuals on the basis of accurate detection: the target features in the video sequence images are first detected and tracked, the existing targets are identified, and the targets are then counted. These methods work well for scenes in which targets are sparse, but in scenes with dense traffic or dense crowds, such as intersections or subway stations, occlusion and similar problems make it difficult to analyse individual targets accurately, so the targets cannot be counted.
Density estimation methods do not identify and count individuals; instead they analyse the scene as a whole, focusing on the degree of crowding of the group rather than on the exact number of individuals. Existing video-based density estimation techniques include regression on "total foreground pixels versus density", regression on "gray-level co-occurrence matrix versus density", and extended detection with supplementary means such as the wavelet transform. Pixel-based analysis suffers from perspective: the number of pixels is not linearly related to the number of targets, since near targets contribute many pixels and far targets contribute few, so the results often carry large errors when the viewing angle or the group density differs. Unstable illumination also strongly affects density analysis at the pixel level.
In summary, existing density detection methods based on video analysis are poor at resisting viewing-angle differences, variations in the degree of crowding, and illumination changes.
Summary of the invention
The present invention provides a solution to the above problems, namely a population density detection method based on video analysis.
The technical scheme of the present invention provides a population density detection method that is realized with a digital camera serving as the sensor and a digital signal processing chip, and is characterized in that it comprises the following steps:
1) a digital camera serving as the sensor continuously captures a fixed monitored area to form a digital video;
2) corner features are extracted from the image frames of the digital video obtained in step 1);
3) texture features are extracted from the image frames of the digital video obtained in step 1);
4) the population density of the fixed area is estimated by applying the regression relation between the population density and the combination of the corner features obtained in step 2) and the texture features obtained in step 3).
Preferably, the corner features of the digital video image frames in step 2) are extracted with a gradient-based method.
Preferably, the texture features of the digital video image frames in step 3) are extracted with the scale-invariant local ternary pattern (SILTP).
Preferably, the regression relation is obtained by learning from a large number of samples with the support vector regression (SVR) algorithm.
Preferably, when the population density of the fixed area is estimated in step 4), different regression response functions are adopted according to the distance of the regions in the digital video from the camera.
The population density detection method of the present invention effectively solves the density estimation errors that existing video-analysis-based density detection methods suffer from viewing-angle differences, overly dense groups, illumination changes and the like, and has the advantages of high accuracy and strong robustness.
Description of drawings
Fig. 1 is a block diagram of the algorithm principle of the present invention.
Embodiment
The specific embodiments of the present invention are described in further detail below.
As shown in Fig. 1, the population density detection method of the present invention first performs video acquisition; the collected video information then passes in turn through corner feature extraction, edge texture feature extraction and support vector regression (SVR) estimation, and finally the detection result is obtained.
The scheme of the present invention is realized with a digital camera serving as the sensor, and the density is estimated from the regression relation between the coarseness of the texture and the group (vehicle/crowd) density. The embodiments of the process are described in detail below with reference to the algorithm principle diagram of Fig. 1.
1. A digital camera serving as the sensor continuously captures each group-gathering scene as the signal source to form a digital video stream; the digital camera uses an image sensor chip such as a CCD or CMOS device;
2. From the digital video of each group-gathering scene described in step 1, video frames of typical significance are manually selected as training samples, and each sample is labelled with its population density value;
3. First, the corner features of the samples are extracted with a gradient-based method (a minimal code sketch follows the steps below).
A corner is a point where the brightness of a two-dimensional image changes sharply, or a point of maximum curvature on an edge curve of the image. Corners preserve the important characteristics of an image while greatly reducing the amount of data, so the information they carry is highly concentrated and computation is sped up considerably; they play an important role in computer vision tasks such as 3-D scene reconstruction, motion estimation, target tracking, target recognition, image registration and matching. Gradient-based methods judge whether a corner exists from the curvature computed along the edges, and the corner score depends not only on the edge strength but also on the rate of change of the edge direction. The steps are as follows:
(1) Compute the image gradients in the horizontal and vertical directions and their products, which give the values of the four elements of the matrix M:
M = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
where I_x^2 = I_x \cdot I_x and I_y^2 = I_y \cdot I_y.
(2) Apply Gaussian filtering to the image to obtain the smoothed M.
The discrete two-dimensional zero-mean Gaussian function is
Gauss = \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)
(3) Compute the interest value, i.e. the R value, of each pixel of the original image:
R = \{ I_x^2 \cdot I_y^2 - (I_x I_y)^2 \} - k \{ I_x^2 + I_y^2 \}^2
(4) Select local extrema. The feature points are the pixels whose interest values are maximal within a local neighbourhood.
(5) Set a threshold and select a certain number of corners.
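The gradient-based corner extraction of steps (1)-(5) can be sketched as follows. This is a minimal illustration in Python with NumPy/SciPy rather than the patent's own implementation; the parameter values k = 0.04, sigma = 1.5, the 3x3 neighbourhood used for the local maxima and the thresholds are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(gray, k=0.04, sigma=1.5, rel_threshold=0.01, max_corners=500):
    """Gradient-based corner extraction following steps (1)-(5) above.
    gray is a 2-D grayscale image array; all parameter values are illustrative."""
    # (1) gradients in the horizontal and vertical directions and their products
    Iy, Ix = np.gradient(gray.astype(np.float64))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # (2) Gaussian filtering of the elements of M
    Sxx = gaussian_filter(Ixx, sigma)
    Syy = gaussian_filter(Iyy, sigma)
    Sxy = gaussian_filter(Ixy, sigma)
    # (3) interest value R = det(M) - k * trace(M)^2 for every pixel
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    # (4) keep only local maxima of the interest value
    local_max = (R == maximum_filter(R, size=3))
    # (5) threshold and keep at most max_corners of the strongest corners
    candidates = np.argwhere(local_max & (R > rel_threshold * R.max()))
    order = np.argsort(R[candidates[:, 0], candidates[:, 1]])[::-1]
    return candidates[order[:max_corners]]   # (row, col) corner coordinates

Calling harris_corners(frame) on a grayscale frame returns the coordinates of the selected corners; how they are aggregated into the corner feature of step 5 (for example, a simple corner count per region, as assumed in the sketch after step 5) is not fixed by the patent.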
4. Next, the texture features of the samples are extracted with the scale-invariant local ternary pattern (SILTP);
For any pixel location (x_c, y_c), the SILTP code is
SILTP_{N,R}^{\tau}(x_c, y_c) = \bigoplus_{k=0}^{N-1} s_{\tau}(I_c, I_k)    (1)
where I_c is the gray value of the centre pixel, the I_k are the gray values of its N equally spaced neighbouring pixels on a circle of radius R, \bigoplus denotes concatenation of the binary strings, \tau is the scale factor of the comparison range, and s_{\tau} is a piecewise function defined as
s_{\tau}(I_c, I_k) = \begin{cases} 01, & \text{if } I_k > (1+\tau) I_c \\ 10, & \text{if } I_k < (1-\tau) I_c \\ 00, & \text{otherwise} \end{cases}    (2)
The SILTP operator is highly resistant to local noise and is particularly robust to shadow regions, sudden illumination changes and the like; a minimal code sketch of the operator is given below.
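A minimal sketch of the SILTP encoding of equations (1) and (2), again in Python/NumPy. It assumes N = 4 neighbours at radius R = 1 (above, below, left, right) and \tau = 0.05, and summarizes the texture of a frame as a normalized histogram of its SILTP codes; these concrete choices are assumptions for illustration and are not fixed by the patent.

import numpy as np

def siltp_codes(gray, tau=0.05, R=1):
    """SILTP codes per equations (1)-(2) with N = 4 neighbours on a circle of
    radius R; each neighbour contributes the 2-bit code 01, 10 or 00."""
    g = gray.astype(np.float64)
    center = g[R:-R, R:-R]
    neighbours = [g[:-2 * R, R:-R],   # above
                  g[2 * R:, R:-R],    # below
                  g[R:-R, :-2 * R],   # left
                  g[R:-R, 2 * R:]]    # right
    codes = np.zeros(center.shape, dtype=np.int64)
    for nb in neighbours:
        upper = (nb > (1.0 + tau) * center).astype(np.int64)  # s_tau = 01
        lower = (nb < (1.0 - tau) * center).astype(np.int64)  # s_tau = 10
        codes = (codes << 2) | upper | (lower << 1)           # 00 otherwise
    return codes   # one 8-bit code per interior pixel

def siltp_histogram(gray, tau=0.05, R=1, n_bins=256):
    """Texture feature of one frame: the normalized histogram of its SILTP codes."""
    codes = siltp_codes(gray, tau, R)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

With four 2-bit neighbour codes each SILTP code occupies 8 bits, so the histogram has 256 bins; this per-frame histogram is the texture feature that is combined with the corner feature in step 5.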
5. Finally, the regression relation between the sample feature vectors and the group (vehicle/crowd) density is learned with the support vector regression (SVR) algorithm, where the sample feature vector combines the corner features extracted in step 3 and the texture features extracted in step 4.
Support vector machines are a recent tool for solving machine-learning problems by means of optimization methods. A support vector machine applied to regression problems is called a support vector regression machine, and the corresponding algorithm is the support vector regression (SVR) algorithm.
When the SVR algorithm realizes the regression estimation function, it has three characteristics: 1) the regression is estimated with linear functions defined in a high-dimensional space; 2) the regression estimate is obtained by minimization; and 3) the risk functional employed consists of the empirical error and a regularization term derived from structural risk minimization.
Given a set of n sample pairs H = {(x_i, y_i)}, i = 1, ..., n, where x_i is an input vector, y_i is the expected value and n is the total number of data points, the SVM uses a nonlinear mapping \phi to map the data x into a high-dimensional feature space and performs a linear approximation in that space. From statistical learning theory, this function has the following form:
f(x) = \omega \cdot \phi(x) + b
The regression estimation problem is defined as the problem of minimizing a loss functional, and the optimal regression function is obtained by minimizing the regularized risk functional under certain constraints:
\frac{1}{2}\|\omega\|^2 + C \frac{1}{l} \sum_{i=1}^{l} L_{\varepsilon}(y_i, f(x_i))    (3)
where the first term \frac{1}{2}\|\omega\|^2 makes the function smoother and is called the regularization term; the second term is the empirical risk functional, which can be determined by different loss functions; and the constant C > 0 is the penalty factor that controls how strongly samples whose error exceeds the bound are penalized. The \varepsilon-insensitive loss function adopted here is
L_{\varepsilon}(y_i, f(x_i)) = \max(|y_i - f(x_i)| - \varepsilon, 0)
For L_{\varepsilon}(y_i, f(x_i)), if the absolute value of the deviation between the estimated output f(x_i) and the expected output y_i is less than \varepsilon, the loss equals 0; otherwise it equals the absolute value of the deviation minus \varepsilon. By introducing the non-negative slack variables \xi_i and \xi_i^*, the above problem can be converted into:
\min \; \frac{1}{2}\|\omega\|^2 + C \frac{1}{l} \sum_{i=1}^{l} (\xi_i + \xi_i^*)
s.t. \; y_i - \omega \cdot \phi(x_i) - b \le \varepsilon + \xi_i, \quad \omega \cdot \phi(x_i) + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0
Introducing the Lagrangian function, we finally obtain
\omega = \sum_{i=1}^{l} (a_i - a_i^*) \phi(x_i)
and therefore
f(x) = \sum_{i=1}^{l} (a_i - a_i^*) \phi(x_i) \cdot \phi(x) + b.
Introducing a kernel function K(x_i, x_j), the above formula becomes
f(x) = \sum_{i=1}^{l} (a_i - a_i^*) K(x_i, x) + b.
Here K(x_i, x_j) is the inner product of the vectors x_i and x_j in the feature space, i.e. K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j). With a kernel function all the computations can be carried out directly in the input space, so the choice of kernel function is extremely important for the support vector machine.
When the regression relation between the sample feature vectors and the density is learned with the SVR algorithm, the perspective effect of the camera is taken into account: near objects appear larger than distant objects in the picture, so different regression response functions are adopted in different regions, which makes the group (vehicle/crowd) density estimation more accurate (a training sketch under stated assumptions follows below). For the training of the crowd-gathering detector, the training set contained 1000 images taken from the crowd areas of subway and railway stations, and the test set contained 500 images covering crowd areas in subways and squares. In tests, the population density detection method of the present invention achieved a correct classification rate of 89.63% on the test set.
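The feature combination and SVR regression of step 5, together with the distance-dependent regression response functions, can be sketched as follows under stated assumptions: the sketch reuses harris_corners() and siltp_histogram() from the code above, encodes the corner feature simply as the corner count, uses scikit-learn's SVR with an RBF kernel and illustrative hyper-parameters C = 10 and epsilon = 0.1, and splits each frame into horizontal bands as a stand-in for the near/far perspective zones. The frame lists and per-zone density labels are hypothetical placeholders, not data from the patent.

import numpy as np
from sklearn.svm import SVR

def zone_feature(zone_gray):
    """Feature vector of one perspective zone: corner count plus SILTP histogram
    (using the corner count as the corner feature is an assumption)."""
    return np.hstack([float(len(harris_corners(zone_gray))),
                      siltp_histogram(zone_gray)])

def train_zone_regressors(frames, zone_densities, n_zones=3):
    """One epsilon-insensitive SVR response function per horizontal band, so that
    near and far regions get separate regression relations.
    frames: list of grayscale training frames (hypothetical placeholder);
    zone_densities[i][z]: labelled density of zone z in frame i (assumed given)."""
    models = []
    for z in range(n_zones):
        X = np.vstack([zone_feature(np.array_split(f, n_zones, axis=0)[z])
                       for f in frames])
        y = np.asarray([d[z] for d in zone_densities], dtype=np.float64)
        svr = SVR(kernel='rbf', C=10.0, epsilon=0.1)   # illustrative hyper-parameters
        models.append(svr.fit(X, y))
    return models

def estimate_density(models, gray, n_zones=3):
    """Density estimate for a new frame: per-zone SVR predictions summed into an
    overall value (the aggregation rule is an assumption, not stated in the patent)."""
    zones = np.array_split(gray, n_zones, axis=0)
    return float(sum(m.predict(zone_feature(z).reshape(1, -1))[0]
                     for m, z in zip(models, zones)))

Splitting the frame into fixed horizontal bands is the simplest stand-in for "different regression response functions according to distance"; a perspective-calibrated zone map would serve the same purpose.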
The above embodiment is only one of the embodiments of the present invention; it is described in a relatively specific and detailed manner, but it should not therefore be construed as limiting the scope of the claims of the present invention. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the present patent shall be determined by the appended claims.

Claims (5)

1. A population density detection method, realized with a digital camera serving as the sensor and a digital signal processing chip, characterized in that the method comprises the following steps:
1) a digital camera serving as the sensor continuously captures a fixed monitored area to form a digital video;
2) corner features are extracted from the image frames of the digital video obtained in step 1);
3) texture features are extracted from the image frames of the digital video obtained in step 1);
4) the population density of the fixed area is estimated by applying the regression relation between the population density and the combination of the corner features obtained in step 2) and the texture features obtained in step 3).
2. The population density detection method according to claim 1, characterized in that the corner features of the digital video image frames in step 2) are extracted with a gradient-based method.
3. The population density detection method according to claim 1, characterized in that the texture features of the digital video image frames in step 3) are extracted with the scale-invariant local ternary pattern (SILTP).
4. The population density detection method according to claim 1, characterized in that the regression relation is obtained by learning from a large number of samples with the support vector regression (SVR) algorithm.
5. The population density detection method according to claim 1, characterized in that, when the population density of the fixed area is estimated in step 4), different regression response functions are adopted according to the distance of the regions in the digital video from the camera.
CN2011100512256A 2011-03-03 2011-03-03 Method for detecting population density Pending CN102136060A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100512256A CN102136060A (en) 2011-03-03 2011-03-03 Method for detecting population density

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100512256A CN102136060A (en) 2011-03-03 2011-03-03 Method for detecting population density

Publications (1)

Publication Number Publication Date
CN102136060A true CN102136060A (en) 2011-07-27

Family

ID=44295842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100512256A Pending CN102136060A (en) 2011-03-03 2011-03-03 Method for detecting population density

Country Status (1)

Country Link
CN (1) CN102136060A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739569A (en) * 2009-12-17 2010-06-16 北京中星微电子有限公司 Crowd density estimation method, device and monitoring system
CN101840507A (en) * 2010-04-09 2010-09-22 江苏东大金智建筑智能化***工程有限公司 Target tracking method based on character feature invariant and graph theory clustering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHENGCAI LIAO ET AL.: "Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes", IEEE, 31 December 2010 (2010-12-31), pages 1302 *
ZHAO WANJIN ET AL.: "An adaptive Harris corner detection algorithm", Computer Engineering, vol. 34, no. 10, 31 May 2008 (2008-05-31), pages 212 - 214 *
ZHAO QING ET AL.: "Application of chaos-support vector machine in dam safety monitoring and prediction", Journal of Geodesy and Geodynamics, vol. 28, no. 02, 30 April 2008 (2008-04-30), pages 115 - 119 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390172A (en) * 2013-07-24 2013-11-13 佳都新太科技股份有限公司 Estimating method of crowd density under high-density scene
CN103971385A (en) * 2014-05-27 2014-08-06 重庆大学 Detecting method for moving object in video
CN103971385B (en) * 2014-05-27 2016-08-24 重庆大学 The detection method of moving objects in video
CN107452212A (en) * 2016-05-30 2017-12-08 杨高林 Crossing signals lamp control method and its system
CN110084112A (en) * 2019-03-20 2019-08-02 太原理工大学 A kind of traffic congestion judgment method based on image procossing
CN110084112B (en) * 2019-03-20 2022-09-20 太原理工大学 Traffic jam judging method based on image processing
CN110751620A (en) * 2019-08-28 2020-02-04 宁波海上鲜信息技术有限公司 Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN110751620B (en) * 2019-08-28 2021-03-16 宁波海上鲜信息技术有限公司 Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN111767881A (en) * 2020-07-06 2020-10-13 中兴飞流信息科技有限公司 Self-adaptive crowd density estimation device based on AI technology

Similar Documents

Publication Publication Date Title
CN100565559C (en) Image text location method and device based on connected component and support vector machine
CN104978567B (en) Vehicle checking method based on scene classification
CN102184550B (en) Mobile platform ground movement object detection method
Cao et al. A coarse-to-fine weakly supervised learning method for green plastic cover segmentation using high-resolution remote sensing images
CN104835182A (en) Method for realizing dynamic object real-time tracking by using camera
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN102609720B (en) Pedestrian detection method based on position correction model
CN104077577A (en) Trademark detection method based on convolutional neural network
CN108921120B (en) Cigarette identification method suitable for wide retail scene
CN102136060A (en) Method for detecting population density
CN102496001A (en) Method of video monitor object automatic detection and system thereof
CN107767400A (en) Remote sensing images sequence moving target detection method based on stratification significance analysis
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN104835175A (en) Visual attention mechanism-based method for detecting target in nuclear environment
CN108256462A (en) A kind of demographic method in market monitor video
CN104517095A (en) Head division method based on depth image
CN102663777A (en) Target tracking method and system based on multi-view video
CN106991686A (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN104463248A (en) High-resolution remote sensing image airplane detecting method based on high-level feature extraction of depth boltzmann machine
CN102663778B (en) A kind of method for tracking target based on multi-view point video and system
CN105405138A (en) Water surface target tracking method based on saliency detection
Ghanta et al. Automatic road surface defect detection from grayscale images
CN106023249A (en) Moving object detection method based on local binary similarity pattern
CN106056078B (en) Crowd density estimation method based on multi-feature regression type ensemble learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110727