CN109101888B - Visitor flow monitoring and early warning method - Google Patents


Info

Publication number
CN109101888B
CN109101888B · Application CN201810763293.7A
Authority
CN
China
Prior art keywords
model
gaussian
value
pixel value
distribution
Prior art date
Legal status
Active
Application number
CN201810763293.7A
Other languages
Chinese (zh)
Other versions
CN109101888A (en)
Inventor
刘璎瑛
丁绍刚
赵维铎
许凯
屈鹏程
周源赣
Current Assignee
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date
Filing date
Publication date
Application filed by Nanjing Agricultural University
Priority to CN201810763293.7A
Publication of CN109101888A
Application granted
Publication of CN109101888B
Legal status: Active; anticipated expiration pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
                • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
                • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/045 Combinations of networks
                • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visitor flow monitoring and early warning method in the field of intelligent tourism that counts the number of resident visitors in real time, supplementing scenic-spot safety early warning information with residence data. The method comprises the following steps. A camera acquires video of scenic spots with dense pedestrian flow. An illumination compensation method homogenizes video collected under different lighting conditions, and visitor targets are extracted with a Gaussian mixture model. High-density crowds are detected by setting a threshold for each scenic spot on the area ratio of segmented visitor pixels within the region of interest (ROI); when the threshold is exceeded, the crowd is judged high-density and its residence time is tracked, and once the preset time is reached a high-density visitor output program starts monitoring the flow. Visitor counting uses a head detection technique based on a deep learning network that accurately recognizes front, side and back head features, enabling accurate detection of high-density crowds. If the count exceeds a preset threshold, an early warning is issued.

Description

Visitor flow monitoring and early warning method
Technical Field
The invention relates to the field of intelligent tourism, in particular to a visitor flow monitoring and early warning method.
Background
With the rising living standard in China and the development of the tourism industry, visitor numbers at scenic spots have grown rapidly. During national statutory holidays in particular, famous scenic spots become overcrowded, which reduces visiting comfort and creates safety hazards. At present there are several methods for counting passenger flow in scenic spots. Firstly, ticketing-based statistics: suitable for closed scenic spots, it requires visitors to hold a specific medium, covers a limited area, and is unsuitable for open scenic spots. Secondly, video-surveillance-based statistics: visitors are identified through face recognition to count the floating population; this technique is strongly affected by weather and light, and its accuracy degrades under natural conditions such as rain, heavy fog and darkness. Thirdly, mobile internet statistics: mobile phone signaling data provides real-time position information, giving an accurate picture of visitor numbers, spatial distribution and origin distribution, and supports real-time dynamic monitoring; however, it requires mobile network support and involves user privacy, which greatly limits practical application. Each technique has advantages and disadvantages, but all aim at counting the number of visitors in a scenic spot; there has been no related research on monitoring the residence amount of visitors.
The residence amount is the number of visitors staying at a scenic spot over a given period. It reflects both the spot's attractiveness and how long visitors linger there, and it differs from conventional visitor flow monitoring. Crowding at a scenic spot arises mainly when a large number of visitors arrives in a short time and stays for a long time; if safety early warning relies only on real-time visitor counts, the warning mechanism is neither comprehensive nor accurate enough.
The prior art therefore lacks a statistical method for the visitor residence amount, one that can count residence in real time and supplement the safety early warning information of a scenic spot with residence data.
Disclosure of Invention
The invention provides a visitor flow monitoring and early warning method that applies deep learning to high-density visitor flow detection, proposes a head-model detection method based on transfer learning, monitors the residence amount of visitors in a scenic spot, and achieves accurate detection and early warning of high-density visitor flow in the scenic spot.
In order to achieve the purpose, the invention adopts the following technical scheme:
a visitor flow monitoring and early warning method adopts a visitor flow monitoring and early warning system to operate. A visitor flow monitoring and early warning system comprises: the monitoring system comprises a camera, a network video recorder, a monitoring host and an alarm box, wherein the camera is connected with the network video recorder, the network video recorder is connected with the monitoring host, the monitoring host is connected with the alarm box, and the monitoring host triggers the alarm box to alarm when a fault is found.
A visitor flow monitoring and early warning method comprises the following steps:
and S1, acquiring the camera video of each sight spot camera within one day, and performing foreground extraction on the camera video by adopting a Gaussian model detection algorithm based on illumination compensation to obtain foreground output.
S2. Calculate the area ratio of the foreground output within the region of interest (ROI), track high-density crowds against the scenic spot's crowd density threshold, and output a high-density crowd picture when the preset time is reached. In the first frame of the camera video, select an ROI as the region to monitor; extract the foreground as in S1, label connected components of the extracted foreground, and take the largest connected component in the foreground frame as the foreground block area. Divide the foreground block area in the current frame by the area of the framed ROI region and check whether the ratio exceeds the early warning value; track any block area that does, and if the ratio still exceeds the early warning value after the preset time, the system judges the crowd high-density.
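As an illustration, the S2 area-ratio test can be sketched as follows. The ROI coordinates, the 4-connectivity rule and the threshold value are assumptions, since the patent leaves the per-spot early warning value site-specific:

```python
import numpy as np
from collections import deque

def largest_blob_area(mask):
    """Area (pixel count) of the largest 4-connected foreground blob,
    i.e. the maximum connected domain described in S2."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                area, q = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # breadth-first flood fill over the blob
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, area)
    return best

def is_high_density(mask, roi, ratio_threshold):
    """Compare the largest-blob / ROI-area ratio against the (assumed,
    site-specific) early warning value. roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    ratio = largest_blob_area(mask[y0:y1, x0:x1]) / ((y1 - y0) * (x1 - x0))
    return ratio > ratio_threshold
```

In practice the tracking step would re-evaluate `is_high_density` on each frame and only escalate when it stays true for the preset time.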
S3. Using transfer learning, train a head detection model on a deep learning network with labeled high-density crowd pictures to obtain a trained model. Training is completed offline; the trained model can then be loaded for online detection and output. In this deep-learning-based head model detection method, the head detection model adopts a residual network structure, i.e. a deep convolutional neural network with residual blocks, comprising an input layer, convolutional layers, pooling layers, fully connected layers and an output layer. Pictures enter at the input layer, features are extracted by the convolutional layers and selected by dimensionality reduction in the pooling layers, and the effective features are combined by the fully connected layers to achieve head detection at the output layer. Several deep-learning detection algorithms can be chosen according to actual needs; the following describes training and detection of the head model using the R-FCN algorithm as an example:
(1) Label the heads in scenic-spot high-density crowd pictures with the open-source labeling tool Labeling, input the labeled head pictures, and generate feature maps of the pictures with an FCN (Fully Convolutional Network);
(2) Feed the computed feature maps into an RPN (Region Proposal Network) to generate ROIs (Regions of Interest); then feed the generated ROIs into a position-sensitive ROI pooling layer, which predicts target regions for the subnet to learn;
(3) Using the features extracted by the FCN and the candidate regions output by the RPN, the ROI subnet back-propagates the error between the predicted targets and the labeled targets, computes the training loss, and drives the loss to its attainable minimum over multiple iterations to complete classification and localization of head regions.
(4) After a certain number of training iterations, judge from the total-loss curve whether the network weights are optimal, yielding a detection model that can determine heads and their positions. Run head detection on a selected test set with the trained model and evaluate it by accuracy and false detection rate.
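The accuracy and false-detection-rate check in step (4) might be computed as below; the exact metric definitions are assumptions, since the patent names the metrics but does not define them:

```python
def detection_metrics(true_heads, detections, false_positives):
    """Evaluate a head detector on a test set.

    Assumed definitions (the patent names the metrics only):
      accuracy             = correctly detected heads / ground-truth heads
      false detection rate = false positives / total detections
    """
    correct = detections - false_positives
    accuracy = correct / true_heads if true_heads else 0.0
    false_detection_rate = false_positives / detections if detections else 0.0
    return accuracy, false_detection_rate
```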
S4. Input the output high-density crowd pictures into the trained model to detect the number of visitors; the model outputs an alarm signal when the count exceeds a threshold. Once the preceding stage has issued a high-density crowd residence warning, the visitor counting algorithm starts: the trained head detection algorithm detects and counts the high-density crowd, and an early warning is issued when the count exceeds the preset value.
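A minimal sketch of the S4 alarm logic, assuming the "preset value" and "preset time" are expressed as a head-count capacity and a number of consecutive frames (both hypothetical parameters the patent leaves unspecified):

```python
class FlowMonitor:
    """Raise the early warning when the detected head count stays above
    `capacity` for `hold_frames` consecutive frames (assumed semantics)."""

    def __init__(self, capacity, hold_frames):
        self.capacity = capacity
        self.hold_frames = hold_frames
        self.streak = 0  # consecutive over-capacity frames seen so far

    def feed(self, head_boxes):
        """head_boxes: detections for one frame; returns True to alarm."""
        if len(head_boxes) > self.capacity:
            self.streak += 1
        else:
            self.streak = 0
        return self.streak >= self.hold_frames
```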
Further, in S1, the illumination-compensated Gaussian model detection algorithm comprises: perform single-channel brightness equalization on the current frame of the camera video, construct a single-channel global difference matrix by brightness interpolation, and apply brightness enhancement to the three-channel image to obtain an illumination-compensated video;
then perform foreground extraction on the compensated video with a Gaussian mixture model and complete the foreground targets with morphological operations to obtain the foreground output.
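A simplified sketch of the illumination-compensation idea. The patent builds a per-pixel global difference matrix by interpolation; this sketch assumes a uniform difference toward a target luminance of 128, which is a simplification for illustration:

```python
import numpy as np

def illumination_compensate(img_bgr, target_mean=128.0):
    """Shift all three channels so the mean luminance reaches target_mean.

    target_mean and the uniform (non-interpolated) difference matrix are
    assumptions; the patent's matrix varies per pixel.
    """
    img = img_bgr.astype(np.float32)
    # single-channel luminance (ITU-R BT.601 weights, B/G/R channel order)
    luma = 0.114 * img[..., 0] + 0.587 * img[..., 1] + 0.299 * img[..., 2]
    diff = target_mean - luma.mean()   # global difference (uniform here)
    out = np.clip(img + diff, 0, 255)  # brighten/darken all channels
    return out.astype(np.uint8)
```

Frames compensated this way are then fed to the Gaussian mixture model for foreground extraction.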
Further, the Gaussian mixture model method comprises the following steps:
SS1. Initialize the Gaussian mixture model: compute the mean $\mu_0$ and variance $\sigma_0^2$ of each gray pixel of the video sequence over the period $T$, and use them to initialize the parameters of $k$ Gaussian models, $k$ a positive integer. $\mu_0$ and $\sigma_0^2$ are calculated as

$$\mu_0 = \frac{1}{T}\sum_{t=1}^{T} I_t$$

$$\sigma_0^2 = \frac{1}{T}\sum_{t=1}^{T} (I_t - \mu_0)^2$$

where $I_t$ is the pixel value at time $t$, $t = 1, 2, \ldots, T$;
SS2. Compare each new pixel value $I_t$ with the $k$ Gaussian models until a matching pixel-value distribution model is found; a match means $I_t$ deviates from the mean of the $k$-th model by at most $2.5\sigma$, i.e.

$$|I_t - \mu_{k,t-1}| \le 2.5\,\sigma_{k,t-1}$$

where $\mu_{k,t-1}$ and $\sigma_{k,t-1}$ are the mean and standard deviation of the $k$-th Gaussian model at time $t-1$;
SS3. If the matched pixel value distribution model satisfies the background condition, mark the corresponding pixel as background; otherwise mark it as foreground;
SS4. If the new pixel value $I_t$ matches one or more of the $k$ Gaussian models, the current pixel value fits those distributions and their weights should be increased appropriately. The mean, variance and weight are updated as

$$\mu_{k,t} = (1-\alpha)\,\mu_{k,t-1} + \alpha I_t$$

$$\sigma_{k,t}^2 = (1-\alpha)\,\sigma_{k,t-1}^2 + \alpha\,(I_t - \mu_{k,t})^2$$

$$\omega_{k,t} = (1-\beta)\,\omega_{k,t-1} + \beta\theta$$

where $\omega_{k,t}$ and $\omega_{k,t-1}$ are the weights of the $k$-th Gaussian distribution at times $t$ and $t-1$, $\mu_{k,t}$ and $\sigma_{k,t}^2$ are its mean and variance at time $t$, $\theta$ is a matching parameter ($\theta = 1$ for a distribution the new pixel value matches and $0$ otherwise), $\alpha$ is the parameter update rate governing the background adaptation speed, and $\beta$ is the learning rate;
SS5. If SS2 finds no Gaussian model matching the new pixel value $I_t$, replace the Gaussian distribution with the minimum weight: set its mean to the current pixel value, its standard deviation to a large initial value, and its weight to a small value;
SS6. Sort the Gaussian models by their $\omega_{k,t}$ values in descending order, so that models with large weight and small standard deviation come first, yielding an ordered sequence of Gaussian models;
SS7. Mark the first $B$ Gaussian distribution models of the sequence as background, where $B$ satisfies the formula below; the parameter $T$ is a set threshold representing the proportion of the background, $0.5 \le T \le 1$, and $B$ is a positive integer:

$$B = \arg\min_{b}\left(\sum_{k=1}^{b} \omega_k > T\right)$$
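The SS1–SS7 procedure can be sketched for a single gray pixel as follows. The values of $k$, $\alpha$, $\beta$ and $T$ are hyper-parameters the patent does not fix, and updating only the first matching mode (with the replacement variance and weight in SS5) is a simplifying assumption:

```python
import numpy as np

class PixelGMM:
    """Mixture-of-Gaussians background model for one gray pixel (SS1-SS7).

    k, alpha, beta and T are assumed hyper-parameters; only the first
    matching mode is updated, a common simplification of SS4.
    """

    def __init__(self, init_values, k=3, alpha=0.01, beta=0.01, T=0.7):
        self.k, self.alpha, self.beta, self.T = k, alpha, beta, T
        mu0 = float(np.mean(init_values))         # SS1: mu_0 over period T
        var0 = float(np.var(init_values)) + 1e-6  # SS1: sigma_0^2 (kept > 0)
        self.mu = np.full(k, mu0)
        self.var = np.full(k, var0)
        self.w = np.full(k, 1.0 / k)

    def update(self, I):
        """Feed one new pixel value I_t; return True if it is background."""
        match = np.abs(I - self.mu) <= 2.5 * np.sqrt(self.var)  # SS2
        if match.any():
            m = int(np.argmax(match))             # first matching mode
            # SS4: update matched mode; theta = 1 for it, 0 for the rest
            self.mu[m] = (1 - self.alpha) * self.mu[m] + self.alpha * I
            self.var[m] = (1 - self.alpha) * self.var[m] + self.alpha * (I - self.mu[m]) ** 2
            theta = np.zeros(self.k)
            theta[m] = 1.0
            self.w = (1 - self.beta) * self.w + self.beta * theta
        else:
            # SS5: replace min-weight mode (large variance, small weight assumed)
            m = int(np.argmin(self.w))
            self.mu[m], self.var[m], self.w[m] = I, 900.0, 0.05
        self.w /= self.w.sum()
        # SS6: large weight and small standard deviation first
        order = np.argsort(-self.w / np.sqrt(self.var))
        self.mu, self.var, self.w = self.mu[order], self.var[order], self.w[order]
        # SS7: first B modes whose cumulative weight exceeds T are background
        B = int(np.searchsorted(np.cumsum(self.w), self.T)) + 1
        # SS3: background if the pixel matched one of the first B modes
        return bool(match[order][:B].any()) if match.any() else False
```

A full foreground extractor would run one such model per pixel (vectorized over the image) and pass the resulting mask to the morphological post-processing step.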
The beneficial effects of the invention are:
the invention improves the current face detection technology, starts from a detection object and expands the face detection range to the whole human head. The human head detection model based on deep learning is utilized to learn a large number of data sets through the neural network, the human head detection model based on deep learning can improve the multi-angle of a target and the detection capability under shielding, the algorithm adaptability is greatly improved, and the performance and performance of the target detection algorithm on individual detection are improved, so that the staying quantity of tourists in a scenic spot is monitored in real time, and accurate detection and early warning of high-density pedestrian volume in the scenic spot are realized.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a block diagram of a hybrid Gaussian video foreground extraction algorithm based on illumination compensation;
FIG. 3 is a schematic illustration of color image illumination compensation;
FIG. 4 is a diagram of an R-FCN network architecture;
FIG. 5 is a block diagram of a passenger statistics routing based on a head detection model;
FIG. 6 is a block diagram of a scenic spot pedestrian volume counting and early warning system based on deep learning and head detection models.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention, the present invention will be described in further detail with reference to specific embodiments.
This embodiment provides a visitor flow monitoring and early warning method that runs on a visitor flow monitoring and early warning system comprising a camera, a network video recorder, a monitoring host and an alarm box. The camera is connected to the network video recorder, the recorder to the monitoring host, and the host to the alarm box; when the host detects an abnormal condition it triggers the alarm box.
A visitor flow monitoring and early warning method is shown in a flow chart of a method of the visitor flow monitoring and early warning method in figure 1 and comprises the following steps:
S1. Collect video from each scenic-spot camera over one day and perform foreground extraction on it with the illumination-compensated Gaussian model detection algorithm to obtain the foreground output; the process is shown in FIG. 2.
The illumination-compensated Gaussian model detection algorithm, shown in FIG. 3, comprises: perform single-channel brightness equalization on the current frame of the camera video, construct a single-channel global difference matrix by brightness interpolation, and apply brightness enhancement to the three-channel image to obtain an illumination-compensated video;
then perform foreground extraction on the compensated video with a Gaussian mixture model and complete the foreground targets with morphological operations to obtain the foreground output.
The Gaussian mixture model method comprises the following steps:
SS1. Initialize the Gaussian mixture model: compute the mean $\mu_0$ and variance $\sigma_0^2$ of each gray pixel of the video sequence over the period $T$, and use them to initialize the parameters of $k$ Gaussian models, $k$ a positive integer. $\mu_0$ and $\sigma_0^2$ are calculated as

$$\mu_0 = \frac{1}{T}\sum_{t=1}^{T} I_t$$

$$\sigma_0^2 = \frac{1}{T}\sum_{t=1}^{T} (I_t - \mu_0)^2$$

where $I_t$ is the pixel value at time $t$, $t = 1, 2, \ldots, T$;
SS2. Compare each new pixel value $I_t$ with the $k$ Gaussian models until a matching pixel-value distribution model is found; a match means $I_t$ deviates from the mean of the $k$-th model by at most $2.5\sigma$, i.e.

$$|I_t - \mu_{k,t-1}| \le 2.5\,\sigma_{k,t-1}$$

where $\mu_{k,t-1}$ and $\sigma_{k,t-1}$ are the mean and standard deviation of the $k$-th Gaussian model at time $t-1$;
SS3. If the matched pixel value distribution model satisfies the background condition, mark the corresponding pixel as background; otherwise mark it as foreground;
SS4. If the new pixel value $I_t$ matches one or more of the $k$ Gaussian models, the current pixel value fits those distributions and their weights should be increased appropriately. The mean, variance and weight are updated as

$$\mu_{k,t} = (1-\alpha)\,\mu_{k,t-1} + \alpha I_t$$

$$\sigma_{k,t}^2 = (1-\alpha)\,\sigma_{k,t-1}^2 + \alpha\,(I_t - \mu_{k,t})^2$$

$$\omega_{k,t} = (1-\beta)\,\omega_{k,t-1} + \beta\theta$$

where $\omega_{k,t}$ and $\omega_{k,t-1}$ are the weights of the $k$-th Gaussian distribution at times $t$ and $t-1$, $\mu_{k,t}$ and $\sigma_{k,t}^2$ are its mean and variance at time $t$, $\theta$ is a matching parameter ($\theta = 1$ for a distribution the new pixel value matches and $0$ otherwise), $\alpha$ is the parameter update rate governing the background adaptation speed, and $\beta$ is the learning rate;
SS5. If SS2 finds no Gaussian model matching the new pixel value $I_t$, replace the Gaussian distribution with the minimum weight: set its mean to the current pixel value, its standard deviation to a large initial value, and its weight to a small value;
SS6. Sort the Gaussian models by their $\omega_{k,t}$ values in descending order, so that models with large weight and small standard deviation come first, yielding an ordered sequence of Gaussian models;
SS7. Mark the first $B$ Gaussian distribution models of the sequence as background, where $B$ satisfies the formula below; the parameter $T$ is a set threshold representing the proportion of the background, $0.5 \le T \le 1$, and $B$ is a positive integer:

$$B = \arg\min_{b}\left(\sum_{k=1}^{b} \omega_k > T\right)$$
S2. Calculate the area ratio of the foreground output within the region of interest (ROI), track high-density crowds against the scenic spot's crowd density threshold, and output a high-density crowd picture when the preset time is reached. In the first frame of the camera video, select an ROI as the region to monitor; extract the foreground as in S1, label connected components of the extracted foreground, and take the largest connected component in the foreground frame as the foreground block area. Divide the foreground block area in the current frame by the area of the framed ROI region and check whether the ratio exceeds the early warning value; track any block area that does, and if the ratio still exceeds the early warning value after the preset time, the system judges the crowd high-density.
S3. Using transfer learning, train a head detection model on a deep learning network with labeled high-density crowd pictures to obtain a trained model. Training is completed offline; the trained model can then be loaded for online detection and output. In this deep-learning-based head model detection method, the head detection model adopts a residual network structure, i.e. a deep convolutional neural network with residual blocks, comprising an input layer, convolutional layers, pooling layers, fully connected layers and an output layer. Pictures enter at the input layer, features are extracted by the convolutional layers and selected by dimensionality reduction in the pooling layers, and the effective features are combined by the fully connected layers to achieve head detection at the output layer. Several deep-learning detection algorithms can be chosen according to actual needs; the following describes training and detection of the head model using the R-FCN algorithm as an example, the R-FCN network structure being shown in FIG. 4:
(1) Label the heads in scenic-spot high-density crowd pictures with the open-source labeling tool Labeling, input the labeled head pictures, and generate feature maps of the pictures with an FCN (Fully Convolutional Network);
(2) Feed the computed feature maps into an RPN (Region Proposal Network) to generate ROIs (Regions of Interest); then feed the generated ROIs into a position-sensitive ROI pooling layer, which predicts target regions for the subnet to learn;
(3) Using the features extracted by the FCN and the candidate regions output by the RPN, the ROI subnet back-propagates the error between the predicted targets and the labeled targets, computes the training loss, and drives the loss to its attainable minimum over multiple iterations to complete classification and localization of head regions.
(4) After a certain number of training iterations, judge from the total-loss curve whether the network weights are optimal, yielding a detection model that can determine heads and their positions. Run head detection on a selected test set with the trained model, as shown in FIG. 5, and evaluate it by accuracy and false detection rate.
S4. Input the output high-density crowd pictures into the trained model to detect the number of visitors; the model outputs an alarm signal when the count exceeds a threshold. Once the preceding stage has issued a high-density crowd residence warning, the visitor counting algorithm starts: the trained head detection algorithm detects and counts the high-density crowd, and an early warning is issued when the count exceeds the preset value. A block diagram of the scenic-spot visitor flow counting and early warning system based on deep learning and head detection models is shown in FIG. 6.
The beneficial effects of the invention are:
The invention improves on current face detection technology by widening the detection object from the face to the whole head. A deep-learning head detection model, trained on large datasets through the neural network, improves the detection of targets at multiple angles and under occlusion, greatly increasing the algorithm's adaptability and the detector's performance on individuals, so that the residence amount of visitors in a scenic spot is monitored in real time and high-density visitor flow in the scenic spot is accurately detected and warned of.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (1)

1. A visitor flow monitoring and early warning method, characterized by comprising the following steps:
S1. Collect video from each scenic-spot camera over one day and perform foreground extraction on it with an illumination-compensated Gaussian model detection algorithm, wherein the illumination-compensated Gaussian model detection algorithm comprises: performing single-channel brightness equalization on the current frame of the camera video, constructing a single-channel global difference matrix by brightness interpolation, and applying brightness enhancement to the three-channel image to obtain an illumination-compensated video;
performing foreground extraction on the illumination-compensated video with a Gaussian mixture model and completing the foreground targets with morphological operations to obtain the foreground output, wherein the foreground extraction with the Gaussian mixture model comprises the following steps:
SS1. Initialize the Gaussian mixture model: compute the mean $\mu_0$ and variance $\sigma_0^2$ of each gray pixel of the video sequence over the period $T$, and use them to initialize the parameters of $k$ Gaussian models, $k$ a positive integer. $\mu_0$ and $\sigma_0^2$ are calculated as

$$\mu_0 = \frac{1}{T}\sum_{t=1}^{T} I_t$$

$$\sigma_0^2 = \frac{1}{T}\sum_{t=1}^{T} (I_t - \mu_0)^2$$

where $I_t$ is the pixel value at time $t$, $t = 1, 2, \ldots, T$;
SS2. Compare each new pixel value $I_t$ with the $k$ Gaussian models until a matching pixel-value distribution model is found; a match means $I_t$ deviates from the mean of the $k$-th model by at most $2.5\sigma$, i.e.

$$|I_t - \mu_{k,t-1}| \le 2.5\,\sigma_{k,t-1}$$

where $\mu_{k,t-1}$ and $\sigma_{k,t-1}$ are the mean and standard deviation of the $k$-th Gaussian model at time $t-1$;
SS3. If the matched pixel value distribution model satisfies the background condition, mark the corresponding pixel as background; otherwise mark it as foreground;
SS4, if the new pixel value I_t matches one or more of the k Gaussian models, the new pixel value I_t satisfies the current pixel-value distribution, and the weight of the matched model is increased accordingly; the mean, variance, and weight are updated as:
μ_{k,t} = (1 − α)μ_{k,t−1} + αI_t

σ²_{k,t} = (1 − α)σ²_{k,t−1} + α(I_t − μ_{k,t})²

ω_{k,t} = (1 − β)ω_{k,t−1} + βθ
where ω_{k,t} is the weight of the k-th Gaussian distribution at time t, ω_{k,t−1} is its weight at time t−1, and μ_{k,t}, σ_{k,t} are the mean and standard deviation of the k-th Gaussian distribution at time t; θ is the matching parameter, equal to 1 when the new pixel value matches the k-th Gaussian distribution and 0 otherwise; α is the parameter update rate, representing the speed of background change, and β is the learning rate;
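The three SS4 update equations for a single component can be written directly; `update_component` is an illustrative name, and note that only the matched component's mean and variance move, while every component's weight is decayed (θ = 0) or reinforced (θ = 1).

```python
def update_component(pixel, mu, var, w, matched, alpha=0.01, beta=0.01):
    """Apply the SS4 update equations to one Gaussian component.

    theta = 1 if the new pixel value matched this component, else 0.
    """
    theta = 1.0 if matched else 0.0
    w = (1 - beta) * w + beta * theta            # omega_{k,t}
    if matched:
        mu = (1 - alpha) * mu + alpha * pixel    # mu_{k,t}
        var = (1 - alpha) * var + alpha * (pixel - mu) ** 2  # sigma^2_{k,t}
    return mu, var, w
```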
SS5, if step SS2 finds that the new pixel value I_t matches none of the Gaussian models, replacing the Gaussian distribution model with the smallest weight: the mean of the new model is set to the current pixel value, its standard deviation to a large initial value, and its weight to a small value;
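The SS5 replacement is a small in-place edit of the component arrays. The claim says only "a large initial value" and "a small value", so the defaults `init_var=225.0` and `init_w=0.05` below are typical choices, not values fixed by the patent.

```python
def replace_weakest(pixel, means, vars_, weights,
                    init_var=225.0, init_w=0.05):
    """SS5: overwrite the lowest-weight Gaussian with a new component
    centred on the unmatched pixel value.  init_var / init_w are
    illustrative defaults, not specified by the claim."""
    k = min(range(len(weights)), key=weights.__getitem__)
    means[k], vars_[k], weights[k] = pixel, init_var, init_w
    return k
```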
SS6, sorting the Gaussian models by their weights ω_{k,t} from large to small, so that models with large weight and small standard deviation come first, obtaining the sequence of Gaussian models;
SS7, marking the first B Gaussian distribution models of the sequence as the background, where B satisfies the following formula; the parameter T denotes the proportion occupied by the background and is a set threshold with a value in the range 0.5–1, and B is a positive integer:
B = arg min_b ( Σ_{k=1}^{b} ω_{k,t} > T )
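SS6 and SS7 can be sketched together as below. Ranking by ω/σ matches the claim's "large weight and small standard deviation first" criterion (the common choice in mixture-of-Gaussians background modelling), though the claim's exact sort key is stated only as the weights; `background_components` is an illustrative name.

```python
def background_components(weights, stds, T=0.7):
    """SS6-SS7: rank components (large weight, small standard deviation
    first, here via omega/sigma) and keep the first B whose cumulative
    weight exceeds the background proportion threshold T."""
    order = sorted(range(len(weights)),
                   key=lambda k: weights[k] / stds[k], reverse=True)
    total, background = 0.0, []
    for k in order:
        background.append(k)
        total += weights[k]
        if total > T:          # B = arg min_b (sum of weights > T)
            break
    return background
```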
S2, calculating the area ratio of the foreground output within the region of interest, tracking high-density crowds against the scenic spot's pedestrian-flow density threshold, and outputting a high-density crowd picture once the preset duration is reached;
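The area-ratio test in S2 is just the fraction of region-of-interest pixels covered by the foreground mask; a frame is flagged high-density when this ratio exceeds the density threshold. `foreground_ratio` is an illustrative helper assuming boolean masks.

```python
import numpy as np

def foreground_ratio(mask, roi):
    """S2: fraction of the region of interest covered by foreground.

    mask: boolean H x W foreground output; roi: boolean H x W region
    of interest."""
    roi_pixels = roi.sum()
    if roi_pixels == 0:
        return 0.0
    return float(np.logical_and(mask, roi).sum()) / float(roi_pixels)
```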
S3, training a head detection model on a deep learning network with labeled high-density crowd pictures, using transfer learning, to obtain a trained model;
S4, inputting the output high-density crowd pictures into the trained model to detect the number of tourists; when the number of people exceeds the threshold, the trained model outputs an alarm signal.
CN201810763293.7A 2018-07-11 2018-07-11 Visitor flow monitoring and early warning method Active CN109101888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810763293.7A CN109101888B (en) 2018-07-11 2018-07-11 Visitor flow monitoring and early warning method


Publications (2)

Publication Number Publication Date
CN109101888A CN109101888A (en) 2018-12-28
CN109101888B true CN109101888B (en) 2022-06-14

Family

ID=64846107






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant