CN111191667A - Crowd counting method for generating confrontation network based on multiple scales


Info

Publication number
CN111191667A
CN111191667A (application CN201811356818.1A)
Authority
CN
China
Prior art keywords
density map
crowd
density
network
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811356818.1A
Other languages
Chinese (zh)
Other versions
CN111191667B (en)
Inventor
咸良 (Xian Liang)
杨建兴 (Yang Jianxing)
周圆 (Zhou Yuan)
Current Assignee
Tianjin University Marine Technology Research Institute
Original Assignee
Tianjin University Marine Technology Research Institute
Priority date
Filing date
Publication date
Application filed by Tianjin University Marine Technology Research Institute
Priority claimed from application CN201811356818.1A
Publication of CN111191667A
Application granted
Publication of CN111191667B
Legal status: Active (granted)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 - Recognition of crowd images, e.g. recognition of crowd congestion
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems


Abstract

A crowd counting method based on a multi-scale generative adversarial ("confrontation") network predicts crowd density through adversarial training. The max-min problem between the generative model and the discriminative model is optimized by joint, alternating iterative training: the generator network is trained to produce accurate crowd density maps that fool the discriminator, while the discriminator is trained to distinguish generated density maps from true density-map labels. In turn, the discriminator's output gives the generator feedback on the localization and prediction accuracy of the density map. The two networks are trained in competition, improving the generated results until the discriminator can no longer correctly judge the samples produced by the generator. By introducing this adversarial loss, the crowd density detection algorithm proposed in this patent drives the convolutional neural network to generate higher-quality density maps and thereby improves the accuracy of crowd counting.

Description

Crowd counting method for generating confrontation network based on multiple scales
Technical Field
The invention relates to the fields of image processing and computer vision, and in particular to a crowd counting algorithm based on a multi-scale generative adversarial (confrontation) network.
Background
With China's growing population, large-scale crowd gatherings are increasingly common. Video surveillance is currently the main means of controlling the number of people in public places and preventing accidents caused by crowd-density overload. Within video surveillance and security, crowd analysis has attracted growing attention from researchers and has become a highly active research topic in computer vision. The crowd counting task is to accurately estimate the total number of people in a picture while also giving the distribution of crowd density. Image-based crowd counting is useful in many fields, such as accident prevention, space planning, consumer-habit analysis, and traffic scheduling.
At present, the mainstream crowd counting algorithms used in intelligent surveillance fall into two classes: detection-based and regression-based. Detection-based crowd counting assumes that every pedestrian in each frame of a surveillance video can be accurately detected and localized by a hand-crafted visual object detector, and the estimated head count is obtained by accumulating all detected targets. As early as 1998, Papageorgiou et al. proposed training SVM classifiers on wavelet features extracted at different scales in the image for the pedestrian detection task. In 2001, Lin et al. proposed an improved method: the image is first processed with histogram equalization and a Haar wavelet transform, multi-scale statistical features of head contours are then extracted, and finally an SVM is trained as the detector. This algorithm obtains fairly accurate crowd detection counts when the video is sharp, but it is strongly affected by environmental changes and the viewing angle of the surveillance camera. In 2005, Dalal et al. proposed a pedestrian detection algorithm based on Histogram of Oriented Gradients (HOG) features, combined with a linear SVM, to classify and detect the people in an image and count them, further improving the accuracy of pedestrian detection.
However, when the crowd density in the monitored scene is high, occlusion within the crowd prevents detector-based counting algorithms from accurately detecting and tracking most pedestrians.
Disclosure of Invention
To solve these problems of the prior art, the crowd counting method based on a multi-scale generative adversarial network fuses features taken at different depths of a single-column convolutional neural network, addressing scale variation, occlusion, and similar difficulties in crowd images. At the same time, a discriminator's adversarial loss is added to the network model, crowd density is predicted in an adversarial training mode, and higher-quality density maps are generated.
The crowd counting method for generating the confrontation network based on the multi-scale comprises the following specific steps:
1. Gaussian kernel density map of the crowd scene
The invention converts the given head-coordinate annotations into a crowd density distribution map. For a crowd image in the data set with correspondingly labelled head coordinates, a head at pixel position $x_i$ can be represented by the discrete impulse function $\delta(x - x_i)$, so the positions of the $N$ heads in each image can be labelled as
$$H(x) = \sum_{i=1}^{N} \delta(x - x_i)$$
To convert this head-position function into a continuous density function, it is convolved with a Gaussian filter $G_\sigma(x)$ to obtain the density equation:
$$D(x) = H(x) * G_\sigma(x) = \sum_{i=1}^{N} G_\sigma(x - x_i)$$
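As a concrete illustration of the step above, the conversion from head annotations to a density map can be sketched in NumPy. This is a minimal sketch, not the patent's implementation: the kernel width sigma, image size, and head positions are illustrative assumptions, and each Gaussian bump is normalised over the image grid so that integrating the map recovers the head count.

```python
import numpy as np

def density_map(shape, head_coords, sigma=4.0):
    """Sum of per-head Gaussian bumps, each normalised to unit mass,
    so that integrating the map recovers the head count."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dm = np.zeros((h, w), dtype=np.float64)
    for cx, cy in head_coords:            # (column, row) pixel coordinates
        bump = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        dm += bump / bump.sum()           # each head contributes exactly 1
    return dm

heads = [(30, 40), (80, 20), (50, 50)]    # three annotated head positions
dm = density_map((100, 120), heads)
print(round(dm.sum(), 6))                 # 3.0: the integral equals the count
```

In practice many crowd counting pipelines make sigma geometry-adaptive (scaled by the distance to neighbouring heads); the fixed sigma here is the simplest variant.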
2. building a multiscale generative confrontation network
Crowd counting methods based on deep convolutional neural networks produce predicted density maps of less-than-ideal quality in complex, high-density crowd scenes, mainly because pedestrians and background are highly similar there, so convolutional methods mis-detect and mis-classify. Meanwhile, the quality of the predicted density map strongly affects counting accuracy. The invention therefore proposes a crowd counting method based on a Multi-Scale Generative Adversarial Network (MS-GAN), introducing an adversarial loss to improve prediction accuracy.
The structure of the multi-scale generative adversarial network model is shown in Fig. 1 and consists of two main parts: a generator and a discriminator. The generator is a multi-scale convolutional neural network that takes a crowd image as input and outputs the predicted crowd density map. The obtained density map is then stacked with the crowd image and fed into the discriminator, which is trained to judge whether its input contains the generated density map or the real one. Because the crowd image is stacked into the input, the discriminator must also judge whether the generated density map matches the crowd image.
3. Design of the content loss function
In the proposed network model, the generator learns the mapping from a crowd image to its corresponding crowd density map, and the model's output is the predicted density map. A pixel-level loss is adopted: the Euclidean distance between the predicted density map and the true density map, i.e. the pixel-wise Mean Squared Error (MSE):
$$L_C(\theta) = \frac{1}{2M} \sum_{i=1}^{M} \lVert G(X_i;\theta) - Y_i \rVert_2^2$$
where $G(X_i;\theta)$ is the density map generated by the generator and $\theta$ are the parameters of the generator network model. In addition, $X_i$ denotes the $i$-th crowd image, $Y_i$ the true label density map of $X_i$, and $M$ the number of all training images.
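The content loss above can be sketched directly in NumPy. This is an illustrative implementation of the formula only; the map shapes and values are invented for the example.

```python
import numpy as np

def content_loss(pred_maps, true_maps):
    """Pixel-level content loss L_C = 1/(2M) * sum_i ||G(X_i) - Y_i||^2."""
    pred = np.asarray(pred_maps, dtype=np.float64)
    true = np.asarray(true_maps, dtype=np.float64)
    m = pred.shape[0]                 # M: number of training images
    return np.sum((pred - true) ** 2) / (2.0 * m)

pred = np.zeros((2, 4, 4))
pred[0] += 1.0                        # first predicted map off by 1 everywhere
true = np.zeros((2, 4, 4))
print(content_loss(pred, true))       # 16 unit squared errors / (2 * 2) = 4.0
```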
4. Design of the adversarial loss function
The purpose of the discriminator is to distinguish the generated density map from the true label density map. The generated density map is therefore labelled 0 and the true density-map label is labelled 1, and the output of the discriminator represents the probability that a given density map is a true density map. An additional adversarial loss is used in the method to improve the quality of the generated density map. The adversarial loss function is expressed as follows:
$$L_A = -\frac{1}{M}\sum_{i=1}^{M} \log D\big(X_i, G(X_i)\big)$$
where $D(X, \hat{Y})$ represents the degree to which the predicted density map $\hat{Y}$ matches the corresponding crowd image $X$. The input of the discriminator takes the form of a tensor: the crowd image $X$ is stacked along the third (channel) dimension with either the generated density map $G(X)$ or the true density-map label $Y$. Finally, the loss function for the generator is a weighted sum of the mean squared error and the adversarial loss:
$$L = L_C + \lambda L_A$$
Through extensive experiments, the weight $\lambda$ is set so as to balance the contributions of the two loss values. In the actual training process, combining the two loss functions makes the training of the network more stable and the prediction of the density map more accurate.
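The weighted combination above can be sketched as follows. This is a hedged illustration: the patent gives the value of the weight only as an image, so `lam = 0.01` is an assumed placeholder, and the discriminator probabilities are passed in as plain numbers rather than produced by a real network.

```python
import numpy as np

def adversarial_loss(d_fake):
    """L_A = -mean(log D(X, G(X))); d_fake holds the discriminator's
    probabilities for the generated density maps."""
    d = np.clip(np.asarray(d_fake, dtype=np.float64), 1e-12, 1.0)
    return -np.mean(np.log(d))

def generator_loss(pred, true, d_fake, lam=0.01):
    """Weighted sum L = L_MSE + lam * L_A (lam is an assumed value)."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(true, float)) ** 2)
    return mse + lam * adversarial_loss(d_fake)

# A perfect prediction that fully fools the discriminator gives zero loss:
print(generator_loss(np.zeros((1, 4, 4)), np.zeros((1, 4, 4)), [1.0]))  # 0.0
```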
5. Joint training of the adversarial network
The crowd density prediction model based on an adversarial network differs from the original aim of generative adversarial networks: the goal of crowd density estimation is to generate an accurate density map, not a realistic natural image. Thus, the input of the proposed crowd density estimation model is no longer random noise drawn from some distribution but a crowd image. Moreover, because the crowd image contains the distribution information of the crowd scene, it is used as conditioning information for the crowd density map in the proposed model, and the density map and the crowd image are input to the discriminator together. In the actual training process, a conditional adversarial network model is adopted, and the purpose of joint training is to estimate a high-quality crowd density map. The joint training objective of the generator and the discriminator is as follows:
$$\min_G \max_D \; \mathbb{E}_{X,Y}\big[\log D(X, Y)\big] + \mathbb{E}_{X}\big[\log\big(1 - D(X, G(X))\big)\big]$$
where $G$ denotes the generator network, which takes the crowd image $X$ as input and outputs the predicted crowd density map $G(X)$, and $D$ denotes the discriminator, whose output is the probability that a crowd density map is a true density map. The purpose of the discriminator is to distinguish the density map $G(X)$ generated by the generator from the true label density map $Y$, while the generator is trained to produce a high-quality density map that the discriminator cannot tell apart.
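The alternating min-max schedule above can be demonstrated on a deliberately tiny toy problem. This is not the patent's model: the "density map" is a single scalar, the generator is one parameter, and the discriminator is a one-dimensional logistic unit; all hyperparameters are invented. It only illustrates the alternation of a discriminator ascent step and a generator descent step on MSE plus adversarial loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y_real = 2.0         # stands in for the ground-truth density map
g = 0.0              # generator "parameter": its output is G(x) = g
w, b = 1.0, 0.0      # logistic discriminator D(y) = sigmoid(w*y + b)
lr, lam = 0.05, 0.1  # learning rate and adversarial weight (illustrative)

for step in range(500):
    # Discriminator ascent step: maximise log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * y_real + b)
    d_fake = sigmoid(w * g + b)
    w += lr * ((1 - d_real) * y_real - d_fake * g)
    b += lr * ((1 - d_real) - d_fake)
    # Generator descent step: minimise (g - y)^2 + lam * (-log D(fake))
    d_fake = sigmoid(w * g + b)
    g -= lr * (2 * (g - y_real) - lam * (1 - d_fake) * w)

print(f"generator output after training: {g:.2f}")  # close to y_real
```

The generator converges near the real value while the discriminator's separating boundary collapses toward uncertainty, which is exactly the fixed point the joint objective describes.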
The crowd counting model based on the generative adversarial network performs crowd density prediction through adversarial training. The max-min problem between the generative model and the discriminative model is optimized by joint, alternating iterative training: the generator network is trained to produce accurate crowd density maps that fool the discriminator, while the discriminator is trained to distinguish generated density maps from true density-map labels. In turn, the discriminator's output gives the generator feedback on the localization and prediction accuracy of the density map. The two networks are trained in competition, improving the generated results until the discriminator can no longer correctly judge the samples produced by the generator. By introducing this adversarial loss, the crowd density detection algorithm proposed in this patent drives the convolutional neural network to generate higher-quality density maps and thereby improves the accuracy of crowd counting.
Drawings
Fig. 1 is a diagram of a multi-scale generative confrontation network architecture.
Detailed Description
The problem the invention solves is the following: given a crowd image, or one frame of a video, estimate the crowd density in each region of the image and the total crowd count.
The structure of the multi-level convolutional neural network is shown as the generator part of Fig. 1. In the first three convolution blocks of the network, a multi-scale convolution module (Inception-style) extracts multi-scale features from the three blocks Conv-1, Conv-2, and Conv-3. Each module uses convolution kernels of three different sizes to obtain features at different scales, giving a multi-scale expression of the depth features. To make the feature maps of different sizes consistent, pooling is used to bring them to a uniform size: conv-1 passes through two pooling layers and conv-2 through one, so that both match the size of conv-3. Finally, the features of the different levels and scales are input to the conv-4 convolution layer, where a 1 x 1 convolution kernel performs feature fusion. The three features of different scales are thus fused in the network, and the fused feature map is used for density-map regression. This network greatly improves the detection of small-scale pedestrians in high-density crowd scenes and thereby improves the prediction of the crowd density map.
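The size-unification step described above (pool conv-1 twice, conv-2 once, so that everything matches conv-3 before channel-wise fusion) can be sketched with plain NumPy average pooling. The 64 x 64 input size and constant feature values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: halves each spatial dimension."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Stand-in feature maps from the three convolution blocks of a 64x64 input:
conv1 = np.ones((64, 64))   # full resolution  -> pool twice
conv2 = np.ones((32, 32))   # half resolution  -> pool once
conv3 = np.ones((16, 16))   # quarter resolution -> already the target size

f1 = avg_pool2(avg_pool2(conv1))
f2 = avg_pool2(conv2)
fused_input = np.stack([f1, f2, conv3], axis=-1)  # input to 1x1 fusion conv
print(fused_input.shape)                          # (16, 16, 3)
```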
The multi-scale generative adversarial network model with the fused discriminator is shown in Fig. 1 and consists of two parts, a generator and a discriminator. The generator is the multi-scale convolutional neural network introduced above: it takes a crowd image as input and outputs the predicted crowd density map. The obtained density map is then stacked with the crowd image and input to the discriminator, which is trained to judge whether the input is the generated density map or a real density map. Because the crowd image is stacked into the input, the discriminator must also judge whether the generated density map matches the crowd image.
In the experiments, training is performed on an NVIDIA GeForce GTX TITAN X graphics card using the TensorFlow deep learning framework. The whole network adopts stochastic gradient descent (SGD), the parameters are optimized with the Adam algorithm, and the momentum is set to 0.9. The parameters of the generator and the discriminator are initialized using normal distributions. Because the training data set used here is small, the batch size is set to 1. During training, the generator and the discriminator are optimized in alternating iterations: the generator is first trained alone for 20 epochs on the mean-squared loss; on this basis the discriminator is added, and the two networks are alternately optimized for 100 epochs of training. The input of the discriminator takes the form of a tensor consisting of the three RGB channels of the original image and the single-channel density map, i.e. a tensor whose final dimension has four channels.
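The four-channel discriminator input described above is just a channel-wise concatenation of the RGB image and the density map. A minimal sketch (the 128 x 128 resolution is an illustrative assumption):

```python
import numpy as np

h, w = 128, 128
rgb = np.zeros((h, w, 3))       # crowd image: three colour channels
density = np.zeros((h, w, 1))   # generated or ground-truth map: one channel

# Stack along the third (channel) dimension, as the discriminator expects:
disc_input = np.concatenate([rgb, density], axis=2)
print(disc_input.shape)         # (128, 128, 4)
```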
The invention is compared with other methods on the UCF_CC_50 data set. The experimental results are evaluated with the Mean Absolute Error (MAE):
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \lvert z_i - \hat{z}_i \rvert$$
and the Mean Squared Error (MSE):
$$\mathrm{MSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (z_i - \hat{z}_i)^2}$$
where $N$ is the number of pictures, $z_i$ is the actual number of people in the $i$-th image, and $\hat{z}_i$ is the count output for the $i$-th image by the proposed network. These metrics measure the accuracy of the algorithm. On the UCF_CC_50 data set the invention is compared with prior algorithms, as shown in the following table (MS-GAN is the algorithm of the invention):
The experimental comparison in the table shows that the method surpasses MCNN and CrowdNet in both accuracy and stability: MS-GAN performs best among the CNN-based crowd counting algorithms on both the MSE and MAE indices, its people-count estimation error is relatively even across the various scenes, and it is comparatively stable. The quality of its predicted crowd density maps is clearly superior to other CNN-based crowd counting methods.
[Performance comparison table, provided as an image in the original publication.]
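The two evaluation metrics can be sketched as follows. Note one hedge: the square root in "MSE" follows the usual crowd-counting convention (as in the MCNN line of work); the patent's own formula survives only as an image. The counts used are invented for the example.

```python
import numpy as np

def mae(true_counts, pred_counts):
    """Mean absolute count error over N test images."""
    t, p = np.asarray(true_counts, float), np.asarray(pred_counts, float)
    return np.mean(np.abs(t - p))

def mse(true_counts, pred_counts):
    """Root of the mean squared count error (crowd-counting convention)."""
    t, p = np.asarray(true_counts, float), np.asarray(pred_counts, float)
    return np.sqrt(np.mean((t - p) ** 2))

true = [100, 200, 300]
pred = [110, 190, 320]
print(f"MAE = {mae(true, pred):.3f}, MSE = {mse(true, pred):.3f}")
```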

Claims (1)

1. A crowd counting method based on a multi-scale generative adversarial (confrontation) network, characterized by comprising the following specific steps:
1. Gaussian kernel density map of the crowd scene:
converting the given head coordinate data into a crowd density distribution map; for the head coordinates annotated in a crowd image of the data set, each head at pixel position $x_i$ is represented by the discrete impulse function $\delta(x - x_i)$, so the positions of the $N$ heads in each image are labelled as
$$H(x) = \sum_{i=1}^{N} \delta(x - x_i);$$
to convert the head-position coordinate function into a continuous density function, it is convolved with a Gaussian filter $G_\sigma(x)$ to obtain the density equation
$$D(x) = H(x) * G_\sigma(x) = \sum_{i=1}^{N} G_\sigma(x - x_i);$$
2, constructing a multi-scale generation countermeasure network:
the multi-scale generation countermeasure network model structure mainly comprises two parts: the device comprises a generator and a discriminator, wherein the generator is a multi-scale convolutional neural network, the generator takes a crowd image as input and outputs the crowd image as a predicted crowd density map, then the obtained density map and the crowd image are overlapped and simultaneously input into the discriminator, the discriminator is trained to discriminate whether the input is the generated density map or a real density map, and meanwhile, due to the overlapping input of the crowd image, the discriminator needs to discriminate whether the generated density map is matched with the crowd image;
3. Design based on the content loss function:
a pixel-level loss function is adopted, computing the Euclidean distance between the predicted density map and the real density map as the network's loss, namely the pixel-level Mean Squared Error (MSE)
$$L_C(\theta) = \frac{1}{2M} \sum_{i=1}^{M} \lVert G(X_i;\theta) - Y_i \rVert_2^2,$$
wherein $G(X_i;\theta)$ denotes the density map generated by the generator, $\theta$ the parameters of the generator network model, $X_i$ the $i$-th crowd image, $Y_i$ the true label density map of $X_i$, and $M$ the number of all training images;
4. Design of the adversarial loss function:
an additional adversarial loss is used to improve the quality of the generated density map, expressed as
$$L_A = -\frac{1}{M}\sum_{i=1}^{M} \log D\big(X_i, G(X_i)\big),$$
wherein $D(X, \hat{Y})$ represents the degree to which the predicted density map $\hat{Y}$ matches the corresponding crowd image; the input of the discriminator takes the form of a tensor in which the crowd image $X$ is stacked along the third dimension with the generated density map $G(X)$ or the true density-map label $Y$; finally, the loss function for the generator is a weighted sum of the mean squared error and the adversarial loss
$$L = L_C + \lambda L_A,$$
wherein, through extensive experiments, the weight $\lambda$ is set to balance the two loss values, and combining the two loss functions makes the training of the network more stable and the prediction of the density map more accurate;
5. Joint training of the adversarial network:
a conditional adversarial network model is adopted, with the aim of estimating a high-quality crowd density map; the joint training objective of the generator and the discriminator is
$$\min_G \max_D \; \mathbb{E}_{X,Y}\big[\log D(X, Y)\big] + \mathbb{E}_{X}\big[\log\big(1 - D(X, G(X))\big)\big],$$
wherein $G$ denotes the generator network, which takes the crowd image $X$ as input and outputs the predicted crowd density map $G(X)$, and $D$ denotes the discriminator, whose output is the probability that a crowd density image is a true density map; the purpose of the discriminator is to distinguish the density map $G(X)$ generated by the generator from the true label density map $Y$, while the generator is trained to produce a high-quality density map that the discriminator cannot distinguish.
CN201811356818.1A 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network Active CN111191667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811356818.1A CN111191667B (en) 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network


Publications (2)

Publication Number Publication Date
CN111191667A true CN111191667A (en) 2020-05-22
CN111191667B CN111191667B (en) 2023-08-18

Family

ID=70707024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811356818.1A Active CN111191667B (en) 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111191667B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832413A (en) * 2020-06-09 2020-10-27 天津大学 People flow density map estimation, positioning and tracking method based on space-time multi-scale network
CN111898903A (en) * 2020-07-28 2020-11-06 北京科技大学 Method and system for evaluating uniformity and comprehensive quality of steel product
CN112818944A (en) * 2021-03-08 2021-05-18 北方工业大学 Dense crowd counting method for subway station scene
CN112818945A (en) * 2021-03-08 2021-05-18 北方工业大学 Convolutional network construction method suitable for subway station crowd counting
CN113313118A (en) * 2021-06-25 2021-08-27 哈尔滨工程大学 Self-adaptive variable-proportion target detection method based on multi-scale feature fusion
CN113392779A (en) * 2021-06-17 2021-09-14 中国工商银行股份有限公司 Crowd monitoring method, device, equipment and medium based on generation of confrontation network
CN114463694A (en) * 2022-01-06 2022-05-10 中山大学 Semi-supervised crowd counting method and device based on pseudo label
CN114648724A (en) * 2022-05-18 2022-06-21 成都航空职业技术学院 Lightweight efficient target segmentation and counting method based on generation countermeasure network
CN114972111A (en) * 2022-06-16 2022-08-30 慧之安信息技术股份有限公司 Dense crowd counting method based on GAN image restoration
CN115983142A (en) * 2023-03-21 2023-04-18 之江实验室 Regional population evolution model construction method based on depth generation countermeasure network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740945A (en) * 2016-02-04 2016-07-06 中山大学 People counting method based on video analysis
WO2016183766A1 (en) * 2015-05-18 2016-11-24 Xiaogang Wang Method and apparatus for generating predictive models
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN107862261A (en) * 2017-10-25 2018-03-30 天津大学 Image people counting method based on multiple dimensioned convolutional neural networks
CN108764085A (en) * 2018-05-17 2018-11-06 上海交通大学 Based on the people counting method for generating confrontation network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Shuyao; Liu Xigeng; Hu Changzhen; Wang Zhongce: "Research and Implementation of Crowd Counting Based on Convolutional Neural Networks" (基于卷积神经网络人群计数的研究与实现), The Guide of Science & Education (科教导刊), no. 09 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832413A (en) * 2020-06-09 2020-10-27 天津大学 People flow density map estimation, positioning and tracking method based on space-time multi-scale network
CN111832413B (en) * 2020-06-09 2021-04-02 天津大学 People flow density map estimation, positioning and tracking method based on space-time multi-scale network
CN111898903A (en) * 2020-07-28 2020-11-06 北京科技大学 Method and system for evaluating uniformity and comprehensive quality of steel product
CN112818944A (en) * 2021-03-08 2021-05-18 北方工业大学 Dense crowd counting method for subway station scene
CN112818945A (en) * 2021-03-08 2021-05-18 北方工业大学 Convolutional network construction method suitable for subway station crowd counting
CN113392779A (en) * 2021-06-17 2021-09-14 中国工商银行股份有限公司 Crowd monitoring method, device, equipment and medium based on generative adversarial network
CN113313118A (en) * 2021-06-25 2021-08-27 哈尔滨工程大学 Self-adaptive variable-proportion target detection method based on multi-scale feature fusion
CN114463694A (en) * 2022-01-06 2022-05-10 中山大学 Semi-supervised crowd counting method and device based on pseudo label
CN114463694B (en) * 2022-01-06 2024-04-05 中山大学 Pseudo-label-based semi-supervised crowd counting method and device
CN114648724A (en) * 2022-05-18 2022-06-21 成都航空职业技术学院 Lightweight efficient target segmentation and counting method based on generative adversarial network
CN114648724B (en) * 2022-05-18 2022-08-12 成都航空职业技术学院 Lightweight efficient target segmentation and counting method based on generative adversarial network
CN114972111A (en) * 2022-06-16 2022-08-30 慧之安信息技术股份有限公司 Dense crowd counting method based on GAN image restoration
CN115983142A (en) * 2023-03-21 2023-04-18 之江实验室 Regional population evolution model construction method based on deep generative adversarial network
CN115983142B (en) * 2023-03-21 2023-08-29 之江实验室 Regional population evolution model construction method based on deep generative adversarial network

Also Published As

Publication number Publication date
CN111191667B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN111191667A (en) Crowd counting method based on multi-scale generative adversarial network
CN108830252B (en) Convolutional neural network human body action recognition method fusing global space-time characteristics
CN108921051B (en) Pedestrian attribute recognition network and method based on recurrent neural network attention model
CN108416250B (en) People counting method and device
CN105022982B (en) Hand motion recognition method and apparatus
CN108764085B (en) Crowd counting method based on generative adversarial network
CN111723693B (en) Crowd counting method based on small sample learning
CN104616316B (en) Human action recognition method based on threshold matrix and feature-fusion visual words
CN104680559B (en) Multi-view indoor pedestrian tracking method based on motion behavior patterns
CN111709300B (en) Crowd counting method based on video image
CN110298297A (en) Flame identification method and device
CN106815563B (en) Human body apparent structure-based crowd quantity prediction method
CN107863153A (en) Human health feature modeling and measurement method and platform based on intelligent big data
CN110909672A (en) Smoking action recognition method based on double-current convolutional neural network and SVM
Galčík et al. Real-time depth map based people counting
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
Saif et al. Moment features based violence action detection using optical flow
Kim et al. Estimation of crowd density in public areas based on neural network.
Waddenkery et al. Adam-Dingo optimized deep maxout network-based video surveillance system for stealing crime detection
Parsola et al. Automated system for road extraction and traffic volume estimation for traffic jam detection
Shreedarshan et al. Crowd recognition system based on optical flow along with SVM classifier
CN114943873A (en) Method and device for classifying abnormal behaviors of construction site personnel
CN112818945A (en) Convolutional network construction method suitable for subway station crowd counting
Ma et al. Crowd estimation using multi-scale local texture analysis and confidence-based soft classification
Khan et al. Multiple moving vehicle speed estimation using Blob analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant