CN108717528A - A kind of global population analysis method of more strategies based on depth network - Google Patents
A multi-strategy global crowd analysis method based on a deep network
- Publication number
- CN108717528A (application CN201810461606.3A)
- Authority
- CN
- China
- Prior art keywords
- network
- density
- sub
- map
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present invention provides a multi-strategy global crowd analysis method based on a deep network. First, the monitored area is modeled: a global map layer is drawn, and for each camera a layer is created that records the direction and coverage of that camera's monitored region on the global map, ready to receive crowd-density data. Second, for each camera's monitored scene, a perspective transform maps the monitoring image from the camera's side view to a top-down (bird's-eye) view of the ground. Image features are extracted with a VGG16 network obtained by transfer learning: the input image is pre-partitioned into blocks, each block is mapped to the feature layer by stride, and a SWITCH classifier judges each block's features to decide whether the block is sent to the R1 density-estimation network or to the R2 pedestrian-detection network. The per-block detection and density-estimation results are merged into a single density map, which is projected onto the map layer through the perspective transform, so the global crowd situation can be monitored accurately.
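The side-view-to-top-down mapping described above is a planar homography. A minimal numpy sketch (the point correspondences are illustrative assumptions, not values from the patent):

```python
import numpy as np

def fit_homography(src, dst):
    """Solve the 3x3 homography H (with h33 = 1) mapping four source points
    to four destination points, via an 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    """Apply homography H to an (n, 2) array of pixel coordinates."""
    pts = np.asarray(pts, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    out = hom @ H.T
    return out[:, :2] / out[:, 2:3]

# Camera-view ground quadrilateral -> top-down rectangle (meters); assumed values.
src = [(100, 400), (540, 400), (620, 80), (20, 80)]   # image pixels
dst = [(0, 0), (10, 0), (10, 30), (0, 30)]            # ground coordinates
H = fit_homography(src, dst)
print(warp_points(H, [(320, 240)]))  # e.g. a pedestrian's foot point on the map
```

Density-map pixels (or detected pedestrian positions) can be pushed through the same `warp_points` call to place them on the global map layer.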
Description
Technical field
The present invention relates to a crowd counting and density estimation method, and more particularly to a multi-strategy global crowd analysis method based on a deep network, belonging to the field of machine vision and artificial intelligence.
Background art
With exponential population growth and deepening urbanization, the number and frequency of large-scale gatherings have increased sharply, for example tourist crowds on public holidays, sports meetings, political rallies, and public exhibitions. To manage these better and to ensure environmental and personal safety, crowd analysis is essential, and pedestrian detection and crowd counting are current research focuses. Existing detection and counting methods fall mainly into three categories:
1. Methods based on counting individuals
These detect the head in overhead camera views, which resists occlusion; detect the whole body of each person; or use a head-shoulder model that detects the "Ω" shape of head and shoulders. The basic approach extracts HOG features and classifies them with an SVM. Other common features include Haar features and Hough-transform circle detection; innovation typically lies in combining multiple features in the feature-analysis stage. On the classification side, research focuses on improving SVM or boosting classifiers, or on combining several classifiers. The main difficulties for such methods are illumination changes and the loss of individual features in crowded scenes.
2. Analysis based on crowd features
Crowd-feature analysis mainly addresses the case where individual detection is inaccurate in crowded scenes: crowd-level features are extracted directly and then regressed. Common regression methods include SVR, Gaussian process regression, least squares, and ridge regression. Current research focuses on feature extraction, on combining or clustering different features with innovation in the clustering method, and, in the regression stage, on choosing different kernel functions for different features. Illumination changes cause inaccurate counts under high-density crowd flow or in open scenes; pedestrian detection takes long to process; and the camera's perspective effect ("far objects small, near objects large") is the main difficulty for such methods.
3. Crowd counting based on convolutional neural networks
Image features learned by a deep network generalize better and represent the target more richly than traditional hand-crafted features, at the cost of a larger computation load. For example, a Master's thesis from Anhui University uses a network with three convolutional layers and one fully connected layer, where every convolutional layer is followed by a pooling operation and the activation function is ReLU.
In monitored scenes, sparse and dense crowd regions coexist, the crowd distribution is uneven, and the same camera sees different distributions at different times. To better characterize the crowd distribution of a monitored scene with these properties, and to pursue different targets (detection versus counting) for different regions of the same scene at different times, the present invention provides a multi-strategy global crowd analysis method based on a deep network.
Invention content
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to provide a multi-strategy global crowd analysis method based on a deep network that overcomes problems such as complex scene background interference and pedestrian occlusion, thereby achieving an accurate estimate of the crowd density in the scene.
To solve the above technical problem, the present invention adopts the following technical scheme: a multi-strategy global crowd analysis method based on a deep network, comprising the following steps:
Step S1, data preparation, comprising the following sub-steps:
S11, for crowd pictures of the same scene, select from surveillance video previously recorded by the same camera a large number of frames containing different crowds;
S12, select a consecutive sequence of frames in which a person walks through the monitored area, estimate a perspective model from the person's head center point, body height, and the road width and length, and generate the scene perspective model;
S13, annotate each individual in every frame: place a point marker at a fixed location on each person's head; for sparsely distributed pedestrians whose complete bodies are visible, additionally mark the full torso with a bounding box;
S14, generate a density map from the annotated point positions;
Step S2, model design and training, comprising the following sub-steps:
S21, training-data selection: randomly select n pictures and their corresponding density maps, and divide each into k non-overlapping sub-images; a sub-image is labeled dense if the mean inter-person distance is below a1 meters or the area per person is below a2 square meters, and the remaining sub-images are labeled sparse;
S22, build the neural network model, comprising an overlay network for extracting sub-image features, a density-class classification network that divides sub-images into the two classes dense and sparse, a density-estimation sub-network R1 for predicting the crowd density of dense regions, and a pedestrian-detection sub-network R2 for locating pedestrians in sparse regions;
S23, training of the density-class classification network: according to the region division of S21, map each sub-image by stride to its corresponding features in the overlay-network output, and train the classification network on every block's features together with the dense/sparse label defined in S21;
S24, input the features of dense sub-images and the corresponding crowd density maps into density-estimation sub-network R1 for training;
S25, input the features of sparse sub-images and the corresponding head and torso bounding boxes into pedestrian-detection sub-network R2 for training;
Step S3, model testing, implemented as follows:
For an input test image, divide the image into k non-overlapping sub-images and select each sub-image's features from the overlay-network output via the stride mapping; each block's features pass through the trained global density-class classification network, separating the dense and sparse regions of the image. The dense blocks are fed into density-estimation sub-network R1 to extract crowd density maps, and the sparse blocks are fed into sub-network R2 for pedestrian detection. The density maps output by R1 are then stitched into the density map of the original image; the top-center point of each bounding box detected by R2 is marked, and the markers are added to the stitched density map. The number of people in the scene is the sum over all pixels of the density map;
Step S4, data analysis and use, implemented as follows:
From the density estimate, project each scene's density map through its perspective model to correct the viewpoint distortion. According to an existing density-grade classification scheme, the density map is divided into five grades: extremely dense, dense, medium, sparse, and extremely sparse. A global map layer of the site is created; according to the coverage of each camera, the monitored range is divided into blocks at intervals of m meters. For each block, the density grade in the corresponding map-layer region is computed and rendered in a distinct color, so that plotting the test results on the corresponding map layer yields an overall crowd-density distribution map of the whole site.
Further, in step S14 the density map is the convolution of the ground-truth annotation map with a Gaussian kernel, computed as D(x) = Σ_{i=1}^{N} δ(x − x_i) * G_σ(x), where x_i denotes an annotated head position, δ(x − x_i) is the impulse function at that head position, N is the total head count, and G_σ is the Gaussian kernel.
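The step-S14 construction can be sketched in a few lines: a unit impulse is placed at each annotated head position and blurred by a Gaussian, so the integral of the density map approximates the head count. The map size, head positions, and kernel width σ below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(shape, head_points, sigma=3.0):
    """D = sum_i delta(x - x_i) * G_sigma: impulses at annotated head
    positions, convolved with a normalized Gaussian kernel."""
    impulses = np.zeros(shape, dtype=np.float64)
    for r, c in head_points:
        impulses[r, c] += 1.0
    return gaussian_filter(impulses, sigma=sigma, mode='constant')

heads = [(50, 50), (30, 70), (70, 30)]   # three annotated head points
D = density_map((100, 100), heads)
print(round(D.sum(), 3))                  # ≈ 3.0, the number of heads
```

Because the Gaussian kernel is normalized, summing the density map recovers the (soft) head count, which is exactly how the scene count is read off in step S3.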
Further, the overlay network in step S22 consists of the first 10 convolutional layers of the VGG16 network, with parameters pre-trained and obtained by transfer learning.
Further, the structure of the density-class classification network in step S22 is a global average pooling layer, fully connected layer FC521, fully connected layer FC3, and a softmax layer.
Further, the network structure of density-estimation sub-network R1 in step S22 is Conv3-512-2, Conv3-512-2, Conv3-512-2, Conv3-256-2, Conv3-128-2, Conv3-64-2, Conv1-1-1, where Conv3-512-2 denotes a dilated (atrous) convolution with kernel size 3, 512 filters, and dilation rate 2.
Further, the network structure of pedestrian-detection sub-network R2 in step S22 is Max-pool, Conv3-512, Conv3-512, Conv3-512, Conv6-4096, Conv1-4096, Conv1-1000.
Further, in step S23 the initial parameters are drawn from a Gaussian with standard deviation 0.01, and the density-class classification network is trained with stochastic gradient descent.
Further, in step S24 density-estimation network R1 is trained with the Euclidean distance to the ground-truth density map as the loss function, obtaining the network parameters; the loss function is L(Θ) = (1 / 2N) Σ_{i=1}^{N} ||Z(X_i; Θ) − D_i||², where N is the number of training blocks, Z(X_i; Θ) is the network output under parameters Θ, X_i is the input image, and D_i is the density map obtained from the annotations.
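The Euclidean loss above is the batch-averaged squared distance between predicted and ground-truth density maps; a minimal numpy check (the 1/(2N) normalization follows the formula as reconstructed here):

```python
import numpy as np

def euclidean_loss(pred, gt):
    """L(Theta) = 1/(2N) * sum_i ||Z(X_i; Theta) - D_i||_2^2
    over a batch of N predicted / ground-truth density maps."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    n = pred.shape[0]
    return np.sum((pred - gt) ** 2) / (2 * n)

pred = np.ones((2, 2, 2))    # N=2 predicted 2x2 density maps
gt = np.zeros((2, 2, 2))     # ground-truth maps
print(euclidean_loss(pred, gt))  # (4 + 4) / (2*2) = 2.0
```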
Further, in step S25 pedestrian-detection sub-network R2 is trained with a cross-entropy loss and a margin loss, regressing the targets detected in the region; a head and torso belonging to the same person are counted as one person.
The advantages of the present invention are as follows:
The present invention extracts sub-image features with the first 10 pre-trained convolutional layers of the VGG16 network obtained by transfer learning, and uses a density-class classification network to divide the image into density-estimation blocks and pedestrian-detection blocks, improving the robustness of pedestrian counting and density estimation under both dense and sparse conditions in the same scene. In addition, by projecting the density maps into the real world and stitching the map layers of multiple cameras, coordinated surveillance over a large area is achieved.
Description of the drawings
Fig. 1 is the flow chart of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the dilated convolution in density-estimation sub-network R1 of the embodiment.
Specific embodiments
The present invention is described in more detail below with reference to the accompanying drawings and an embodiment.
S1, data preparation
For crowd pictures of the same scene, about 1500 frames are extracted from surveillance video, covering different lighting conditions and crowd sizes between 6:30 and 17:00. A point annotation is placed on each pedestrian head in every frame, and an annotation tool generates the ground-truth point set of crowd positions for each frame. Each individual is annotated with a point at a fixed location on the head; for sparsely distributed pedestrians whose complete bodies are visible, a bounding box marks the full torso. The annotation map is convolved with a Gaussian to generate an approximate density map: D(x) = Σ_{i=1}^{N} δ(x − x_i) * G_σ(x), where x_i denotes an annotated head position, δ(x − x_i) is the impulse function at that head position, N is the total head count, and G_σ is the Gaussian kernel.
S2, training stage
S21, training-data preparation: randomly select 1000 pictures and the corresponding density maps, and divide each into 9 non-overlapping sub-images. Taking 1 meter between adjacent people as the comfortable personal distance and 2 square meters as the comfortable area per person, scene regions are divided into two classes: dense (mean inter-person distance below 1 meter, or area per person below 2 square meters) and sparse.
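The dense/sparse labeling rule in S21 can be sketched as a simple predicate. The 1 m distance and 2 m² area thresholds come from the text; the helper function itself is illustrative:

```python
def label_block(mean_person_distance_m, block_area_m2, person_count):
    """Label a scene block 'dense' if the mean inter-person distance is
    below 1 m or the area per person is below 2 m^2; else 'sparse'."""
    if person_count == 0:
        return 'sparse'
    area_per_person = block_area_m2 / person_count
    if mean_person_distance_m < 1.0 or area_per_person < 2.0:
        return 'dense'
    return 'sparse'

print(label_block(0.8, 50.0, 40))   # dense: people closer than 1 m
print(label_block(1.5, 50.0, 30))   # dense: 50/30 < 2 m^2 per person
print(label_block(2.5, 50.0, 10))   # sparse
```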
S22, build the neural network model, comprising an overlay network for extracting sub-image features, a density-class classification network that divides sub-images into the two classes dense and sparse, a density-estimation sub-network R1 for predicting the crowd density of dense regions, and a pedestrian-detection sub-network R2 for locating pedestrians in sparse regions. The overlay network is the first 10 pre-trained convolutional layers of the VGG16 network, obtained by transfer learning.
S23, for the 9 non-overlapping regions of the image, extract the features of each image block from the VGG16 network output via the stride mapping. Every block's features, together with the dense/sparse label defined in S21, are used to train the density-class classification network. This network consists of a global average pooling layer, fully connected layer FC521, fully connected layer FC3, and a softmax layer; the initial parameters are drawn from a Gaussian with standard deviation 0.01, and the network is trained with stochastic gradient descent.
S24, the network structure of density-estimation sub-network R1 is Conv3-512-2, Conv3-512-2, Conv3-512-2, Conv3-256-2, Conv3-128-2, Conv3-64-2, Conv1-1-1, where Conv3-512-2 denotes kernel size 3, 512 filters, and dilation rate 2. As shown in Fig. 2, with dilation 2 a compact 3 × 3 kernel is expanded into a 5 × 5 footprint containing 16 holes. The input to R1 is the features of dense sub-images together with the corresponding crowd density maps, the sub-image features being the VGG16 output features of those dense sub-images. R1 is trained with the Euclidean distance to the ground-truth density map as the loss function: L(Θ) = (1 / 2N) Σ_{i=1}^{N} ||Z(X_i; Θ) − D_i||², where N is the number of training blocks, Z(X_i; Θ) is the network output under parameters Θ, X_i is the input image, and D_i is the density map obtained from the annotations.
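The expansion shown in Fig. 2 can be verified numerically: a 3 × 3 kernel with dilation 2 spans a 5 × 5 footprint in which 9 positions carry weights and 16 are holes. A small sketch (not the patent's code):

```python
import numpy as np

def dilated_footprint(kernel_size=3, dilation=2):
    """Return the binary footprint of a dilated kernel: 1 where a tap lands."""
    span = dilation * (kernel_size - 1) + 1
    fp = np.zeros((span, span), dtype=int)
    fp[::dilation, ::dilation] = 1
    return fp

fp = dilated_footprint(3, 2)
print(fp.shape)                 # (5, 5) footprint
print(int(fp.sum()))            # 9 weighted taps
print(int(fp.size - fp.sum()))  # 16 holes
```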
S25, the network structure of pedestrian-detection sub-network R2 is Max-pool, Conv3-512, Conv3-512, Conv3-512, Conv6-4096, Conv1-4096, Conv1-1000. The training input is the head and torso bounding boxes together with the sparse sub-image features, the features being those of the sparse sub-images in the VGG16 network output. The initial parameters are drawn from a Gaussian with standard deviation 0.01, and the network is trained with stochastic gradient descent. R2 is trained with a cross-entropy loss and a margin loss, regressing the targets detected in the region; a head and torso belonging to the same person are counted as one person.
S3, testing stage
For an input test image, divide the image into 9 non-overlapping sub-images and feed the image into the VGG16 network; according to the predefined partition, select the sub-image features via the stride mapping. Each block passes through the global density-class classification network, which separates the dense and sparse regions of the image. Dense blocks are fed into density-estimation sub-network R1 to extract crowd density maps; sparse blocks are fed into sub-network R2 for pedestrian detection, refined with box regression and non-maximum suppression. The density maps output by R1 are stitched into the density map of the original image. R2 regresses the targets detected in its region, counting a head and torso of the same person as one person; each detection is marked at the top-center point of its box, and these markers are added to the stitched density map. The number of people in the scene is the sum over all pixels of the density map.
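The stitching-and-counting step above can be sketched as follows: block density maps are placed back at their grid positions, detections from sparse blocks add unit impulses at box top-center points, and the scene count is the pixel sum. The grid layout, density values, and boxes are illustrative:

```python
import numpy as np

def stitch_and_count(block_maps, grid, block_h, block_w, detections):
    """Assemble per-block density maps into the full-image density map;
    add one unit per detected pedestrian at the box top-center pixel;
    the scene count is the accumulated pixel sum."""
    rows, cols = grid
    full = np.zeros((rows * block_h, cols * block_w))
    for (r, c), dmap in block_maps.items():          # dense blocks only
        full[r * block_h:(r + 1) * block_h, c * block_w:(c + 1) * block_w] = dmap
    for x1, y1, x2, y2 in detections:                # boxes from sparse blocks
        full[y1, (x1 + x2) // 2] += 1.0              # top-center marker
    return full, full.sum()

dense = {(0, 0): np.full((10, 10), 0.05)}            # density summing to 5.0
boxes = [(25, 12, 31, 28), (4, 14, 9, 27)]           # two detected pedestrians
full, count = stitch_and_count(dense, (3, 3), 10, 10, boxes)
print(count)                                         # ≈ 5.0 + 2 detections = 7.0
```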
S4, data analysis use
From the density estimate, the perspective-model projection corrects the distortion of detected targets caused by the camera angle and perspective. Under wide-area multi-camera monitoring it is very difficult to transform all monitoring images into the same viewpoint and stitch them, but transforming the abstract crowd densities and detected person locations into a top-down view is achievable to a useful extent. We therefore apply each scene's perspective-model projection mapping directly and transform each camera's density map to correct the viewpoint distortion. According to an accepted density-grade classification, the density map is divided into five grades: extremely dense, dense, medium, sparse, and extremely sparse. A global map layer of the site is created, and according to each camera's coverage, the monitored range is divided into blocks at a standard spacing of 10 meters of real distance. In the corresponding region of the map layer, each block is colored: dark red for extremely dense, red for dense, orange for medium, and green for sparse. Rendering the experimental results on the map layer shows the corresponding crowd-density situation and yields the overall crowd-density distribution map of the whole site.
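The grade-to-color rendering on the map layer can be sketched as a lookup. The density thresholds below are illustrative assumptions (the patent does not list them), and since the text specifies colors for only four of the five grades, the "extremely sparse" color is also assumed:

```python
# Five density grades; per-block crowd density in persons per square meter.
# Thresholds are assumed for illustration; the patent does not give them.
GRADES = [
    (2.0, 'extremely dense',  'dark red'),
    (1.0, 'dense',            'red'),
    (0.5, 'medium',           'orange'),
    (0.1, 'sparse',           'green'),
    (0.0, 'extremely sparse', 'blue'),   # color assumed, not in the patent
]

def grade_block(density_per_m2):
    """Map a block's estimated density to its grade name and map-layer color."""
    for threshold, grade, color in GRADES:
        if density_per_m2 >= threshold:
            return grade, color
    return GRADES[-1][1:]

print(grade_block(2.5))   # ('extremely dense', 'dark red')
print(grade_block(0.3))   # ('sparse', 'green')
```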
The specific embodiment described herein merely illustrates the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (9)
1. A multi-strategy global crowd analysis method based on a deep network, characterized by comprising the following steps:
Step S1, data preparation, comprising the following sub-steps:
S11, for crowd pictures of the same scene, select from surveillance video previously recorded by the same camera a large number of frames containing different crowds;
S12, select a consecutive sequence of frames in which a person walks through the monitored area, estimate a perspective model from the person's head center point, body height, and the road width and length, and generate the scene perspective model;
S13, annotate each individual in every frame: place a point marker at a fixed location on each person's head; for sparsely distributed pedestrians whose complete bodies are visible, additionally mark the full torso with a bounding box;
S14, generate a density map from the annotated point positions;
Step S2, model design and training, comprising the following sub-steps:
S21, training-data selection: randomly select n pictures and their corresponding density maps and divide each into k non-overlapping sub-images; a sub-image is labeled dense if the mean inter-person distance is below a1 meters or the area per person is below a2 square meters, and the remaining sub-images are labeled sparse;
S22, build the neural network model, comprising an overlay network for extracting sub-image features, a density-class classification network that divides sub-images into the two classes dense and sparse, a density-estimation sub-network R1 for predicting the crowd density of dense regions, and a pedestrian-detection sub-network R2 for locating pedestrians in sparse regions;
S23, training of the density-class classification network: according to the region division of S21, map each sub-image by stride to its corresponding features in the overlay-network output, and train the classification network on every block's features together with the dense/sparse label defined in S21;
S24, input the features of dense sub-images and the corresponding crowd density maps into density-estimation sub-network R1 for training;
S25, input the features of sparse sub-images and the corresponding head and torso bounding boxes into pedestrian-detection sub-network R2 for training;
Step S3, model testing, implemented as follows:
for an input test image, divide the image into k non-overlapping sub-images and select each sub-image's features from the overlay-network output via the stride mapping; each block's features pass through the trained global density-class classification network, separating the dense and sparse regions of the image; feed the dense blocks into density-estimation sub-network R1 to extract crowd density maps, and feed the sparse blocks into sub-network R2 for pedestrian detection; then stitch the density maps output by R1 into the density map of the original image, mark the top-center point of each bounding box detected by R2, and add the markers to the stitched density map; the number of people in the scene is the sum over all pixels of the density map;
Step S4, data analysis and use, implemented as follows:
from the density estimate, project each scene's density map through its perspective model to correct the viewpoint distortion; according to an existing density-grade classification scheme, divide the density map into five grades: extremely dense, dense, medium, sparse, and extremely sparse; create a global map layer of the site and, according to the coverage of each camera, divide the monitored range into blocks at intervals of m meters; for each block, compute the density grade in the corresponding map-layer region and render it in a distinct color, so that plotting the test results on the corresponding map layer yields the overall crowd-density distribution map of the whole site.
2. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: in step S14 the density map is the convolution of the ground-truth annotation map with a Gaussian kernel, computed as D(x) = Σ_{i=1}^{N} δ(x − x_i) * G_σ(x), where x_i denotes an annotated head position, δ(x − x_i) is the impulse function at that head position, N is the total head count, and G_σ is the Gaussian kernel.
3. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: the overlay network in step S22 consists of the first 10 convolutional layers of the VGG16 network, with parameters pre-trained and obtained by transfer learning.
4. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: the structure of the density-class classification network in step S22 is a global average pooling layer, fully connected layer FC521, fully connected layer FC3, and a softmax layer.
5. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: the network structure of density-estimation sub-network R1 in step S22 is Conv3-512-2, Conv3-512-2, Conv3-512-2, Conv3-256-2, Conv3-128-2, Conv3-64-2, Conv1-1-1, where Conv3-512-2 denotes a dilated convolution with kernel size 3, 512 filters, and dilation rate 2.
6. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: the network structure of pedestrian-detection sub-network R2 in step S22 is Max-pool, Conv3-512, Conv3-512, Conv3-512, Conv6-4096, Conv1-4096, Conv1-1000.
7. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: in step S23 the initial parameters are drawn from a Gaussian with standard deviation 0.01, and the density-class classification network is trained with stochastic gradient descent.
8. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: in step S24 density-estimation network R1 is trained with the Euclidean distance to the ground-truth density map as the loss function, obtaining the network parameters; the loss function is L(Θ) = (1 / 2N) Σ_{i=1}^{N} ||Z(X_i; Θ) − D_i||², where N is the number of training blocks, Z(X_i; Θ) is the network output under parameters Θ, X_i is the input image, and D_i is the density map obtained from the annotations.
9. The multi-strategy global crowd analysis method based on a deep network of claim 1, characterized in that: in step S25 pedestrian-detection sub-network R2 is trained with a cross-entropy loss and a margin loss, regressing the targets detected in the region; a head and torso belonging to the same person are counted as one person.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810461606.3A CN108717528A (en) | 2018-05-15 | 2018-05-15 | A multi-strategy global crowd analysis method based on a deep network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108717528A true CN108717528A (en) | 2018-10-30 |
Family
ID=63900028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810461606.3A Pending CN108717528A (en) | 2018-05-15 | 2018-05-15 | A kind of global population analysis method of more strategies based on depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108717528A (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447008A (en) * | 2018-11-02 | 2019-03-08 | 中山大学 | Population analysis method based on attention mechanism and deformable convolutional neural networks |
CN109635763A (en) * | 2018-12-19 | 2019-04-16 | 燕山大学 | A kind of crowd density estimation method |
CN109697435A (en) * | 2018-12-14 | 2019-04-30 | 重庆中科云从科技有限公司 | Stream of people's quantity monitoring method, device, storage medium and equipment |
CN109726658A (en) * | 2018-12-21 | 2019-05-07 | 上海科技大学 | Crowd counts and localization method, system, electric terminal and storage medium |
CN109948703A (en) * | 2019-03-20 | 2019-06-28 | 上海交通大学 | Gene image procossing estimation method, system, medium and equipment based on deep learning |
CN110263849A (en) * | 2019-06-19 | 2019-09-20 | 合肥工业大学 | A kind of crowd density estimation method based on multiple dimensioned attention mechanism |
CN110263643A (en) * | 2019-05-20 | 2019-09-20 | 上海兑观信息科技技术有限公司 | A kind of fast video people counting method based on sequential relationship |
CN110321869A (en) * | 2019-07-10 | 2019-10-11 | 应急管理部天津消防研究所 | Personnel's detection and extracting method based on Multiscale Fusion network |
CN110598558A (en) * | 2019-08-14 | 2019-12-20 | 浙江省北大信息技术高等研究院 | Crowd density estimation method, device, electronic equipment and medium |
CN110853025A (en) * | 2019-11-15 | 2020-02-28 | 苏州大学 | Crowd density prediction method based on multi-column residual error cavity convolutional neural network |
CN110991267A (en) * | 2019-11-13 | 2020-04-10 | 北京影谱科技股份有限公司 | Density map generation method and device based on image or video crowd counting |
CN111176279A (en) * | 2019-12-31 | 2020-05-19 | 北京四维图新科技股份有限公司 | Method, device, equipment and storage medium for determining vulnerable crowd area |
CN111191525A (en) * | 2019-12-13 | 2020-05-22 | 上海伯镭智能科技有限公司 | People flow density estimation method for open public places based on multi-rotor unmanned aerial vehicles |
CN111488794A (en) * | 2020-02-24 | 2020-08-04 | 华中科技大学 | Adaptive receptive-field crowd density estimation method based on dilated convolution |
CN111563447A (en) * | 2020-04-30 | 2020-08-21 | 南京邮电大学 | Crowd density analysis and detection positioning method based on density map |
CN111767881A (en) * | 2020-07-06 | 2020-10-13 | 中兴飞流信息科技有限公司 | Self-adaptive crowd density estimation device based on AI technology |
CN111783610A (en) * | 2020-06-23 | 2020-10-16 | 西北工业大学 | Cross-domain crowd counting method based on disentangled image translation |
CN111914819A (en) * | 2020-09-30 | 2020-11-10 | 杭州未名信科科技有限公司 | Multi-camera fusion crowd density prediction method and device, storage medium and terminal |
CN112001274A (en) * | 2020-08-06 | 2020-11-27 | 腾讯科技(深圳)有限公司 | Crowd density determination method, device, storage medium and processor |
CN112115862A (en) * | 2020-09-18 | 2020-12-22 | 广东机场白云信息科技有限公司 | Crowded scene pedestrian detection method combined with density estimation |
CN112883768A (en) * | 2019-11-29 | 2021-06-01 | 华为技术有限公司 | Object counting method and device, equipment and storage medium |
CN112989952A (en) * | 2021-02-20 | 2021-06-18 | 复旦大学 | Crowd density estimation method and device based on mask guidance |
CN112989916A (en) * | 2020-12-17 | 2021-06-18 | 北京航空航天大学 | Crowd counting method combining density estimation and target detection |
CN114663830A (en) * | 2022-03-04 | 2022-06-24 | 山东巍然智能科技有限公司 | Method for calculating number of people in multi-camera scene based on graph structure matching |
WO2022188030A1 (en) * | 2021-03-09 | 2022-09-15 | 中国科学院深圳先进技术研究院 | Crowd density estimation method, electronic device and storage medium |
CN114663830B (en) * | 2022-03-04 | 2024-05-14 | 山东巍然智能科技有限公司 | Method for calculating number of people in multi-camera scene based on graph structure matching |
2018-05-15: Application CN201810461606.3A filed in China (published as CN108717528A); legal status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101872431A (en) * | 2010-02-10 | 2010-10-27 | 杭州海康威视软件有限公司 | People flow statistics method and system applicable to multi-angle application scenarios |
CN106326937A (en) * | 2016-08-31 | 2017-01-11 | 郑州金惠计算机***工程有限公司 | Convolutional neural network based crowd density distribution estimation method |
CN107679503A (en) * | 2017-10-12 | 2018-02-09 | 中科视拓(北京)科技有限公司 | Crowd counting algorithm based on deep learning |
CN108009477A (en) * | 2017-11-10 | 2018-05-08 | 东软集团股份有限公司 | Image-based people flow detection method, device, storage medium and electronic equipment |
CN107909044A (en) * | 2017-11-22 | 2018-04-13 | 天津大学 | People counting method combining convolutional neural networks and trajectory prediction |
Non-Patent Citations (1)
Title |
---|
DEEPAK BABU SAM et al.: "Switching Convolutional Neural Network for Crowd Counting", IEEE *
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447008A (en) * | 2018-11-02 | 2019-03-08 | 中山大学 | Crowd analysis method based on attention mechanism and deformable convolutional neural networks |
CN109697435A (en) * | 2018-12-14 | 2019-04-30 | 重庆中科云从科技有限公司 | People flow monitoring method, device, storage medium and equipment |
CN109697435B (en) * | 2018-12-14 | 2020-10-23 | 重庆中科云从科技有限公司 | People flow monitoring method and device, storage medium and equipment |
CN109635763B (en) * | 2018-12-19 | 2020-06-09 | 燕山大学 | Crowd density estimation method |
CN109635763A (en) * | 2018-12-19 | 2019-04-16 | 燕山大学 | Crowd density estimation method |
CN109726658A (en) * | 2018-12-21 | 2019-05-07 | 上海科技大学 | Crowd counting and positioning method and system, electronic terminal and storage medium |
CN109726658B (en) * | 2018-12-21 | 2022-10-04 | 上海科技大学 | Crowd counting and positioning method and system, electronic terminal and storage medium |
CN109948703A (en) * | 2019-03-20 | 2019-06-28 | 上海交通大学 | Gene image processing estimation method, system, medium and equipment based on deep learning |
CN110263643A (en) * | 2019-05-20 | 2019-09-20 | 上海兑观信息科技技术有限公司 | Fast video crowd counting method based on temporal relationships |
CN110263643B (en) * | 2019-05-20 | 2023-05-16 | 上海兑观信息科技技术有限公司 | Fast video crowd counting method based on temporal relationships |
CN110263849A (en) * | 2019-06-19 | 2019-09-20 | 合肥工业大学 | Crowd density estimation method based on multi-scale attention mechanism |
CN110321869A (en) * | 2019-07-10 | 2019-10-11 | 应急管理部天津消防研究所 | Person detection and extraction method based on multi-scale fusion network |
CN110598558A (en) * | 2019-08-14 | 2019-12-20 | 浙江省北大信息技术高等研究院 | Crowd density estimation method, device, electronic equipment and medium |
CN110991267A (en) * | 2019-11-13 | 2020-04-10 | 北京影谱科技股份有限公司 | Density map generation method and device based on image or video crowd counting |
CN110853025A (en) * | 2019-11-15 | 2020-02-28 | 苏州大学 | Crowd density prediction method based on a multi-column residual dilated convolutional neural network |
CN112883768A (en) * | 2019-11-29 | 2021-06-01 | 华为技术有限公司 | Object counting method and device, equipment and storage medium |
CN112883768B (en) * | 2019-11-29 | 2024-02-09 | 华为云计算技术有限公司 | Object counting method and device, equipment and storage medium |
CN111191525A (en) * | 2019-12-13 | 2020-05-22 | 上海伯镭智能科技有限公司 | People flow density estimation method for open public places based on multi-rotor unmanned aerial vehicles |
CN111176279A (en) * | 2019-12-31 | 2020-05-19 | 北京四维图新科技股份有限公司 | Method, device, equipment and storage medium for determining vulnerable crowd area |
CN111176279B (en) * | 2019-12-31 | 2023-09-26 | 北京四维图新科技股份有限公司 | Determination method, device, equipment and storage medium for vulnerable crowd area |
CN111488794A (en) * | 2020-02-24 | 2020-08-04 | 华中科技大学 | Adaptive receptive-field crowd density estimation method based on dilated convolution |
CN111563447A (en) * | 2020-04-30 | 2020-08-21 | 南京邮电大学 | Crowd density analysis and detection positioning method based on density map |
CN111563447B (en) * | 2020-04-30 | 2022-07-22 | 南京邮电大学 | Crowd density analysis and detection positioning method based on density map |
CN111783610A (en) * | 2020-06-23 | 2020-10-16 | 西北工业大学 | Cross-domain crowd counting method based on disentangled image translation |
CN111767881A (en) * | 2020-07-06 | 2020-10-13 | 中兴飞流信息科技有限公司 | Self-adaptive crowd density estimation device based on AI technology |
CN112001274A (en) * | 2020-08-06 | 2020-11-27 | 腾讯科技(深圳)有限公司 | Crowd density determination method, device, storage medium and processor |
CN112001274B (en) * | 2020-08-06 | 2023-11-17 | 腾讯科技(深圳)有限公司 | Crowd density determining method, device, storage medium and processor |
CN112115862A (en) * | 2020-09-18 | 2020-12-22 | 广东机场白云信息科技有限公司 | Crowded scene pedestrian detection method combined with density estimation |
CN112115862B (en) * | 2020-09-18 | 2023-08-29 | 广东机场白云信息科技有限公司 | Congestion scene pedestrian detection method combined with density estimation |
CN111914819A (en) * | 2020-09-30 | 2020-11-10 | 杭州未名信科科技有限公司 | Multi-camera fusion crowd density prediction method and device, storage medium and terminal |
CN112989916A (en) * | 2020-12-17 | 2021-06-18 | 北京航空航天大学 | Crowd counting method combining density estimation and target detection |
CN112989952A (en) * | 2021-02-20 | 2021-06-18 | 复旦大学 | Crowd density estimation method and device based on mask guidance |
WO2022188030A1 (en) * | 2021-03-09 | 2022-09-15 | 中国科学院深圳先进技术研究院 | Crowd density estimation method, electronic device and storage medium |
CN114663830A (en) * | 2022-03-04 | 2022-06-24 | 山东巍然智能科技有限公司 | Method for calculating number of people in multi-camera scene based on graph structure matching |
CN114663830B (en) * | 2022-03-04 | 2024-05-14 | 山东巍然智能科技有限公司 | Method for calculating number of people in multi-camera scene based on graph structure matching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108717528A (en) | Multi-strategy global crowd analysis method based on deep networks | |
Tao et al. | Smoke detection based on deep convolutional neural networks | |
Siagian et al. | Rapid biologically-inspired scene classification using features shared with visual attention | |
CN106203260A (en) | Pedestrian recognition and tracking method based on a multi-camera surveillance network | |
CN104063719B (en) | Pedestrian detection method and device based on deep convolutional networks | |
CN105069472B (en) | Adaptive vehicle detection method based on convolutional neural networks | |
CN104951775B (en) | Video-based intelligent security recognition method for railway level-crossing signal regions | |
CN105027550B (en) | System and method for processing visual information to detect events | |
CN109447169A (en) | Image processing method, training method for its model, device, and electronic system | |
CN108764085A (en) | People counting method based on generative adversarial networks | |
CN109635875A (en) | End-to-end network interface detection method based on deep learning | |
CN109543695A (en) | General-density people counting method based on multi-scale deep learning | |
CN106778604A (en) | Pedestrian re-identification method based on matching convolutional neural networks | |
CN103268470B (en) | Real-time video object counting method for arbitrary scenes | |
CN108154102A (en) | Traffic sign recognition method | |
CN108710875A (en) | Road vehicle counting method and device for aerial images based on deep learning | |
CN107169435A (en) | Convolutional neural network human action classification method based on simulated radar images | |
CN107085696A (en) | Vehicle localization and type recognition method based on checkpoint images | |
CN107729799A (en) | Visual detection, analysis, and alarm system for abnormal crowd behavior based on deep convolutional neural networks | |
CN109543632A (en) | Deep network pedestrian detection method guided by shallow-layer feature fusion | |
CN109886241A (en) | Driver fatigue detection based on long short-term memory networks | |
CN106683091A (en) | Target classification and pose detection method based on deep convolutional neural networks | |
CN106971563A (en) | Intelligent traffic light control method and system | |
CN107423698A (en) | Pose estimation method based on parallel convolutional neural networks | |
CN107016357A (en) | Video pedestrian detection method based on temporal convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-10-30 |