CN112149457A - People flow statistical method, device, server and computer readable storage medium - Google Patents


Info

Publication number
CN112149457A
CN112149457A (application CN201910564795.1A)
Authority
CN
China
Prior art keywords
group
people flow
frame images
flow direction
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910564795.1A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
季春霖
陈吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Guangqi Intelligent Technology Co ltd
Original Assignee
Xi'an Guangqi Future Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Guangqi Future Technology Research Institute filed Critical Xi'an Guangqi Future Technology Research Institute
Priority to CN201910564795.1A priority Critical patent/CN112149457A/en
Publication of CN112149457A publication Critical patent/CN112149457A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a people flow statistical method, a device, a server and a computer readable storage medium. The method comprises the following steps: acquiring a first group of frame images and a second group of frame images, wherein the first group of frame images come from a first group of camera devices arranged in a controlled area, the second group of frame images come from a second group of camera devices arranged in the controlled area, the shooting angle of the first group of camera devices faces a first people flow direction, and the shooting angle of the second group of camera devices faces a second people flow direction; performing head image detection on the first group of frame images and the second group of frame images respectively; storing, in a structured manner, the head image information detected from the first group of frame images and the head image information detected from the second group of frame images; and calculating the number of people flowing in the first people flow direction and in the second people flow direction from the head image information of the two groups of frame images. Based on the invention, people flow statistics for each flow direction in the controlled area can be obtained relatively simply.

Description

People flow statistical method, device, server and computer readable storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a people flow statistical method, a device, a server and a computer readable storage medium.
Background
For high-traffic areas and venues, government departments are chiefly concerned with judging people flow so as to prevent crowd-crushing and trampling incidents; prevention and advance deployment are the problems such departments must solve. Some methods and accumulated experience already exist for accurately judging the real-time number of people and the flow directions in key areas and venues. The main ways of counting people flow today are manual statistics and checkpoint (gate) statistics, analysed as follows.
1) Manual statistics
Manual statistics is the traditional way of counting people flow: counters are deployed at entrances and exits to tally, in real time, the number of people entering and leaving, and the headcount in the current area is then reported and calculated. This approach requires a large amount of manpower, demands highly concentrated attention from the counters and is therefore error-prone, can only track changes in headcount without capturing the flow directions within a specific area, is difficult to implement where there are many entrances and exits, and is time-consuming and laborious because it must be coordinated with barriers and similar facilities.
2) Checkpoint (gate) statistics
Here the headcount is produced automatically by artificial-intelligence devices: gate equipment installed at the main entrances and exits counts people, the real-time headcount within the current controlled range is calculated from the statistics, and crowds are dispersed accordingly. This method counts headcounts accurately, but it cannot accurately judge information such as the flow directions of individuals inside the controlled area, so identification and crowd guidance must still be done manually.
Therefore, existing people flow statistical methods cannot achieve the expected effect.
Disclosure of Invention
In view of the above, the present invention provides a people flow statistics method, device, server and computer readable storage medium, so as to solve the problems in current people flow statistics over a controlled area.
According to a first aspect of the present invention, there is provided a people flow statistics method, comprising the steps of:
acquiring a first group of frame images and a second group of frame images, wherein the first group of frame images come from a first group of camera devices arranged in the controlled area, the second group of frame images come from a second group of camera devices arranged in the controlled area, the shooting angle of the first group of camera devices faces a first people flow direction, and the shooting angle of the second group of camera devices faces a second people flow direction;
performing head image detection on the first group of frame images and the second group of frame images respectively;
storing, in a structured manner, the head image information detected from the first group of frame images and the head image information detected from the second group of frame images; and
calculating the number of people flowing in the first people flow direction and in the second people flow direction from the head image information of the first group of frame images and of the second group of frame images.
In an alternative embodiment, it is judged whether an overlapping area exists between the shooting range of the first group of camera devices and that of the second group;
if such an overlapping area exists, the proportion of the controlled area it occupies is calculated, and the people flow counts in the first and second directions are corrected according to that proportion.
In an optional embodiment, further comprising: and displaying the people flow quantity in the first people flow direction and the people flow quantity in the second people flow direction.
In an optional embodiment, further comprising: comparing the people flow quantity in the first people flow direction and the people flow quantity in the second people flow direction with a set threshold value respectively;
and if the people flow quantity in the first people flow direction exceeds the set threshold value, and/or the people flow quantity in the second people flow direction exceeds the set threshold value, carrying out corresponding early warning.
In an alternative embodiment, performing head image detection and recognition on the first and second sets of frame images respectively comprises:
outputting the human head images of the first group and the second group of frame images using a YOLO model; and
performing alignment processing on the head images using an MTCNN model so as to distinguish the head image information belonging to the first people flow direction from that belonging to the second.
In an alternative embodiment, the first group of frame images and the second group of frame images are respectively frame images acquired at the same time point.
In an alternative embodiment, the first and second directions of flow are opposite directions.
According to a second aspect of the present invention, there is provided an apparatus for counting people flow in a controlled area, comprising:
the image acquisition module is used for acquiring a first group of frame images and a second group of frame images, wherein the first group of frame images are from a first group of camera devices arranged in the control area, the second group of frame images are from a second group of camera devices arranged in the control area, the shooting visual angle of the first group of camera devices faces to the first passenger flow direction, and the shooting visual angle of the second group of camera devices faces to the second passenger flow direction;
the detection and identification module is used for respectively detecting and identifying the head portrait of the first group of frame images and the second group of frame images;
the structural storage module is used for structurally storing the head portrait information detected based on the first group of frame images and the head portrait information detected based on the second group of frame images; and
and the data statistics module is used for counting the people flow quantity in the first people flow direction and the people flow quantity in the second people flow direction based on the head portrait information of the first group of frame images and the second group of frame images.
In an optional embodiment, the apparatus further comprises a data correction module, used for judging whether an overlapping area exists between the shooting ranges of the first and second groups of camera devices and, if it does, calculating the proportion of the controlled area the overlap occupies and correcting the people flow counts in the first and second directions according to that proportion.
In an optional embodiment, further comprising: and the data display module is used for displaying the people flow quantity in the first people flow direction and the people flow quantity in the second people flow direction.
In an optional embodiment, the apparatus further comprises an early warning module, used for comparing the people flow count in the first direction and the people flow count in the second direction with a set threshold respectively, and issuing a corresponding early warning if the count in the first direction and/or the second direction exceeds the threshold.
In an optional embodiment, the detection and identification module comprises:
the first detection unit is used for respectively outputting the human head images of the first group of frame images and the second group of frame images by adopting a YOLO model;
and the image alignment unit is used for processing the human head image by adopting an MTCNN model so as to distinguish head portrait attributes, wherein the head portrait attributes comprise a front face, a side face and a back head.
According to a third aspect of the present invention, there is provided a server comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the methods described above.
According to a fourth aspect of the present invention, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed, implement the method of any one of the above.
The invention has the following effects: camera devices are arranged facing the different people flow directions, frame images are collected, and the number of people in each flow direction is counted from those frame images, so that people flow statistics for every direction in the whole controlled area can be obtained relatively simply, resolving the current situation in which security departments must deploy large amounts of manpower to report on and channel the flows.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing embodiments of the present invention with reference to the following drawings, in which:
FIG. 1 is a schematic illustration of an exemplary deployment area;
FIG. 2 is a schematic diagram of the connection between the camera devices and the server;
FIG. 3 is a flowchart of a method for counting people in a controlled area according to an embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for people stream statistics of a controlled area according to an embodiment of the present invention;
FIG. 5 is a structural diagram of a server for performing people flow statistics on a controlled area according to an embodiment of the present invention.
Detailed Description
The present invention will be described below based on examples, but it is not limited to these examples. In the following detailed description certain specific details are set forth; it will be apparent to one skilled in the art that the invention may be practiced without them, and well-known methods, procedures and flows are not described in detail so as not to obscure it. The figures are not necessarily drawn to scale.
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions and operations of the systems, methods and apparatuses according to the embodiments of the invention. A block may represent a module, a program segment, or merely a code segment: executable instructions implementing a specified logical function. Those instructions may also be recombined to create new modules and program segments. The blocks and their order are therefore provided to better illustrate the processes and steps of the embodiments and should not be taken as limiting the invention itself.
FIG. 1 is a schematic illustration of an exemplary controlled area. As shown in FIG. 1, the controlled area 100 is an S-shaped passage in which columns 11-14 for mounting cameras are placed at intervals. A beam (not shown) may be mounted on the columns 11-14 so that cameras can be arranged on it, or the camera devices 1-8 may be mounted directly on the columns. The shooting angles of camera devices 1, 3, 5 and 7 face the first people flow direction, and those of camera devices 2, 4, 6 and 8 face the second people flow direction; the devices are thus divided, by the orientation of their shooting angle, into a first group and a second group. As shown in FIG. 2, camera devices 1, 3, 5 and 7 form the first group and camera devices 2, 4, 6 and 8 the second. The camera devices 1-8 are each connected to the server 201 in a wired or wireless manner and upload the video they shoot in real time to the server 201.
The shooting angle determines the shooting range of a camera device. With continued reference to FIG. 1, area 30 shows the shooting range of camera device 5 in the first group, and area 20 that of camera device 4 in the second group. As the figure shows, areas 20 and 30 do not overlap, and essentially every individual between the two cameras is captured on video. An individual travelling in the first flow direction appears as a back-of-head image in the frames captured by camera device 5, while an individual travelling in the second flow direction appears there as a front-face or side-face image; for camera device 4 the reverse holds. The number of back-of-head images in the frames of device 5 plus the number of front-face images from device 4 therefore equals exactly the number of people moving in the first direction between devices 4 and 5, and the number of front-face (side-face) images in the frames of device 5 plus the back-of-head images from device 4 equals exactly the number moving in the second direction. This is, of course, the ideal case; in practice, the spacing of the columns and the shooting angles of the camera devices can be adjusted to approach it.
In practice, the controlled area 100 is typically a place with heavy foot traffic, such as a station, airport, shopping plaza or tourist attraction. In such places, detecting, judging and predicting the people flow directions and counts is critical: doing so accurately makes it possible to prevent crowd-crushing and trampling incidents in advance and provides a reliable basis for restricting and dispersing the flow. The following embodiments therefore present a method for counting people flow in a controlled area.
Fig. 3 is a flowchart of a method for counting people flow in a controlled area according to an embodiment of the present invention. The method specifically comprises the following steps.
In step S301, a first set of frame images and a second set of frame images are acquired.
The first group of frame images come from a first group of camera devices arranged in the controlled area, and the second group of frame images from a second group of camera devices arranged in the same area; the first group of camera devices are oriented according to the first people flow direction and the second group according to the second. The two flow directions depend on the flow path of the controlled area: in FIG. 1, for example, they are opposite, i.e. 180 degrees apart. Note that although in FIG. 1 each group contains four camera devices, the invention is not limited to this; each group may, for example, contain only one camera device.
Referring to FIG. 2, when the server receives the video data shot in real time by the first and second groups of camera devices, it extracts the frame images of a number of key frames from the first group's video as the first group of frame images, and likewise from the second group's video as the second group of frame images. The video data may be in any media format, for example a stream carried over the Real-Time Streaming Protocol (RTSP), and the extracted frame images may be in any picture format.
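The sampling step above can be sketched in pure Python. This is a stand-in only: a real deployment would decode the RTSP stream with a video library, and the frame placeholders and sampling interval below are illustrative assumptions, not values from the patent.

```python
def extract_key_frames(frames, interval):
    """Sample every `interval`-th frame from a decoded frame sequence.

    `frames` stands in for the frames decoded from one camera's video
    stream; in a real system each element would be an image array.
    """
    return [frame for i, frame in enumerate(frames) if i % interval == 0]

# Illustrative: 12 decoded frames from each camera group, sampled every 4th.
first_group_frames = extract_key_frames(list(range(12)), 4)   # [0, 4, 8]
second_group_frames = extract_key_frames(list(range(12)), 4)
```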
In step S302, avatar detection is performed on the first group of frame images and the second group of frame images, respectively.
In this step, human head detection is performed on the extracted frame images using a target detection model, a trained neural network that marks the head images contained in the first and second groups of frame images so as to obtain the head image information. A head image here may be a front-face, side-face or back-of-head image of a person.
In step S303, the avatar information detected based on the first group of frame images and the avatar information detected based on the second group of frame images are structurally stored.
In this step, a database and a data structure for storing the head image information are determined, and the information detected from the first and second groups of frame images is consolidated and stored accordingly.
In an optional embodiment, a relational database stores the head image information. The corresponding table contains fields such as head image ID, head attribute, device number and frame number; the values for these fields are extracted from the head image information and written to the table. The head attribute is one of three options: front face, side face or back of head. The device number uniquely identifies the camera device and determines whether it belongs to the first or the second group. The frame number uniquely identifies a frame image and can therefore be the timestamp at which the frame was acquired. If a later step is expected to need the head image itself, the image can also be stored in the table as binary image data.
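As a minimal sketch of such a table (SQLite in memory; the table name, column names and sample values are illustrative assumptions, not prescribed by the patent):

```python
import sqlite3

# In-memory database; the table mirrors the fields named in the text:
# head image ID, head attribute (front/side/back), device number, frame number.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE head_info (
        head_id    INTEGER PRIMARY KEY,
        head_attr  TEXT CHECK (head_attr IN ('front', 'side', 'back')),
        device_no  TEXT,    -- unique camera number; determines the group
        frame_no   TEXT     -- timestamp identifying the frame image
    )
""")
conn.execute(
    "INSERT INTO head_info (head_attr, device_no, frame_no) VALUES (?, ?, ?)",
    ("front", "cam-5", "2019-06-26T10:00:00"),
)
rows = conn.execute("SELECT head_attr, device_no FROM head_info").fetchall()
```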
In another alternative embodiment, a document database stores the head image information: each document holds the images themselves, such as front-face, side-face and back-of-head images, together with metadata such as summary information for each image, including, for example, the head image ID, head attribute and pixel characteristics.
In step S304, the number of people flowing in the first direction and the number of people flowing in the second direction are calculated from the avatar information of the first group of frame images and the avatar information of the second group of frame images.
In this step, statistical calculation is carried out on the head image information to obtain the number of people in the first and second flow directions. The number can be expressed, for example, as the number of individuals within one square kilometre. The statistical formula depends on how the camera devices are deployed and oriented; take the arrangement in FIG. 1 as an example. Suppose 2000 front/side faces and 500 backs of heads are detected in the first group of frame images, and 400 front/side faces and 1000 backs of heads in the second group. Following the derivation for FIG. 1, the number of people moving in the first direction equals the 500 back-of-head images of the first group plus the 400 faces of the second group, and the number moving in the second direction equals the 2000 faces of the first group plus the 1000 back-of-head images of the second group; the people flow counts are then calculated accordingly.
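The worked example can be reproduced with a small helper. This is a sketch: the dict layout is an assumption, and the directional convention follows the FIG. 1 derivation, where a person moving in the first direction shows the back of the head to the first camera group.

```python
def count_flows(group1, group2):
    """Combine per-group head-attribute counts into per-direction totals.

    Convention from the FIG. 1 derivation: a person moving in the first
    flow direction shows the back of the head to the first camera group
    and the face to the second group, and vice versa. Each argument is
    a dict like {"face": front_plus_side_count, "back": back_count}.
    """
    first_direction = group1["back"] + group2["face"]
    second_direction = group1["face"] + group2["back"]
    return first_direction, second_direction

# Numbers from the example: 2000 faces + 500 backs in the first group,
# 400 faces + 1000 backs in the second group.
d1, d2 = count_flows({"face": 2000, "back": 500}, {"face": 400, "back": 1000})
# d1 == 900, d2 == 3000
```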
Based on this embodiment, people flow statistics for every flow direction in the whole controlled area can be obtained relatively simply, resolving the current situation in which security departments must deploy large amounts of manpower to report on and channel the flows. From accurate statistics the security department can know the people flow situation in every direction across the controlled area, impose precise flow restrictions, and carry out rapid, unified dispatch and deployment on that basis, improving overall response efficiency.
It should be understood that the accuracy of the invention depends on the arrangement of the camera devices and the configured shooting angles. Although the cameras can be adjusted manually to approach the ideal case, duplicate records may still occur: for example, if one frame image contains a person's front face and another frame contains the same person's side face, that person is counted twice. Conversely, if no current frame image contains a person's head image, that person is missed. One option is therefore to consolidate the head image information, merging records that represent the same person before counting; another is to take the head image information at multiple time points and use the average as the statistical result. In addition, to avoid missing individuals, the shooting ranges of two opposed cameras can be made to overlap slightly, i.e. areas 20 and 30 in FIG. 1 would share an overlapping region. To keep the final counts accurate, the numbers of people in the first and second flow directions obtained in the embodiment above can then be corrected according to the size of that overlapping region.
Specifically, it is determined whether the shooting ranges of the first and second groups of camera devices overlap. If they do, the proportion of the controlled area occupied by the overlapping region is calculated, and the people flow counts in the first and second directions are corrected according to that proportion; if they do not, the counts obtained in step S304 need no correction.
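A sketch of the correction, under the assumption (the text does not give the exact formula) that individuals in the overlap are captured by both camera groups, so the raw counts over-count by roughly the overlap's share of the area:

```python
def corrected_counts(count_1, count_2, overlap_area, control_area):
    """Scale raw per-direction counts down by the overlap proportion.

    Assumed model: people inside the overlap region appear in the frames
    of both camera groups, inflating each direction's raw count by about
    the overlap's share of the controlled area.
    """
    if overlap_area <= 0:
        return count_1, count_2          # no overlap: no correction needed
    ratio = overlap_area / control_area  # area proportion of the overlap
    return round(count_1 * (1 - ratio)), round(count_2 * (1 - ratio))
```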
In an alternative embodiment of step S302, the YOLO model is used to mark the head images contained in the first and second groups of frame images, and the MTCNN model is then used to align the head images and output their head attributes. The YOLO model is a trained target detection model that marks the position of each human head (front face, side face or back of head) in an image. The MTCNN model is a trained alignment network that marks the positions of the facial features on a head image and thereby judges whether it is a front-face, side-face or back-of-head image. Of course, the invention does not depend on these two neural network models; other algorithms or models may be used instead.
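A toy stand-in for the attribute-judging step, to show where it fits in the pipeline. A real system would run a YOLO detector to crop head regions and an MTCNN network to locate facial landmarks; here the landmark count is taken as given, and the thresholds are illustrative assumptions only.

```python
def classify_head(visible_landmarks):
    """Classify a head crop by how many facial landmarks were located.

    Assumed rule (not from the patent): all five MTCNN landmarks visible
    implies a front face, a couple visible implies a side face, and none
    visible implies the back of the head.
    """
    if visible_landmarks >= 5:
        return "front"
    if visible_landmarks >= 2:
        return "side"
    return "back"

attrs = [classify_head(n) for n in (5, 3, 0)]   # ['front', 'side', 'back']
```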
The embodiments above are described using two people flow directions as an example, but the invention also extends to controlled areas with, for example, four flow directions. With directions east, south, west and north, the results might show that 20% of the people currently in the area are walking east, 30% west, 30% south and 20% north, and the security department can then issue precise early warnings and impose controls at the entrances and exits according to the actual flows. For example, a people flow threshold is set for the controlled area; when the count in one or more directions exceeds the threshold, a corresponding early warning is issued, and no warning is issued while every direction remains below it. The people flow data, and likewise the warning information, can be presented on a display screen.
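The thresholding described above can be sketched as follows; the direction names, counts and threshold value are illustrative assumptions.

```python
def flow_warnings(direction_counts, threshold):
    """Return the directions whose people-flow count exceeds the threshold.

    `direction_counts` maps a direction name to its counted flow; any
    number of directions is supported, matching the four-direction
    extension described in the text.
    """
    return [d for d, n in direction_counts.items() if n > threshold]

counts = {"east": 2000, "west": 3000, "south": 3000, "north": 2000}
alerts = flow_warnings(counts, 2500)   # directions needing early warning
```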
Fig. 4 is a structural diagram of an apparatus for counting people flow in a controlled area according to an embodiment of the present invention. The device runs on a server as shown in fig. 2, and specifically includes the following modules.
The image acquisition module 401 is configured to acquire a first group of frame images and a second group of frame images, where the first group come from a first group of camera devices arranged in the controlled area, the second group from a second group of camera devices in the same area, the shooting angle of the first group faces the first people flow direction, and that of the second group faces the second people flow direction.
A detection and identification module 402, configured to perform avatar detection on the first group of frame images and the second group of frame images, respectively.
A structured storage module 403, configured to store the detected avatar information based on the first set of frame images and the detected avatar information based on the second set of frame images in a structured manner.
And a data statistics module 404, configured to count a number of people flowing in a first people flowing direction and a number of people flowing in a second people flowing direction based on the avatar information of the first group of frame images and the second group of frame images.
Based on this apparatus, people flow statistics for every flow direction in the whole controlled area can be obtained relatively simply, resolving the current situation in which security departments must deploy large amounts of manpower to report on and channel the flows; with accurate statistics the security department can know the people flow situation in every direction, impose precise flow restrictions, and carry out rapid, unified dispatch and deployment, improving overall response efficiency.
In an optional embodiment, the apparatus further comprises: a data correction module, configured to determine whether to correct the number of people in the first people flow direction and the number of people in the second people flow direction according to whether an overlapping area exists between the shooting range of the first group of camera devices and the shooting range of the second group of camera devices.
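The patent leaves the exact correction formula open; the following sketch shows one plausible reading, in which heads inside the overlap region may be captured by both camera groups, so each direction's count is discounted by the overlap's share of the controlled area. The function name and the even split of the overlap between the two directions are assumptions:

```python
def correct_counts(count_first, count_second, overlap_area, control_area):
    """Hypothetical correction by area proportion: discount both direction
    counts according to the fraction of the controlled area covered by the
    overlap of the two camera groups' shooting ranges."""
    if control_area <= 0:
        return count_first, count_second
    ratio = overlap_area / control_area
    # Split the potentially double-counted share evenly between directions.
    factor = 1.0 - ratio / 2.0
    return round(count_first * factor), round(count_second * factor)

print(correct_counts(100, 80, 20, 100))  # (90, 72)
```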
In an optional embodiment, the apparatus further comprises: a data display module, configured to display the number of people in the first people flow direction and the number of people in the second people flow direction.
In an optional embodiment, the apparatus further comprises: an early warning module, configured to give an early warning according to the number of people in the first people flow direction and the number of people in the second people flow direction.
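A minimal sketch of the threshold comparison behind the early warning; the function name and return convention are illustrative:

```python
def check_warning(count_first, count_second, threshold):
    """Compare each direction's people count against a set threshold and
    return the directions that should trigger an early warning."""
    alerts = []
    if count_first > threshold:
        alerts.append("first")
    if count_second > threshold:
        alerts.append("second")
    return alerts

print(check_warning(120, 50, 100))  # ['first']
```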
In an optional embodiment, the detection and identification module comprises:
a first detection unit, configured to output the human head images of the first group of frame images and the second group of frame images, respectively, using a YOLO model; and
an image alignment unit, configured to process the human head images using an MTCNN model to distinguish head portrait attributes, where the head portrait attributes include a front face, a side face, and the back of the head.
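The two-stage detection could be sketched as follows. The `head_detector` and `attribute_classifier` objects are hypothetical wrappers standing in for trained YOLO and MTCNN models, which this sketch does not load:

```python
def detect_heads(frames, head_detector, attribute_classifier):
    """Two-stage sketch of the detection and identification module: a
    YOLO-style detector proposes head bounding boxes, then an MTCNN-style
    classifier labels each head as 'front', 'side', or 'back'."""
    results = []
    for frame_id, frame in enumerate(frames):
        for box in head_detector.detect(frame):   # head bounding boxes
            results.append({
                "frame_id": frame_id,
                "box": box,
                "attribute": attribute_classifier.classify(frame, box),
            })
    return results
```

The returned dicts correspond to the structured head portrait information the storage module would persist.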
In an alternative embodiment, the first group of frame images and the second group of frame images are each a plurality of frame images acquired at the same time intervals within the same set period.
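Assuming a fixed camera frame rate, the equal-interval sampling might select frame indices as below; all parameter names are illustrative, not from the patent:

```python
def sample_frame_indices(period_s, interval_s, fps):
    """Indices of the frames each camera group would keep when sampling at
    the same time interval within the same set period."""
    step = max(1, int(interval_s * fps))
    total = int(period_s * fps)
    return list(range(0, total, step))

# Every 2 s over a 10 s period at 25 fps:
print(sample_frame_indices(10, 2, 25))  # [0, 50, 100, 150, 200]
```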
In an alternative embodiment, the first people flow direction and the second people flow direction are opposite directions.
It is to be understood that the embodiments of the apparatus correspond to the embodiments of the method of the invention; therefore, some repeated details are not described again in the apparatus embodiments.
Fig. 5 is a block diagram of a server according to an embodiment of the present invention. Referring to Fig. 5, the server device 50 includes at least one processor 510, a memory 520, an input device 530, and an output device 540 connected by a bus. The memory 520 includes read-only memory (ROM) and random-access memory (RAM) and stores the various computer instructions and data needed to perform system functions; the processor 510 reads these instructions from the memory 520 to perform various appropriate actions and processes. The input/output devices include an input portion such as a keyboard and a mouse; an output portion including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card or a modem. The memory 520 also stores specific computer instructions that the processor 510 reads and executes to perform the following operations: acquiring a first group of frame images and a second group of frame images, where the first group of frame images comes from a first group of camera devices arranged in the controlled area and the second group of frame images comes from a second group of camera devices arranged in the controlled area, the shooting angle of the first group of camera devices facing the first people flow direction and the shooting angle of the second group of camera devices facing the second people flow direction; performing head portrait detection on the first group of frame images and the second group of frame images, respectively; storing, in a structured manner, the head portrait information detected from the first group of frame images and the head portrait information detected from the second group of frame images; and counting the number of people in the first people flow direction and the number of people in the second people flow direction according to the head portrait information of the first group of frame images and the second group of frame images.
Accordingly, an embodiment of the present invention provides a computer-readable storage medium that stores computer instructions which, when executed, implement the operations of the people flow statistical method described above.
The flowcharts and block diagrams in the figures illustrate possible architectures, functions, and operations of the systems, methods, and apparatuses according to the embodiments of the present invention. A block may represent a module, a program segment, or simply a segment of code containing executable instructions for implementing a specified logical function. It should also be noted that the executable instructions implementing the specified logical functions may be recombined to create new modules and program segments. The blocks of the drawings, and their order, are thus provided to better illustrate the processes and steps of the embodiments and should not be taken as limiting the invention itself.
The various modules or units of the system may be implemented in hardware, firmware, or software. The software includes, for example, program code written in various programming languages such as Java, C/C++/C#, and SQL. Although the steps of the embodiments of the present invention are presented in a particular order in the methods and method diagrams, the executable instructions implementing the specified logical functions may be recombined to create new steps, and the order of the steps may be modified at any time according to functional requirements, for example by performing some steps in parallel or in reverse order; the order should not be limited to that shown in the methods and method illustrations.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (14)

1. A people flow statistical method is characterized by comprising the following steps:
acquiring a first group of frame images and a second group of frame images, wherein the first group of frame images comes from a first group of camera devices and the second group of frame images comes from a second group of camera devices, the shooting angle of the first group of camera devices facing a first people flow direction and the shooting angle of the second group of camera devices facing a second people flow direction;
performing head portrait detection on the first group of frame images and the second group of frame images, respectively;
storing, in a structured manner, the head portrait information detected based on the first group of frame images and the head portrait information detected based on the second group of frame images; and
counting the number of people in the first people flow direction and the number of people in the second people flow direction according to the head portrait information of the first group of frame images and the head portrait information of the second group of frame images.
2. The method of claim 1, further comprising:
determining whether an overlapping area exists between the shooting range of the first group of camera devices and the shooting range of the second group of camera devices; and
if an overlapping area exists between the shooting range of the first group of camera devices and the shooting range of the second group of camera devices, calculating the proportion of the controlled area occupied by the overlapping area, and correcting the number of people in the first people flow direction and the number of people in the second people flow direction according to that proportion.
3. The method of claim 1, further comprising: displaying the number of people in the first people flow direction and the number of people in the second people flow direction.
4. The method of claim 1, further comprising:
comparing the number of people in the first people flow direction and the number of people in the second people flow direction with a set threshold, respectively; and
if the number of people in the first people flow direction exceeds the set threshold, and/or the number of people in the second people flow direction exceeds the set threshold, giving a corresponding early warning.
5. The method of claim 1, wherein performing head portrait detection and recognition on the first group of frame images and the second group of frame images, respectively, comprises:
outputting the human head images of the first group of frame images and the second group of frame images, respectively, using a YOLO model; and
processing the human head images using an MTCNN model to distinguish head portrait attributes, wherein the head portrait attributes include a front face, a side face, and the back of the head.
6. The method of claim 1, wherein the first group of frame images and the second group of frame images are each frame images acquired at the same point in time.
7. The method according to any one of claims 1 to 6, wherein the first people flow direction and the second people flow direction are opposite directions, and the head portrait information includes front-face head portrait information, side-face head portrait information, and back-of-head head portrait information.
8. An apparatus for people stream statistics, comprising:
an image acquisition module, configured to acquire a first group of frame images and a second group of frame images, wherein the first group of frame images comes from a first group of camera devices and the second group of frame images comes from a second group of camera devices, the shooting angle of the first group of camera devices facing a first people flow direction and the shooting angle of the second group of camera devices facing a second people flow direction;
a detection and identification module, configured to perform head portrait detection on the first group of frame images and the second group of frame images, respectively;
a structured storage module, configured to store, in a structured manner, the head portrait information detected based on the first group of frame images and the head portrait information detected based on the second group of frame images; and
a data statistics module, configured to count the number of people in the first people flow direction and the number of people in the second people flow direction based on the head portrait information of the first group of frame images and the second group of frame images.
9. The apparatus of claim 8, further comprising:
a data correction module, configured to determine whether an overlapping area exists between the shooting range of the first group of camera devices and the shooting range of the second group of camera devices, and, if such an overlapping area exists, to calculate the proportion of the controlled area occupied by the overlapping area and correct the number of people in the first people flow direction and the number of people in the second people flow direction according to that proportion.
10. The apparatus of claim 8, further comprising: a data display module, configured to display the number of people in the first people flow direction and the number of people in the second people flow direction.
11. The apparatus of claim 8, further comprising: an early warning module, configured to compare the number of people in the first people flow direction and the number of people in the second people flow direction with a set threshold, respectively, and to give a corresponding early warning if the number of people in the first people flow direction exceeds the set threshold and/or the number of people in the second people flow direction exceeds the set threshold.
12. The apparatus of claim 8, wherein the detection and identification module comprises:
a first detection unit, configured to output the human head images of the first group of frame images and the second group of frame images, respectively, using a YOLO model; and
an image alignment unit, configured to process the human head images using an MTCNN model to distinguish head portrait attributes, wherein the head portrait attributes include a front face, a side face, and the back of the head.
13. A server, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any of the preceding claims 1 to 7.
14. A computer-readable storage medium storing computer instructions which, when executed, implement the method of any one of claims 1 to 7.
CN201910564795.1A 2019-06-27 2019-06-27 People flow statistical method, device, server and computer readable storage medium Pending CN112149457A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910564795.1A CN112149457A (en) 2019-06-27 2019-06-27 People flow statistical method, device, server and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN112149457A true CN112149457A (en) 2020-12-29

Family

ID=73868553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910564795.1A Pending CN112149457A (en) 2019-06-27 2019-06-27 People flow statistical method, device, server and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112149457A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063728A (en) * 2022-07-07 2022-09-16 中国兵器装备集团自动化研究所有限公司 Personnel access statistical method and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303727A (en) * 2008-07-08 2008-11-12 北京中星微电子有限公司 Intelligent management method based on video human number Stat. and system thereof
CN105554451A (en) * 2015-12-10 2016-05-04 天津艾思科尔科技有限公司 Image sensor with flow statistical function
CN105763853A (en) * 2016-04-14 2016-07-13 北京中电万联科技股份有限公司 Emergency early warning method for stampede accident in public area
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system
CN108206935A (en) * 2016-12-16 2018-06-26 北京迪科达科技有限公司 A kind of personnel amount statistical monitoring analysis system
CN108564774A (en) * 2018-06-01 2018-09-21 郑子哲 A kind of intelligent campus based on video people stream statistical technique is anti-to trample prior-warning device
CN108629230A (en) * 2017-03-16 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of demographic method and device and elevator scheduling method and system
CN108932464A (en) * 2017-06-09 2018-12-04 北京猎户星空科技有限公司 Passenger flow volume statistical method and device
CN108986064A (en) * 2017-05-31 2018-12-11 杭州海康威视数字技术股份有限公司 A kind of people flow rate statistical method, equipment and system
CN109145127A (en) * 2018-06-20 2019-01-04 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109145708A (en) * 2018-06-22 2019-01-04 南京大学 A kind of people flow rate statistical method based on the fusion of RGB and D information
CN109635675A (en) * 2018-11-22 2019-04-16 广州市保伦电子有限公司 Video static state demographic method, device and medium based on number of people detection
CN109784462A (en) * 2018-12-29 2019-05-21 浙江大华技术股份有限公司 A kind of passenger number statistical system, method and storage medium
CN109902551A (en) * 2018-11-09 2019-06-18 阿里巴巴集团控股有限公司 The real-time stream of people's statistical method and device of open scene



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221207

Address after: 710000 second floor, building B3, yunhuigu, No. 156, Tiangu 8th Road, software new town, high tech Zone, Xi'an, Shaanxi

Applicant after: Xi'an Guangqi Intelligent Technology Co.,Ltd.

Address before: 710000 second floor, building B3, yunhuigu, 156 Tiangu 8th Road, software new town, Xi'an high tech Zone, Xi'an City, Shaanxi Province

Applicant before: Xi'an Guangqi Future Technology Research Institute
