CN115063728A - Personnel access statistical method and system - Google Patents

Personnel access statistical method and system

Info

Publication number
CN115063728A
Authority
CN
China
Prior art keywords
personnel
entering
information
detection
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210802395.1A
Other languages
Chinese (zh)
Inventor
王鑫
陈昌金
牟俊杰
蔡华闽
乐晋昆
李小兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China South Industries Group Automation Research Institute
Original Assignee
China South Industries Group Automation Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China South Industries Group Automation Research Institute filed Critical China South Industries Group Automation Research Institute
Priority to CN202210802395.1A
Publication of CN115063728A
Legal status: Pending


Classifications

    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion (surveillance or monitoring of activities)
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G06T2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)

Abstract

The invention discloses a personnel entry and exit statistical method and system. The method comprises: receiving a video frame to be detected of a target area acquired by an image acquisition device; acquiring personnel information in the video frame to be detected using a target detection model, the personnel information comprising the personnel position and the personnel entry and exit state; and judging whether the personnel position is located at a preset counting position. The method does not depend on a tracking model; instead, a target detection model detects human heads in real time, so that entering and leaving persons are classified in real time. An electronic fence for counting entering persons and an electronic fence for counting leaving persons are set, and entering and leaving persons are counted separately as they break into the corresponding fence. The method achieves real-time statistics on embedded devices, copes with high-traffic scenes with almost no drop in counting speed, handles complex backgrounds, and is not prone to false or missed counts.

Description

Personnel access statistical method and system
Technical Field
The invention relates to the technical field of personnel entry and exit statistics, and in particular to a personnel entry and exit statistical method and system that are suitable for embedded devices and support real-time, high-traffic statistics.
Background
The personnel entry and exit statistics problem refers to counting the number of people entering and leaving a given area within a given time period. An algorithm that solves this problem is called a personnel entry and exit statistical algorithm.
In the prior art, personnel entry and exit statistical algorithms are usually developed on top of tracking algorithms. Tracking involves feature matching and similar processing, so its computational load is large, and the complexity of a tracking algorithm grows exponentially with the number of tracked persons; tracking algorithms are therefore usually run on servers with ample computing resources.
In the post-PC era of rapidly developing digital information and network technology, embedded systems have penetrated fields such as industry, agriculture, education, national defense, scientific research and daily life thanks to their small size, high reliability, strong functionality, flexibility and convenience, and have strongly promoted technical upgrading, product renewal, automation and productivity across industries. However, because an embedded system has limited computing power, it cannot supply enough resources to run a tracking-based personnel entry and exit statistical algorithm directly. To implement entry and exit statistics on an embedded system, the prior art therefore often uses simpler pedestrian detection and tracking algorithms, for example extracting pedestrian features with classical machine learning methods (HOG features, etc.) for fast pedestrian detection.
However, although entry and exit statistical algorithms based on traditional machine learning can run on embedded devices, traditional machine learning requires extensive parameter tuning to work well, which greatly raises the barrier to use, and it produces many false and missed detections when the background is complex. In addition, under heavy pedestrian traffic, even a simpler detection and tracking algorithm runs very slowly on an embedded system and cannot deliver real-time statistics for high-traffic entrances and exits.
Disclosure of Invention
In view of the above, the present invention provides a personnel entry and exit statistical method and system to overcome the above problems, or at least partially solve them. Instead of using a tracking algorithm to distinguish entering persons from leaving persons, the entering and leaving information is provided directly by the target detection model; once that information is obtained, the persons are counted accordingly.
The invention provides the following scheme:
a people entry and exit statistical method comprising:
receiving a video frame to be detected of a target area acquired by image acquisition equipment;
acquiring personnel information in the video frame to be detected by adopting a target detection model, wherein the personnel information comprises the personnel position and the personnel entry and exit state;
and judging whether the personnel position is located at a preset counting position, and if so, triggering counting so as to increment the entry or exit statistics according to the personnel state.
Preferably: the target detection model comprises a yolov4-tiny target detection model and the TensorRT inference framework.
Preferably: the detection object of the yolov4-tiny target detection model comprises a human head; the personnel position acquisition method comprises the following steps:
acquiring the position information of a detection frame corresponding to the head of a person in the video frame to be detected;
and determining the position of the person according to the position information of the detection frame.
Preferably: the position information of the detection frame comprises the horizontal and vertical coordinates of the top left corner vertex of the detection frame and the horizontal and vertical coordinates of the bottom right corner vertex of the detection frame.
Preferably: the camera of the image acquisition device directly faces the entrance/exit from inside toward outside; the personnel entry and exit state acquisition method comprises the following steps:
acquiring detection frame category information corresponding to the heads of the persons in the video frame to be detected, wherein the detection frame category information comprises front face and back of head;
and determining the personnel entry and exit state according to the detection frame category information.
Preferably: the determining of the personnel entry and exit state according to the detection frame category information comprises the following steps:
when the detection frame category information is front face, determining that the personnel entry and exit state is entering;
and when the detection frame category information is back of head, determining that the personnel entry and exit state is leaving.
Preferably: the counting positions include an entering electronic fence and a leaving electronic fence.
Preferably: the incrementing of the entry and exit statistics according to the personnel state comprises the following steps:
if the personnel entry and exit state is entering and the personnel position is located in the entering electronic fence, adding 1 to the entering-person count;
and if the personnel entry and exit state is leaving and the personnel position is located in the leaving electronic fence, adding 1 to the leaving-person count.
A people entry and exit statistical system, the system comprising:
the video frame acquisition module is used for receiving a video frame to be detected of a target area acquired by the image acquisition equipment;
the personnel information detection module is used for acquiring personnel information in the video frame to be detected by adopting a target detection model, the personnel information comprising the personnel position and the personnel entry and exit state;
and the personnel counting module is used for judging whether the personnel position is located at a preset counting position, and if so, triggering counting so as to increment the entry or exit count according to the personnel state.
Preferably: the personnel information detection module comprises a Yolov4-tiny detection submodule and a personnel information extraction submodule.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the method and the system for counting the person entering and exiting, the head of the person is detected in real time by using the target detection model without depending on the tracking model, and therefore the person entering and exiting information is classified in real time. And setting an electronic fence for counting entering personnel and an electronic fence for counting leaving personnel, and respectively counting the entering personnel and the leaving personnel by utilizing the mode that the personnel break into the electronic fence. The method can realize real-time statistics on the embedded equipment, can cope with the scenes that a large number of people go in and out, and has almost no reduction in the statistical speed. The method can cope with scenes with complex backgrounds, and is not easy to misjudge and miss.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart of a statistical method for people entering and exiting according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a personnel entry and exit statistical system according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived from the embodiments of the present invention by a person skilled in the art, are within the scope of the present invention.
Referring to fig. 1, a statistical method for people entering and exiting provided by an embodiment of the present invention, as shown in fig. 1, may include:
s101, receiving a video frame to be detected of a target area acquired by image acquisition equipment; the image acquisition equipment can adopt a commonly used monitoring camera, and the image acquisition range can be set as required for acquiring images in a target range. For example, a corridor connecting a doorway of a building with an indoor space may be used as the target detection range.
S102, acquiring personnel information in the video frame to be detected by adopting a target detection model, wherein the personnel information comprises personnel positions and personnel in-out states;
S103, judging whether the personnel position is at a preset counting position, and if so, triggering counting so as to increment the entry or exit count according to the personnel state.
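Purely as an illustration of steps S101 to S103, the following Python sketch shows one way the per-frame flow could be wired together. The Person type, the fence rectangles and all names here are assumptions made for the example, not structures named by this disclosure.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Fence = Tuple[float, float, float, float]  # counting region as an (x1, y1, x2, y2) rectangle

    @dataclass
    class Person:
        cx: float        # person position: detection-box centre, x
        cy: float        # person position: detection-box centre, y
        entering: bool   # True = front face seen (entering), False = back of head (leaving)

    def inside(fence: Fence, cx: float, cy: float) -> bool:
        """S103 helper: is the person position inside the counting region?"""
        x1, y1, x2, y2 = fence
        return x1 <= cx <= x2 and y1 <= cy <= y2

    def count_frame(people: List[Person], fence_in: Fence, fence_out: Fence,
                    counts: Dict[str, int]) -> Dict[str, int]:
        """Update the entry/exit counts from one frame's detections (S102/S103)."""
        for p in people:
            if p.entering and inside(fence_in, p.cx, p.cy):
                counts["in"] += 1
            elif not p.entering and inside(fence_out, p.cx, p.cy):
                counts["out"] += 1
        return counts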
A personnel entry and exit statistical algorithm must count entering and leaving persons, so correctly distinguishing the two and counting them is essential. Because the computational bottleneck of existing entry and exit statistical algorithms is the tracking algorithm, the method provided by this embodiment does not use a tracking algorithm to distinguish entering persons from leaving persons; instead, the target detection model directly provides the entering/leaving information, after which the persons are counted accordingly.
In practical applications, the target detection model may take various forms; for example, in one implementation, the embodiments of the present application may provide that the target detection model comprises a yolov4-tiny target detection model and the TensorRT inference framework.
The detection object of the yolov4-tiny target detection model comprises a human head; the personnel position acquisition method comprises the following steps:
acquiring the position information of a detection frame corresponding to the head of a person in the video frame to be detected;
and determining the position of the person according to the position information of the detection frame.
The yolov4-tiny target detection model generates a rectangular detection frame around each detected head, and the position information of that detection frame is used as the position information of the person, so that the person's location can be judged.
Specifically, the position information of the detection frame includes the horizontal and vertical coordinates of the top-left vertex and of the bottom-right vertex of the detection frame. The method and principle for determining the detection frame position information are described in detail later.
By acquiring the position information of the detection frame generated for a person's head, the person's current position can be determined, and it can subsequently be judged whether that position should trigger an update of the entry and exit counts. To judge a person's current entry/exit state, the embodiment of the application further provides that the camera of the image acquisition device directly faces the entrance/exit from inside toward outside; the personnel entry and exit state acquisition method comprises the following steps:
acquiring detection frame category information corresponding to the heads of the persons in the video frame to be detected, wherein the detection frame category information comprises front face and back of head;
and determining the personnel entry and exit state according to the detection frame category information.
By analyzing the head image inside the detection frame, it can be determined whether the person's front face or the back of the head currently faces the camera; combined with the mounting direction of the camera, this determines whether the person is in the entering state or the leaving state. Specifically, the determining of the personnel entry and exit state according to the detection frame category information includes:
when the detection frame category information is front face, determining that the personnel entry and exit state is entering;
and when the detection frame category information is back of head, determining that the personnel entry and exit state is leaving.
In order to prevent over-counting, embodiments of the present application may further provide that the counting positions include an entering electronic fence and a leaving electronic fence.
Specifically, the incrementing of the entry and exit statistics according to the personnel state includes:
if the personnel entry and exit state is entering and the personnel position is located in the entering electronic fence, adding 1 to the entering-person count;
and if the personnel entry and exit state is leaving and the personnel position is located in the leaving electronic fence, adding 1 to the leaving-person count.
In short, the personnel entry and exit statistical method provided by the embodiment of the application does not depend on a tracking model; instead, a target detection model detects human heads in real time, so that entering and leaving persons are classified in real time. An electronic fence for counting entering persons and an electronic fence for counting leaving persons are set, and the two groups are counted separately as persons break into the corresponding fence. The method achieves real-time statistics on an embedded device (Jetson Nano), copes with high-traffic scenes with almost no drop in counting speed, handles complex backgrounds, and is not prone to false or missed counts.
Referring to fig. 2, corresponding to the personnel entry and exit statistical method provided by the embodiment of the present application, an embodiment of the present application further provides a personnel entry and exit statistical system, which may specifically include:
the video frame acquiring module 201 is configured to receive a to-be-detected video frame of a target area acquired by an image acquiring device;
the personnel information detection module 202 is configured to acquire personnel information in the video frame to be detected using a target detection model, where the personnel information includes the personnel position and the personnel entry and exit state;
and the personnel counting module 203 is configured to judge whether the personnel position is located at a preset counting position, and if so, to trigger counting so as to increment the entry or exit count according to the personnel state.
Specifically, the personnel information detection module comprises a Yolov4-tiny detection submodule and a personnel information extraction submodule.
The following describes in detail a statistical method and system for people entering and exiting provided by the embodiments of the present application.
In the method provided by the embodiment of the application, a video frame is received from outside and passed to the personnel information detection module, where a target detection algorithm processes the frame to obtain the personnel information it contains, including each person's position and entering/leaving information. The personnel information is then sent to the personnel counting module. In that module, to avoid counting a person multiple times, counting is triggered only when the person's position meets a specific requirement; the entry or exit count is incremented according to the entering/leaving information, and the statistical result is output in real time.
The modules will be described in detail below.
(1) Personnel information detection module
In the personnel information detection module, a video frame to be detected is input from outside and processed to obtain the required personnel information. Specifically, the module comprises a Yolov4-tiny detection submodule and a personnel information extraction submodule. In the Yolov4-tiny detection submodule, the video frame is passed through a specially trained Yolov4-tiny detection model to obtain detection results; in the personnel information extraction submodule, the detection results are processed to obtain the required personnel information.
Two sub-modules will be described separately below.
In the Yolov4-tiny detection submodule, in order to obtain the fastest speed and a good detection effect on an embedded device (a Jetson Nano is selected here), the method provided by the application selects a Yolov4-tiny target detection model and accelerates it with the TensorRT inference framework. In order for the yolov4-tiny target detection model to provide person position and entry/exit information, the model needs to be retrained.
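As a side note before the training details, the sketch below shows one way the accelerated model might be loaded with the TensorRT Python API on the Jetson Nano; the engine file name is an assumption, and the per-frame buffer handling is only indicated in a comment.

    import tensorrt as trt  # NVIDIA TensorRT Python bindings

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    def load_engine(engine_path: str):
        """Deserialize a pre-built yolov4-tiny TensorRT engine from disk."""
        with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())

    engine = load_engine("yolov4-tiny.engine")   # assumed file name for the serialized engine
    context = engine.create_execution_context()  # reused for per-frame inference
    # Allocating input/output device buffers and calling context.execute_v2(...)
    # for each frame (e.g. via pycuda) is omitted here for brevity.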
The training process is as follows:
a. Training data: about 700 pictures taken by cameras at entrances/exits or in corridors. Each picture must show persons whose heads are visible and who are clearly either entering or leaving; pictures containing side faces, or views in which the entering/leaving direction is hard to judge, should be avoided as far as possible.
b. Data annotation: the label categories are the front face (front) and the back of the head (back) of the human head. When annotating, avoid labeling heads that are too far away or too small in pixels; for example, in a 1280 × 720 picture, the minimum label size should not be less than 50 × 50.
c. Training: train with darknet, setting the total number of iterations to 10000, decaying the learning rate by a factor of 0.1 at iterations 8000 and 9000, and setting the network input size to 640 to improve the training effect on small targets as much as possible, as in the cfg excerpt below.
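As a hedged illustration of step c, the corresponding darknet cfg settings could look like the excerpt below; only max_batches, steps, scales, width and height follow from the text, and everything else in the file is left at darknet defaults.

    [net]
    width=640              # network input size, per step c
    height=640
    max_batches=10000      # total training iterations
    policy=steps
    steps=8000,9000        # iterations at which the learning rate is decayed
    scales=.1,.1           # multiply the learning rate by 0.1 at each step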
The yolov4-tiny detection model obtained through this training process reaches a detection accuracy above 99%. The detection model outputs, for each person in the picture frame, detection box information (x1, y1, x2, y2) and category information id, where (x1, y1) are the horizontal and vertical coordinates of the top-left vertex of the detection box, (x2, y2) are the horizontal and vertical coordinates of the bottom-right vertex, and id is the category (front face front or back of head back).
In summary, the input and output of the Yolov4-tiny detection sub-module are shown in table 1.
TABLE 1 Yolov4-tiny detection submodule input/output table
Input: video frame to be detected
Output: detection box position information (x1, y1, x2, y2) and category information id for each detected head
In the personnel information extraction submodule, the required personnel information is extracted from the detection results output by the Yolov4-tiny detection submodule. The personnel information here includes personnel position information and personnel entry/exit information.
The specific extraction steps are as follows:
a. The personnel position information is extracted from the detection frame position information. Because the detection frame covers a head, which is relatively small, the midpoint of the detection frame is used to represent the head position and hence the person's position. The conversion formula from the detection frame position information (x1, y1, x2, y2) to the personnel position information (centerx, centery) is therefore:
centerx=(x1+x2)/2
centery=(y1+y2)/2
b. The personnel category information is extracted from the detection frame category information, which is either front face front or back of head back. Normally, the camera faces the entrance/exit direction from inside toward outside, so the head of an entering person shows the front face and the head of a leaving person shows the back of the head.
Therefore, the application stipulates that when the detection frame category information is front face front, the person is entering; and when it is back of head back, the person is leaving. A minimal sketch of this extraction step follows.
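In the following sketch, the numeric class ids 0 and 1 for the front/back categories are an assumed encoding, since the disclosure only names the two categories.

    FRONT, BACK = 0, 1  # assumed numeric ids for the front / back label categories

    def extract_person(x1, y1, x2, y2, cls_id):
        """Detection box -> personnel position (centerx, centery) and entering flag."""
        centerx = (x1 + x2) / 2       # midpoint of the detection box stands in
        centery = (y1 + y2) / 2       # for the head, and hence the person
        entering = (cls_id == FRONT)  # front face -> entering, back of head -> leaving
        return (centerx, centery), entering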
In summary, the input and output of the person information extraction sub-module are shown in table 2.
TABLE 2 input/output table for person information extraction submodule
Input: detection box position information (x1, y1, x2, y2) and category information id
Output: personnel position (centerx, centery) and personnel entry/exit information
In summary, in the personnel information detection module, the picture frame is processed by the Yolov4-tiny detection submodule and the personnel information extraction submodule to obtain the required personnel information, including the personnel position information and the personnel entry/exit information.
(2) Personnel counting statistical module
In the personnel counting statistical module, personnel information is input from outside, and the required statistical result is obtained after a series of judgments and processing.
The method comprises the following specific steps:
a. Before the judgment and processing, an electronic fence In and an electronic fence Out are created for counting entering persons and leaving persons respectively. When a person whose entry/exit information is entering (detection category front) is located in fence In, the entering-person count is incremented by 1; when a person whose entry/exit information is leaving (detection category back) is located in fence Out, the leaving-person count is incremented by 1. Care must be taken in setting the width and height of the fences to prevent over-counting.
b. Judge from the entry/exit information in the externally input personnel information whether the current person is entering or leaving. If entering, judge from the position information whether the person is in fence In; if so, add 1 to the entering-person count, otherwise leave the counts unchanged. If leaving, judge from the position information whether the person is in fence Out; if so, add 1 to the leaving-person count, otherwise leave the counts unchanged. The input and output of the personnel counting statistical module are shown in table 3.
TABLE 3 Personnel counting statistical module input/output table
Input: personnel position (centerx, centery) and personnel entry/exit information
Output: entering-person count and leaving-person count
Personnel information passes through this module to generate the statistical results. An illustrative fence layout is sketched below.
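As an illustration of point a above, the two fences could be laid out as narrow strips on either side of the doorway line. All geometry here (frame size, doorway row, strip height) is assumed for the example; the disclosure only requires the fences to be sized so that a person is not counted repeatedly.

    FRAME_W, FRAME_H = 1280, 720   # assumed camera resolution
    DOOR_Y = FRAME_H // 2          # assumed image row of the doorway line

    # Narrow strips: at a typical walking speed and frame rate, a head centre
    # falls inside each strip for roughly one frame, limiting double counting.
    FENCE_IN = (0, DOOR_Y, FRAME_W, DOOR_Y + 60)   # (x1, y1, x2, y2)
    FENCE_OUT = (0, DOOR_Y - 60, FRAME_W, DOOR_Y)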
Thus, after the real-time high-traffic personnel entry and exit statistical algorithm receives a picture frame and processes it through the personnel information detection module and the personnel counting statistical module, the entry and exit statistics can be output in real time.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A people entry and exit statistical method, the method comprising:
receiving a video frame to be detected of a target area acquired by image acquisition equipment;
acquiring personnel information in the video frame to be detected by adopting a target detection model, wherein the personnel information comprises personnel positions and personnel in-out states;
and judging whether the personnel position is located at a preset counting position, and if so, triggering counting so as to increment the entry or exit statistics according to the personnel state.
2. The people entry and exit statistical method of claim 1, characterized in that the target detection model comprises a yolov4-tiny target detection model and the TensorRT inference framework.
3. The statistical method for people entry and exit according to claim 2, wherein the detection object of the yolov4-tiny target detection model comprises a head of a person; the personnel position acquisition method comprises the following steps:
acquiring the position information of a detection frame corresponding to the head of a person in the video frame to be detected;
and determining the position of the person according to the position information of the detection frame.
4. The people entry and exit statistical method of claim 3, wherein the detection box position information comprises the horizontal and vertical coordinates of the top left corner vertex of the detection box and the horizontal and vertical coordinates of the bottom right corner vertex of the detection box.
5. The statistical method for people entering and exiting according to claim 2, wherein the camera of the image acquisition device faces the entrance and exit direction from inside to outside; the personnel entry and exit state acquisition method comprises the following steps:
acquiring detection frame category information corresponding to the heads of the persons in the video frame to be detected, wherein the detection frame category information comprises front face and back of head;
and determining the personnel entry and exit state according to the detection frame category information.
6. The people entry and exit statistical method according to claim 5, wherein the determining the people entry and exit state according to the detection box category information comprises:
when the detection frame category information is front face, determining that the personnel entry and exit state is entering;
and when the detection frame category information is back of head, determining that the personnel entry and exit state is leaving.
7. The people entry and exit statistical method according to claim 6, wherein said counting positions comprise an entering electronic fence and a leaving electronic fence.
8. The people entry and exit statistical method of claim 7, wherein said incrementing the entry and exit statistics according to the personnel state comprises:
if it is determined that the personnel entry and exit state is entering and the personnel position is located in the entering electronic fence, adding 1 to the entering-person count;
and if it is determined that the personnel entry and exit state is leaving and the personnel position is located in the leaving electronic fence, adding 1 to the leaving-person count.
9. A people entry and exit statistical system, the system comprising:
the video frame acquisition module is used for receiving a video frame to be detected of a target area acquired by the image acquisition equipment;
the personnel information detection module is used for acquiring personnel information in the video frame to be detected by adopting a target detection model, and the personnel information comprises personnel positions and personnel in-out states;
and the personnel counting module is used for judging whether the personnel position is located at a preset counting position, and if so, triggering counting so as to increment the entry or exit count according to the personnel state.
10. The people entry and exit statistical system according to claim 9, wherein the people information detection module comprises a Yolov4-tiny detection submodule and a people information extraction submodule.
CN202210802395.1A 2022-07-07 2022-07-07 Personnel access statistical method and system Pending CN115063728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210802395.1A CN115063728A (en) 2022-07-07 2022-07-07 Personnel access statistical method and system

Publications (1)

Publication Number Publication Date
CN115063728A true CN115063728A (en) 2022-09-16

Family

ID=83206058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210802395.1A Pending CN115063728A (en) 2022-07-07 2022-07-07 Personnel access statistical method and system

Country Status (1)

Country Link
CN (1) CN115063728A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090452A (en) * 2017-12-20 2018-05-29 贵阳宏益房地产开发有限公司 Personnel statistical method and device
CN110309717A (en) * 2019-05-23 2019-10-08 南京熊猫电子股份有限公司 A kind of pedestrian counting method based on deep neural network
CN110222637A (en) * 2019-06-04 2019-09-10 深圳市基鸿运科技有限公司 A kind of passenger flow statistical method and its system based on 3D rendering Head recognition
CN112149457A (en) * 2019-06-27 2020-12-29 西安光启未来技术研究院 People flow statistical method, device, server and computer readable storage medium
CN114092956A (en) * 2020-07-29 2022-02-25 顺丰科技有限公司 Store passenger flow statistical method and device, computer equipment and storage medium
CN112668525A (en) * 2020-12-31 2021-04-16 深圳云天励飞技术股份有限公司 People flow counting method and device, electronic equipment and storage medium
CN113762169A (en) * 2021-09-09 2021-12-07 北京市商汤科技开发有限公司 People flow statistical method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104123544B (en) Anomaly detection method and system based on video analysis
CN106412501B (en) A kind of the construction safety behavior intelligent monitor system and its monitoring method of video
CN108596028B (en) Abnormal behavior detection algorithm based on video recording
CN103679215B (en) The video frequency monitoring method of the groupment behavior analysiss that view-based access control model big data drives
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
Gong et al. Local distinguishability aggrandizing network for human anomaly detection
CN112487891B (en) Visual intelligent dynamic identification model construction method applied to electric power operation site
CN111325051A (en) Face recognition method and device based on face image ROI selection
CN110458198B (en) Multi-resolution target identification method and device
CN107483894A (en) Judge to realize the high ferro station video monitoring system of passenger transportation management based on scene
CN107358163A (en) Visitor's line trace statistical method, electronic equipment and storage medium based on recognition of face
CN111914653A (en) Personnel marking method and device
CN110557628A (en) Method and device for detecting shielding of camera and electronic equipment
Byeon et al. A surveillance system using CNN for face recognition with object, human and face detection
CN111950507B (en) Data processing and model training method, device, equipment and medium
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
CN105678268B (en) Subway station scene pedestrian counting implementation method based on double-region learning
CN111144260A (en) Detection method, device and system of crossing gate
CN115063728A (en) Personnel access statistical method and system
CN116187634A (en) Intelligent queuing system and prediction method for same
CN113128414A (en) Personnel tracking method and device, computer readable storage medium and electronic equipment
CN108288261A (en) The screening technique and face recognition of facial photo
CN104504401B (en) A kind of target identification system based on more monitoring probes
Wan Research on Intelligent Video Analysis Technology in Smart Campus Security Scenario
Fauzan et al. Implementation of object detection method for intelligent surveillance systems at the faculty of engineering, Universitas Sebelas Maret (UNS) Surakarta

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination