CN110334569A - Passenger flow volume in-out identification method, device, equipment and storage medium - Google Patents

Passenger flow volume in-out identification method, device, equipment and storage medium

Info

Publication number
CN110334569A
Authority
CN
China
Prior art keywords
client
image
passenger flow volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910253943.8A
Other languages
Chinese (zh)
Other versions
CN110334569B (en)
Inventor
丁晓刚
陈潘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN XIAOZHOU TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN XIAOZHOU TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN XIAOZHOU TECHNOLOGY Co Ltd
Priority to CN201910253943.8A
Publication of CN110334569A
Application granted
Publication of CN110334569B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the technical field of image recognition, and in particular to a passenger flow in/out identification method, apparatus, device and storage medium. The passenger flow in/out identification method includes: S10: acquiring in real time an identification video recorded inside a shop; S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order; S30: detecting head features in the frame images with a head detection model to obtain head images; S40: obtaining a customer motion trajectory from the head images in several consecutive frame images; S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the in/out status into a passenger flow analysis database. The present invention has the effect of improving the accuracy of passenger flow analysis.

Description

Passenger flow volume in-out identification method, device, equipment and storage medium
Technical field
The present invention relates to the technical field of image recognition, and in particular to a passenger flow in/out identification method, apparatus, device and storage medium.
Background art
At present, passenger flow is important data for the management and decision-making of places such as supermarkets and shopping malls, and passenger flow statistics mostly rely on image recognition. In the prior art, a camera for passenger flow analysis is installed in the storefront, the head regions of customers passing in and out are identified, and the motion trajectories corresponding to those head regions are analyzed to obtain the passenger flow. However, customers who merely linger in the shop may also enter the field of view of the passenger flow analysis camera, which distorts the passenger flow analysis.
Summary of the invention
The object of the present invention is to provide a passenger flow in/out identification method, apparatus, device and storage medium that improve the accuracy of passenger flow analysis.
The first object of the present invention is achieved by the following technical solution:
A passenger flow in/out identification method, comprising:
S10: acquiring in real time an identification video recorded inside a shop;
S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
S30: detecting head features in the frame images with a head detection model to obtain head images;
S40: obtaining a customer motion trajectory from the head images in several consecutive frame images;
S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the in/out status into a passenger flow analysis database.
With the above technical solution, when analyzing passenger flow, a camera is installed above the shop entrance and films the customers entering the shop so that the passenger flow can be counted. When counting, the method starts from the identified head image and records the customer motion trajectory; if the customer motion trajectory satisfies a certain angle, it can be determined that the customer actually walked from the doorway into the shop rather than lingered near the doorway. In this way, people who neither enter nor leave the shop but merely linger at the entrance are excluded from the passenger flow, which improves the reliability and hence the accuracy of passenger flow analysis.
The present invention is further configured such that, before step S30, the passenger flow in/out identification method further comprises:
S301: acquiring a background picture of the identification video in the shop, and using the background picture as a comparison picture;
S302: acquiring several human head region pictures, extracting the feature values of the head regions from the human head region pictures respectively, and constructing feature vectors;
S303: training the comparison picture and the feature vectors by deep learning to obtain the head detection model.
With the above technical solution, before the customers entering the shop are identified and their head images obtained, the head detection model is first trained by deep learning, so that the server can identify and count the passenger flow.
The present invention is further configured such that step S30 comprises:
S31: successively calculating, in chronological order, the similarity between the frame images and the comparison picture, and selecting the frame images whose similarity is less than a preset threshold as identification images;
S32: detecting the identification images with the head detection model and, if the head feature is detected in an identification image, using the identification image as the head image.
With the above technical solution, when identifying head images, the similarity between adjacent frame images is calculated and the frame images whose similarity is less than the preset threshold are selected as identification images, so that frame images without motion change can be excluded. This reduces the number of images the server must process when identifying head images, improves recognition efficiency, reduces the number of stored pictures, and relieves the storage space of the server.
The present invention is further configured such that step S40 comprises:
S41: dividing the frame images into coordinates by pixel, and obtaining the coordinate point of the head image in each frame image;
S42: judging the distance between the coordinate points of the head images in two adjacent frames and, if the distance between the coordinate points is between 20 and 30 pixels, determining that the head images in the two adjacent frames belong to the same person;
S43: obtaining the head images determined to belong to the same person in several consecutive frames, placing the coordinate points corresponding to those head images into the coordinates, and connecting them in chronological order into the customer motion trajectory.
With the above technical solution, whether the head images in two adjacent frame images belong to the same customer can be determined from the distance between their coordinate points, which makes it convenient to obtain that customer's motion trajectory and to count the passenger flow.
The present invention is further configured such that step S50 comprises:
S51: setting a center line in the coordinates;
S52: if the beginning and the end of the customer motion trajectory are located on opposite sides of the center line, obtaining the angle between the customer motion trajectory and the center line;
S53: if the angle satisfies the angular range, obtaining the corresponding customer motion direction from the beginning and the end of the customer motion trajectory.
With the above technical solution, a center line is set and the angle between the center line and any customer motion trajectory that crosses it is judged, so that customers who merely linger at the shop entrance are not counted in the passenger flow, thereby improving the accuracy of passenger flow statistics.
The second object of the present invention is achieved by the following technical solution:
A passenger flow in/out identification apparatus, comprising:
a video acquisition module, configured to acquire in real time an identification video recorded inside a shop;
a frame-splitting module, configured to perform frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
a feature recognition module, configured to detect head features in the frame images with a head detection model to obtain head images;
a trajectory generation module, configured to obtain a customer motion trajectory from the head images in several consecutive frame images;
an analysis and storage module, configured to analyze the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtain the customer motion direction of the customer motion trajectory, determine the customer's in/out status from the customer motion direction, and send and write the in/out status into a passenger flow analysis database.
With the above technical solution, when analyzing passenger flow, a camera is installed above the shop entrance and films the customers entering the shop so that the passenger flow can be counted. When counting, the apparatus starts from the identified head image and records the customer motion trajectory; if the customer motion trajectory satisfies a certain angle, it can be determined that the customer actually walked from the doorway into the shop rather than lingered near the doorway. In this way, people who neither enter nor leave the shop but merely linger at the entrance are excluded from the passenger flow, which improves the reliability and hence the accuracy of passenger flow analysis.
The third object of the present invention is achieved by the following technical solution:
A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above passenger flow in/out identification method when executing the computer program.
The fourth object of the present invention is achieved by the following technical solution:
A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the above passenger flow in/out identification method.
In conclusion, the advantageous effects of the present invention are as follows:
When analyzing passenger flow, a camera is installed above the shop entrance and films the customers entering the shop so that the passenger flow can be counted. When counting, the method starts from the identified head image and records the customer motion trajectory; if the customer motion trajectory satisfies a certain angle, it can be determined that the customer actually walked from the doorway into the shop rather than lingered near the doorway. In this way, people who neither enter nor leave the shop but merely linger at the entrance are excluded from the passenger flow, which improves the reliability and hence the accuracy of passenger flow analysis.
Brief description of the drawings
Fig. 1 is a flowchart of a passenger flow in/out identification method in one embodiment of the present invention;
Fig. 2 is another implementation flowchart of the passenger flow in/out identification method in one embodiment of the present invention;
Fig. 3 is an implementation flowchart of step S30 of the passenger flow in/out identification method in one embodiment of the present invention;
Fig. 4 is an implementation flowchart of step S40 of the passenger flow in/out identification method in one embodiment of the present invention;
Fig. 5 is an implementation flowchart of step S50 of the passenger flow in/out identification method in one embodiment of the present invention;
Fig. 6 is a functional block diagram of a passenger flow in/out identification apparatus in one embodiment of the present invention;
Fig. 7 is a schematic diagram of a computer device in one embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
Embodiment one:
In one embodiment, as shown in Fig. 1, the present invention discloses a passenger flow in/out identification method, which specifically includes the following steps:
S10: acquiring in real time an identification video recorded inside a shop.
In this embodiment, the identification video refers to a video recorded inside a place such as a shop or a supermarket, from which the customers inside need to be identified.
Specifically, a recording device such as a camera is installed above the shop doorway and records the customers passing through the doorway, thereby obtaining the identification video.
S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order.
In this embodiment, the frame images refer to the individual frames of the identification video.
Specifically, following the playback order of the identification video, each frame of the identification video is extracted with an existing frame-splitting method and used as a frame image.
S30: detecting head features in the frame images with the head detection model to obtain head images.
In this embodiment, the head detection model refers to a pre-trained model that can identify customer head regions in the frame images of the identification video and then determine individual customers from those head regions.
Specifically, the head detection model is trained in advance and used to identify head features in the frame images; if a head feature is identified in a frame image, the region in which the head feature is identified is used as the head image.
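A hedged sketch of this detection step is given below. The patent's own head detection model is the one trained in steps S301-S303; the stock OpenCV frontal-face cascade used here is only a stand-in so the example runs, and the detection parameters are assumptions.

```python
import cv2

# Stand-in only: the patent trains its own head detection model (S301-S303);
# a stock OpenCV frontal-face cascade is used here purely so the sketch runs.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_head_images(frames):
    """Return (frame index, cropped head image) pairs for every frame image in
    which a head-like feature is detected."""
    head_images = []
    for index, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in _detector.detectMultiScale(gray, scaleFactor=1.1,
                                                       minNeighbors=5):
            head_images.append((index, frame[y:y + h, x:x + w]))
    return head_images
```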
S40: obtaining a customer motion trajectory from the head images in several consecutive frame images.
In this embodiment, the customer motion trajectory refers to the path along which a customer moves in the identification video.
Specifically, since the camera in the shop is fixed, the shop scene it captures is also fixed. Therefore, with the background of the shop scene as a base map, the head images of the same customer in the frame images of several frames, for example 50 frames, are written into the base map in chronological order, and all the head images are then connected into the customer motion trajectory.
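For illustration only, the base-map idea can be sketched with OpenCV as below; representing each head image by an integer (x, y) centre point and the drawing colours are assumptions made here, not details from the patent.

```python
import cv2

def draw_trajectory_on_base_map(background, head_centers):
    """Overlay the chronologically ordered head-centre points of one customer on
    the fixed shop background (the base map) and connect them into a trajectory.

    `head_centers` is assumed to be a list of integer (x, y) pixel tuples.
    """
    base_map = background.copy()
    for earlier, later in zip(head_centers, head_centers[1:]):
        cv2.line(base_map, earlier, later, color=(0, 255, 0), thickness=2)
    for point in head_centers:
        cv2.circle(base_map, point, radius=3, color=(0, 0, 255), thickness=-1)
    return base_map
```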
S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the in/out status into a passenger flow analysis database.
In this embodiment, the customer motion direction refers to the direction in which the customer moves in the shop along the customer motion trajectory.
Specifically, a reference line is preset and the customer motion trajectory is compared with it. If an angle is formed between the customer motion trajectory and the reference line, that is, if the customer motion trajectory and the reference line intersect in the identification video, the angle is calculated, taking the acute or right angle. If the angle is between 45° and 90°, the customer motion direction of the customer motion trajectory is obtained and the customer's in/out status is determined from it, with the doorway as the reference object: if the customer motion direction moves gradually away from the doorway, the customer is determined to be entering the shop; if the customer motion direction moves gradually closer to the doorway, the customer is determined to be leaving.
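To make the angle rule concrete, here is a minimal sketch that approximates the customer motion trajectory by the straight segment from its first to its last coordinate point; the function name and the vector-based angle computation are illustrative assumptions, not part of the disclosure.

```python
import math

def crossing_angle(track_start, track_end, line_start, line_end):
    """Acute (or right) angle, in degrees, between the customer motion trajectory
    (approximated by its start-to-end chord) and the reference line."""
    tx, ty = track_end[0] - track_start[0], track_end[1] - track_start[1]
    lx, ly = line_end[0] - line_start[0], line_end[1] - line_start[1]
    norm = math.hypot(tx, ty) * math.hypot(lx, ly)
    if norm == 0:
        return 0.0
    cos_angle = abs(tx * lx + ty * ly) / norm  # abs() keeps the result in [0°, 90°]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
```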
Further, the customer's in/out status is sent and written into the passenger flow analysis database.
In this embodiment, when analyzing the passenger flow, a camera is installed above the shop entrance and films the customers entering the shop so that the passenger flow can be counted. When counting, the method starts from the identified head image and records the customer motion trajectory; if the customer motion trajectory satisfies a certain angle, it can be determined that the customer actually walked from the doorway into the shop rather than lingered near the doorway. In this way, people who neither enter nor leave the shop but merely linger at the entrance are excluded from the passenger flow, which improves the reliability and hence the accuracy of passenger flow analysis.
In one embodiment, as shown in Fig. 2, before step S30, the passenger flow in/out identification method further includes:
S301: acquiring a background picture of the identification video in the shop, and using the background picture as a comparison picture.
Specifically, after the camera in the shop is installed, a background picture of the shop taken when no one is present is obtained and used as the comparison picture.
S302: acquiring several human head region pictures, extracting the feature values of the head regions from the human head region pictures respectively, and constructing feature vectors.
Specifically, pictures containing human head regions can be obtained from different sources, and the human head regions can be identified with existing edge detection techniques. Edge detection is one of the important foundations of digital image processing, pattern recognition and computer vision, and is sufficient for implementing this embodiment.
Further, after the human head region pictures have been identified, an existing convolutional neural network (Convolutional Neural Network, hereinafter referred to as CNN) can be used to extract the feature values of the head regions and construct the feature vectors.
S303: training the comparison picture and the feature vectors by deep learning to obtain the head detection model.
Specifically, the feature vectors corresponding to all the obtained head regions are placed into the comparison picture, and deep learning is performed with a CNN-LSTM model to obtain the head detection model, so that the head detection model can identify head images against the comparison picture.
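The patent describes the training only at this high level (comparison picture plus head-region feature vectors, deep learning with a CNN-LSTM model). Purely as an illustration of one way the training data could be assembled, and not as the patented procedure, the sketch below pastes head-region crops onto the background picture to form labelled samples; the library choice, the sample counts, the pasting scheme, and the assumption that all images are same-channel NumPy arrays are all assumptions made here.

```python
import random
import numpy as np

def build_training_samples(comparison_picture, head_crops, samples_per_crop=10):
    """Illustrative data preparation only: paste head-region crops onto copies of
    the fixed background (the comparison picture) at random positions to obtain
    positive samples, and keep plain background copies as negatives."""
    bg_h, bg_w = comparison_picture.shape[:2]
    positives, negatives = [], []
    for crop in head_crops:
        h, w = crop.shape[:2]
        if h > bg_h or w > bg_w:
            continue  # skip crops larger than the background
        for _ in range(samples_per_crop):
            y = random.randint(0, bg_h - h)
            x = random.randint(0, bg_w - w)
            sample = comparison_picture.copy()
            sample[y:y + h, x:x + w] = crop              # positive: head on background
            positives.append(sample)
            negatives.append(comparison_picture.copy())  # negative: background only
    return np.array(positives), np.array(negatives)
```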
In one embodiment, as shown in Fig. 3, step S30, namely detecting head features in the frame images with the head detection model to obtain head images, specifically includes the following steps:
S31: successively calculating, in chronological order, the similarity between the frame images and the comparison picture, and selecting the frame images whose similarity is less than a preset threshold as identification images.
Specifically, if there is no motion change between two adjacent frame images, it means that no customer passes through in those images or that the customer is stationary.
Further, the two adjacent frame images are converted to grayscale, the difference value of the grayscale results is taken, and this difference value is used as the similarity. If the similarity is less than a preset threshold, for example 0.05, there is a motion change between the two adjacent frame images, and the frame images with the motion change are used as identification images.
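A minimal sketch of the grayscale-difference computation is given below, assuming OpenCV and NumPy with pixel values normalised to [0, 1]; the 0.05 threshold comes from this embodiment, while the normalisation and the mean-absolute-difference formula are assumptions, and the selection rule simply mirrors the wording above.

```python
import cv2
import numpy as np

def frame_similarity(frame_a, frame_b):
    """Grayscale both frame images and return the mean absolute difference of the
    normalised pixel values, which this embodiment uses as the 'similarity'."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return float(np.mean(np.abs(gray_a - gray_b)))

def select_identification_images(frames, threshold=0.05):
    """Keep the frame images whose 'similarity' to the preceding frame image is
    below the preset threshold, mirroring the selection rule stated above."""
    return [current for previous, current in zip(frames, frames[1:])
            if frame_similarity(previous, current) < threshold]
```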
S32: detecting the identification images with the head detection model and, if the head feature is detected in an identification image, using the identification image as the head image.
Specifically, in the manner of step S30, the identification images are detected with the head detection model; if the head feature is detected in an identification image, that identification image is used as the head image.
In one embodiment, as shown in Fig. 4, step S40, namely obtaining the customer motion trajectory from the head images in several consecutive frame images, specifically includes the following steps:
S41: dividing the frame images into coordinates by pixel, and obtaining the coordinate point of the head image in each frame image.
Specifically, a coordinate system is established in each frame image with the lower-left corner of the frame image as the origin, so that the entire frame image lies in the first quadrant of the coordinate system. Further, the x-axis and y-axis of the coordinate system are divided into coordinates in units of 10 pixels.
Further, the coordinate point of the head image in each frame image is obtained.
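The coordinate construction can be illustrated as follows; taking the centre of a head bounding box as "the coordinate point where the head image is", and the (x, y, w, h) box format, are assumptions made for illustration. The point is snapped to the 10-pixel grid but kept in pixel units so that the distance test of step S42 can be applied directly.

```python
def head_coordinate_point(box, frame_height, cell_size=10):
    """Map a head bounding box (x, y, w, h), with y measured from the top of the
    image, to a coordinate point in a system whose origin is the lower-left corner
    of the frame image, snapped to the 10-pixel grid."""
    x, y, w, h = box
    center_x = x + w / 2.0
    center_y = frame_height - (y + h / 2.0)  # flip so the origin is bottom-left
    return (int(round(center_x / cell_size) * cell_size),
            int(round(center_y / cell_size) * cell_size))
```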
S42: judging the distance between the coordinate points of the head images in two adjacent frames and, if the distance between the coordinate points is between 20 and 30 pixels, determining that the head images in the two adjacent frames belong to the same person.
Specifically, the distance between the coordinate points of the head images in two adjacent frames is judged. For example, suppose head image A and head image B both appear in the i-th frame image; if the distance between head image A in the i-th frame and head image A' in the (i+1)-th frame is within 25 pixels, head image A and head image A' are determined to be the same customer.
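A small sketch of the distance test follows; pairing every head point in one frame image against every head point in the next, and measuring the distance in pixels, are assumptions made for illustration.

```python
import math

def match_same_person(points_previous, points_current, min_pixels=20, max_pixels=30):
    """Pair head-image coordinate points from two adjacent frame images: a pair
    whose distance falls within the 20-30 pixel window of step S42 is treated as
    the same customer (the worked example above uses 'within 25 pixels')."""
    matches = []
    for i, previous_point in enumerate(points_previous):
        for j, current_point in enumerate(points_current):
            if min_pixels <= math.dist(previous_point, current_point) <= max_pixels:
                matches.append((i, j))
    return matches
```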
S43: obtaining the head images determined to belong to the same person in several consecutive frames, placing the coordinate points corresponding to those head images into the coordinates, and connecting them in chronological order into the customer motion trajectory.
Specifically, the head images of the same person are obtained from the frame images of several consecutive frames. To ensure accuracy, they can be collected from the moment the head image first appears in the identification video until it disappears from the identification video.
Further, the head images determined to belong to the same person are written into the coordinates in the order in which they were matched, each being written only once, and the coordinate points written earlier are connected to those written later, thereby obtaining the customer motion trajectory.
In one embodiment, as shown in Fig. 5, the customer motion trajectory is analyzed and, if the customer motion trajectory satisfies the preset angular range, the customer motion direction of the customer motion trajectory is obtained, the customer's in/out status is determined from the customer motion direction, and the in/out status is sent and written into the passenger flow analysis database; this specifically includes the following steps:
S51: setting a center line in the coordinates.
Specifically, at a position about 10 pixels from the doorway, a reference line parallel to the doorway of the shop is set, and this reference line is used as the center line.
S52: if the beginning and the end of the customer motion trajectory are located on opposite sides of the center line, obtaining the angle between the customer motion trajectory and the center line.
Specifically, in the customer motion trajectory, the coordinate point of the first head image added to the coordinates is taken as the beginning of the customer motion trajectory, and the coordinate point of the last head image added is taken as the end of the customer motion trajectory. If the beginning and the end of the customer motion trajectory lie on opposite sides of the center line, the customer motion trajectory intersects the center line and forms an angle with it.
Further, the angle between the trajectory and the center line is obtained.
S53: if the angle satisfies the angular range, obtaining the corresponding customer motion direction from the beginning and the end of the customer motion trajectory.
Specifically, if the angle is between 45° and 90°, for example 87°, the customer is either entering or leaving the shop. If the beginning of the customer motion trajectory is closer to the doorway than its end, the customer motion direction is determined to be entering the shop; otherwise the customer is leaving.
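A minimal sketch of this direction decision is given below, under the same assumptions as before (trajectory approximated by its first and last coordinate points, a chosen doorway reference point); the 45°-90° band is taken from the text, everything else is illustrative.

```python
import math

def determine_in_out(track_start, track_end, doorway_point, angle,
                     low_deg=45.0, high_deg=90.0):
    """Apply the 45°-90° rule: if the crossing angle is in range, the customer is
    counted, and entering or leaving is decided by whether the trajectory ends
    farther from or closer to the doorway than it began."""
    if not (low_deg <= angle <= high_deg):
        return None  # trajectory treated as lingering near the doorway, not counted
    moved_away = (math.dist(track_end, doorway_point)
                  > math.dist(track_start, doorway_point))
    return "enter" if moved_away else "leave"
```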
It should be understood that the sequence numbers of the steps in the above embodiment do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment two:
In one embodiment, a passenger flow in/out identification apparatus is provided, which corresponds to the passenger flow in/out identification method in the above embodiment. As shown in Fig. 6, the passenger flow in/out identification apparatus includes a video acquisition module 10, a frame-splitting module 20, a feature recognition module 30, a trajectory generation module 40, and an analysis and storage module 50. The functional modules are described in detail as follows:
the video acquisition module 10 is configured to acquire in real time an identification video recorded inside a shop;
the frame-splitting module 20 is configured to perform frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
the feature recognition module 30 is configured to detect head features in the frame images with a head detection model to obtain head images;
the trajectory generation module 40 is configured to obtain a customer motion trajectory from the head images in several consecutive frame images;
the analysis and storage module 50 is configured to analyze the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtain the customer motion direction of the customer motion trajectory, determine the customer's in/out status from the customer motion direction, and send and write the in/out status into a passenger flow analysis database.
Preferably, the passenger flow in/out identification apparatus further includes:
a comparison picture acquisition module 301, configured to acquire a background picture of the identification video in the shop and use the background picture as a comparison picture;
a feature vector construction module 302, configured to acquire several human head region pictures, extract the feature values of the head regions from the human head region pictures respectively, and construct feature vectors;
a deep learning module 303, configured to train the comparison picture and the feature vectors by deep learning to obtain the head detection model.
Preferably, the feature recognition module 30 includes:
a similarity calculation submodule 31, configured to successively calculate, in chronological order, the similarity between the frame images and the comparison picture, and select the frame images whose similarity is less than a preset threshold as identification images;
a detection submodule 32, configured to detect the identification images with the head detection model and, if the head feature is detected in an identification image, use the identification image as the head image.
Preferably, the trajectory generation module 40 includes:
a coordinate establishment submodule 41, configured to divide the frame images into coordinates by pixel and obtain the coordinate point of the head image in each frame image;
a distance judgment submodule 42, configured to judge the distance between the coordinate points of the head images in two adjacent frames and, if the distance between the coordinate points is between 20 and 30 pixels, determine that the head images in the two adjacent frames belong to the same person;
a trajectory generation submodule 43, configured to obtain the head images determined to belong to the same person in several consecutive frames, place the coordinate points corresponding to those head images into the coordinates, and connect them in chronological order into the customer motion trajectory.
Preferably, the analysis and storage module 50 includes:
a center line setting submodule 51, configured to set a center line in the coordinates;
an angle acquisition submodule 52, configured to obtain the angle between the customer motion trajectory and the center line if the beginning and the end of the customer motion trajectory are located on opposite sides of the center line;
a motion direction acquisition submodule 53, configured to obtain the corresponding customer motion direction from the beginning and the end of the customer motion trajectory if the angle satisfies the angular range.
For the specific limitations of the passenger flow in/out identification apparatus, reference may be made to the limitations of the passenger flow in/out identification method above, which are not repeated here. Each module in the above passenger flow in/out identification apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can call them and execute the operations corresponding to the above modules.
Embodiment three:
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 7. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the customers' in/out status. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements a passenger flow in/out identification method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the following steps are implemented:
S10: acquiring in real time an identification video recorded inside a shop;
S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
S30: detecting head features in the frame images with a head detection model to obtain head images;
S40: obtaining a customer motion trajectory from the head images in several consecutive frame images;
S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the in/out status into a passenger flow analysis database.
Embodiment four:
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
S10: acquiring in real time an identification video recorded inside a shop;
S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
S30: detecting head features in the frame images with a head detection model to obtain head images;
S40: obtaining a customer motion trajectory from the head images in several consecutive frame images;
S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the in/out status into a passenger flow analysis database.
A person of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be clear to those skilled in the art that, for convenience and brevity of description, only the division into the above functional units and modules is illustrated. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.

Claims (10)

1. A passenger flow in/out identification method, characterized in that the passenger flow in/out identification method comprises:
S10: acquiring in real time an identification video recorded inside a shop;
S20: performing frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
S30: detecting head features in the frame images with a head detection model to obtain head images;
S40: obtaining a customer motion trajectory from the head images in several consecutive frame images;
S50: analyzing the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtaining the customer motion direction of the customer motion trajectory, determining the customer's in/out status from the customer motion direction, and sending and writing the customer's in/out status into a passenger flow analysis database.
2. The passenger flow in/out identification method according to claim 1, characterized in that, before step S30, the passenger flow in/out identification method further comprises:
S301: acquiring a background picture of the identification video in the shop, and using the background picture as a comparison picture;
S302: acquiring several human head region pictures, extracting the feature values of the head regions from the human head region pictures respectively, and constructing feature vectors;
S303: training the comparison picture and the feature vectors by deep learning to obtain the head detection model.
3. The passenger flow in/out identification method according to claim 1, characterized in that step S30 comprises:
S31: successively calculating, in chronological order, the similarity between the frame images and the comparison picture, and selecting the frame images whose similarity is less than a preset threshold as identification images;
S32: detecting the identification images with the head detection model and, if the head feature is detected in an identification image, using the identification image as the head image.
4. The passenger flow in/out identification method according to claim 1, characterized in that step S40 comprises:
S41: dividing the frame images into coordinates by pixel, and obtaining the coordinate point of the head image in each frame image;
S42: judging the distance between the coordinate points of the head images in two adjacent frames and, if the distance between the coordinate points is between 20 and 30 pixels, determining that the head images in the two adjacent frames belong to the same person;
S43: obtaining the head images determined to belong to the same person in several consecutive frames, placing the coordinate points corresponding to those head images into the coordinates, and connecting them in chronological order into the customer motion trajectory.
5. The passenger flow in/out identification method according to claim 1, characterized in that step S50 comprises:
S51: setting a center line in the coordinates;
S52: if the beginning and the end of the customer motion trajectory are located on opposite sides of the center line, obtaining the angle between the customer motion trajectory and the center line;
S53: if the angle satisfies the angular range, obtaining the corresponding customer motion direction from the beginning and the end of the customer motion trajectory.
6. A passenger flow in/out identification apparatus, characterized in that the passenger flow in/out identification apparatus comprises:
a video acquisition module, configured to acquire in real time an identification video recorded inside a shop;
a frame-splitting module, configured to perform frame-splitting processing on the acquired identification video to obtain frame images in chronological order;
a feature recognition module, configured to detect head features in the frame images with a head detection model to obtain head images;
a trajectory generation module, configured to obtain a customer motion trajectory from the head images in several consecutive frame images;
an analysis and storage module, configured to analyze the customer motion trajectory and, if the customer motion trajectory satisfies a preset angular range, obtain the customer motion direction of the customer motion trajectory, determine the customer's in/out status from the customer motion direction, and send and write the customer's in/out status into a passenger flow analysis database.
7. The passenger flow in/out identification apparatus according to claim 6, characterized in that the passenger flow in/out identification apparatus further comprises:
a comparison picture acquisition module, configured to acquire a background picture of the identification video in the shop and use the background picture as a comparison picture;
a feature vector construction module, configured to acquire several human head region pictures, extract the feature values of the head regions from the human head region pictures respectively, and construct feature vectors;
a deep learning module, configured to train the comparison picture and the feature vectors by deep learning to obtain the head detection model.
8. The passenger flow in/out identification apparatus according to claim 6, characterized in that the feature recognition module comprises:
a similarity calculation submodule, configured to successively calculate, in chronological order, the similarity between the frame images and the comparison picture, and select the frame images whose similarity is less than a preset threshold as identification images;
a detection submodule, configured to detect the identification images with the head detection model and, if the head feature is detected in an identification image, use the identification image as the head image.
9. A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the passenger flow in/out identification method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the passenger flow in/out identification method according to any one of claims 1 to 5.
CN201910253943.8A 2019-03-30 2019-03-30 Passenger flow volume in-out identification method, device, equipment and storage medium Active CN110334569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910253943.8A CN110334569B (en) 2019-03-30 2019-03-30 Passenger flow volume in-out identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910253943.8A CN110334569B (en) 2019-03-30 2019-03-30 Passenger flow volume in-out identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110334569A true CN110334569A (en) 2019-10-15
CN110334569B CN110334569B (en) 2022-09-16

Family

ID=68139237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910253943.8A Active CN110334569B (en) 2019-03-30 2019-03-30 Passenger flow volume in-out identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110334569B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110713082A (en) * 2019-10-22 2020-01-21 日立楼宇技术(广州)有限公司 Elevator control method, system, device and storage medium
CN111091057A (en) * 2019-11-15 2020-05-01 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111738134A (en) * 2020-06-18 2020-10-02 北京市商汤科技开发有限公司 Method, device, equipment and medium for acquiring passenger flow data
CN112232262A (en) * 2020-10-27 2021-01-15 上海依图网络科技有限公司 Passenger flow volume statistical method and device
CN112668525A (en) * 2020-12-31 2021-04-16 深圳云天励飞技术股份有限公司 People flow counting method and device, electronic equipment and storage medium
CN113095209A (en) * 2021-04-07 2021-07-09 深圳海智创科技有限公司 Crowd identification method and system for passenger flow and electronic equipment
CN113469075A (en) * 2021-07-07 2021-10-01 上海商汤智能科技有限公司 Method, device and equipment for determining traffic flow index and storage medium
CN114220141A (en) * 2021-11-23 2022-03-22 慧之安信息技术股份有限公司 Shop frequent visitor identification method based on face identification
WO2022057808A1 (en) * 2020-09-16 2022-03-24 青岛维感科技有限公司 Passenger flow monitoring method, apparatus and system, channel, and storage medium
CN114937241A (en) * 2022-06-01 2022-08-23 北京凯利时科技有限公司 Transition zone based passenger flow statistics method and system and computer program product
CN117690094A (en) * 2024-02-01 2024-03-12 通用电梯股份有限公司 Elevator passenger flow volume data statistics method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070122058A1 (en) * 2005-11-28 2007-05-31 Fujitsu Limited Method and apparatus for analyzing image, and computer product
US20110142282A1 (en) * 2009-12-14 2011-06-16 Indian Institute Of Technology Bombay Visual object tracking with scale and orientation adaptation
CN104637058A (en) * 2015-02-06 2015-05-20 武汉科技大学 Image information-based client flow volume identification statistic method
CN108932464A (en) * 2017-06-09 2018-12-04 北京猎户星空科技有限公司 Passenger flow volume statistical method and device
CN109272347A * 2018-08-16 2019-01-25 苏宁易购集团股份有限公司 Statistical analysis method and system for shop passenger flow

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Xiaobing (宋晓冰): "Contour-based three-dimensional face feature extraction and recognition", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110713082A (en) * 2019-10-22 2020-01-21 日立楼宇技术(广州)有限公司 Elevator control method, system, device and storage medium
CN111091057A (en) * 2019-11-15 2020-05-01 腾讯科技(深圳)有限公司 Information processing method and device and computer readable storage medium
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111738134A (en) * 2020-06-18 2020-10-02 北京市商汤科技开发有限公司 Method, device, equipment and medium for acquiring passenger flow data
WO2022057808A1 (en) * 2020-09-16 2022-03-24 青岛维感科技有限公司 Passenger flow monitoring method, apparatus and system, channel, and storage medium
CN112232262A (en) * 2020-10-27 2021-01-15 上海依图网络科技有限公司 Passenger flow volume statistical method and device
CN112668525B (en) * 2020-12-31 2024-05-07 深圳云天励飞技术股份有限公司 People flow counting method and device, electronic equipment and storage medium
CN112668525A (en) * 2020-12-31 2021-04-16 深圳云天励飞技术股份有限公司 People flow counting method and device, electronic equipment and storage medium
CN113095209A (en) * 2021-04-07 2021-07-09 深圳海智创科技有限公司 Crowd identification method and system for passenger flow and electronic equipment
CN113095209B (en) * 2021-04-07 2024-05-31 深圳海智创科技有限公司 Crowd identification method and system for passenger flow and electronic equipment
CN113469075A (en) * 2021-07-07 2021-10-01 上海商汤智能科技有限公司 Method, device and equipment for determining traffic flow index and storage medium
CN114220141A (en) * 2021-11-23 2022-03-22 慧之安信息技术股份有限公司 Shop frequent visitor identification method based on face identification
CN114937241B (en) * 2022-06-01 2024-03-26 北京凯利时科技有限公司 Transition zone-based passenger flow statistics method and system and computer program product
CN114937241A (en) * 2022-06-01 2022-08-23 北京凯利时科技有限公司 Transition zone based passenger flow statistics method and system and computer program product
CN117690094B (en) * 2024-02-01 2024-04-09 通用电梯股份有限公司 Elevator passenger flow volume data statistics method and device, electronic equipment and storage medium
CN117690094A (en) * 2024-02-01 2024-03-12 通用电梯股份有限公司 Elevator passenger flow volume data statistics method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110334569B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN110334569A Passenger flow volume in-out identification method, device, equipment and storage medium
JP6646124B2 (en) Method for obtaining bounding box corresponding to an object on an image using CNN (Convolutional Neural Network) including tracking network and apparatus using the same
CN110533925B (en) Vehicle illegal video processing method and device, computer equipment and storage medium
CN109190508B (en) Multi-camera data fusion method based on space coordinate system
US10140508B2 (en) Method and apparatus for annotating a video stream comprising a sequence of frames
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
CN110379050A (en) A kind of gate control method, apparatus and system
US20160321507A1 (en) Person counting method and device for same
US20120093362A1 (en) Device and method for detecting specific object in sequence of images and video camera device
CN105518744A (en) Pedestrian re-identification method and equipment
JP7246104B2 (en) License plate identification method based on text line identification
US20150302240A1 (en) Method and device for locating feature points on human face and storage medium
CN106169071A (en) A kind of Work attendance method based on dynamic human face and chest card recognition and system
CN107330371A (en) Acquisition methods, device and the storage device of the countenance of 3D facial models
CN104318263A (en) Real-time high-precision people stream counting method
CN111160172A (en) Parking space detection method and device, computer equipment and storage medium
CN103093212A (en) Method and device for clipping facial images based on face detection and face tracking
CN110298268A (en) Method, apparatus, storage medium and the camera of the single-lens two-way passenger flow of identification
CN112651291A (en) Video-based posture estimation method, device, medium and electronic equipment
CN110399835A (en) A kind of analysis method of personnel's residence time, apparatus and system
CN109297489A (en) A kind of indoor navigation method based on user characteristics, electronic equipment and storage medium
CN112017212B (en) Training and tracking method and system of face key point tracking model
CN110705366A (en) Real-time human head detection method based on stair scene
CN110334568B (en) Track generation and monitoring method, device, equipment and storage medium
CN116912880A (en) Bird recognition quality assessment method and system based on bird key point detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant