CN109784162B - Pedestrian behavior recognition and trajectory tracking method - Google Patents

Pedestrian behavior recognition and trajectory tracking method

Info

Publication number: CN109784162B (application CN201811516551.8A)
Authority: CN (China)
Prior art keywords: customer, track, pedestrian, data, analysis algorithm
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109784162A
Inventor: not disclosed (at the applicant's request)
Current Assignee: Chengdu Shuzhilian Technology Co Ltd
Original Assignee: Chengdu Shuzhilian Technology Co Ltd
Application filed by Chengdu Shuzhilian Technology Co Ltd
Priority to CN201811516551.8A

Classifications

  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian behavior recognition and trajectory tracking method comprising an image analysis algorithm and a trajectory analysis algorithm; collected video data is processed sequentially by the image analysis module and the trajectory analysis module. The image analysis algorithm covers data preprocessing, single-camera pedestrian trajectory tracking, and cross-camera association, and implements camera anomaly identification, face detection, pedestrian detection, and feature extraction. The trajectory analysis algorithm recognizes the activities of customers and salespeople by computationally analyzing the pedestrian trajectory data extracted from the acquired images. The method can monitor and recognize pedestrian behavior trajectories in real time, intelligently judge pedestrians' behavior states, centrally and conveniently manage the trajectories of customers and salespeople, and provide customers with better-optimized, more intelligent service.

Description

Pedestrian behavior recognition and trajectory tracking method
Technical Field
The invention belongs to the technical field of video monitoring, and particularly relates to a pedestrian behavior recognition and trajectory tracking method.
Background
A 4S store integrates automobile sales, maintenance, spare parts, and information services; raising its service level benefits brand publicity and thereby increases sales volume.
The monitoring systems currently deployed in 4S stores are usually limited to security surveillance: they only collect video of people in the store, cannot monitor and identify customers' behavior trajectories in real time, and cannot centrally manage the trajectories of customers and salespeople, which greatly reduces the store's overall management efficiency and degrades the customer's car-buying experience.
Disclosure of Invention
In order to solve these problems, the invention provides a pedestrian behavior recognition and trajectory tracking method that can monitor and identify a customer's behavior-trajectory information in real time, intelligently judge the customer's behavior state, centrally and conveniently manage the trajectories of customers and salespeople, and provide customers with a better-optimized, more intelligent shopping experience.
To achieve this purpose, the invention adopts the following technical scheme: a pedestrian behavior recognition and trajectory tracking method comprising an image analysis algorithm and a trajectory analysis algorithm, wherein collected video data is processed sequentially by the image analysis algorithm and the trajectory analysis algorithm;
the image analysis algorithm comprises data preprocessing, single-camera pedestrian trajectory tracking, and cross-camera association; the data it acquires and analyzes comprise camera anomaly identification, face detection, pedestrian detection, and feature extraction data;
the trajectory analysis algorithm identifies the activities of customers and salespeople by statistically analyzing these data.
Further, the data preprocessing of the image analysis algorithm comprises video-stream capture and frame splitting, pedestrian detection, and feature extraction: the surveillance video is acquired and split into frame images, and pedestrian detection and feature extraction are performed by an image recognition algorithm.
Further, the single-camera pedestrian trajectory tracking of the image analysis algorithm supports multi-target tracking and comprises the following steps:
performing feature extraction and matching on the pedestrian boxes detected in each frame image with a tracking algorithm to obtain tracks, pedestrian features being matched by similarity; an 8-dimensional vector describes the state of a track at a given moment, representing the center position, aspect ratio, and height of the pedestrian box in image coordinates together with the corresponding velocities;
predicting and updating tracks with a Kalman filter that adopts a constant-velocity motion model and a linear observation model; for each track, the time elapsed since its last successful match is recorded, and the track is terminated when this time exceeds a set threshold. At runtime the inputs are a customer detection-result box file, a customer feature file, and the JPG or PNG images cut from the video stream; after analysis, the algorithm outputs a customer detection-result box file for each camera with the corresponding customer temp_ID appended.
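The single-camera tracking machinery described above (an 8-dimensional track state of center position, aspect ratio, height, and their velocities; a constant-velocity Kalman filter with a linear observation model; age-based track termination) can be sketched in Python as follows. This is a minimal illustration rather than the patent's implementation: the noise covariances, initial uncertainty, and the `max_age` value are assumed parameters.

```python
import numpy as np

class TrackState:
    """8-D track state: (cx, cy, a, h) plus their velocities.
    Constant-velocity Kalman filter sketch; matrix values are assumptions."""

    def __init__(self, cx, cy, a, h, dt=1.0):
        self.x = np.array([cx, cy, a, h, 0., 0., 0., 0.], dtype=float)
        self.P = np.eye(8) * 10.0           # initial state covariance (assumed)
        self.F = np.eye(8)                  # constant-velocity motion model
        for i in range(4):
            self.F[i, i + 4] = dt           # position += velocity * dt
        self.H = np.eye(4, 8)               # linear observation: measure (cx, cy, a, h)
        self.Q = np.eye(8) * 1e-2           # process noise (assumed)
        self.R = np.eye(4) * 1e-1           # measurement noise (assumed)
        self.frames_since_match = 0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        self.frames_since_match += 1

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.H) @ self.P
        self.frames_since_match = 0

    def expired(self, max_age=30):
        # terminate the track once unmatched longer than the set threshold
        return self.frames_since_match > max_age
```

A tracker would call `predict()` once per frame, call `update()` whenever the track is matched to a detection, and drop the track once `expired()` returns true.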
Further, adjacent cameras are mounted so that every two of them share an overlap region, and the cross-camera association in the image analysis algorithm comprises the following steps:
performing extrinsic calibration on the overlap region;
establishing an association matrix from the extrinsic calibration result;
and judging, via the association-matrix mapping, whether the targets tracked by the two cameras are the same customer. At runtime the input is the single-camera tracking result file; after the algorithm runs, a customer detection-result box file is output for each camera with the corresponding real customer ID appended.
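The association step can be sketched as follows, under the assumption that the extrinsic calibration of each overlap region yields a homography mapping image points into a shared ground-plane frame. The patent does not specify the form of the association matrix, so the homography source, point format, and tolerance here are all illustrative.

```python
import numpy as np

def to_world(H, pt):
    """Map an image point into the shared ground-plane frame with the
    homography H obtained from calibration (e.g. via cv2.findHomography)."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]                 # perspective division

def same_customer(H_a, H_b, pt_a, pt_b, tol=0.5):
    """Decide whether targets seen by two overlapping cameras are the same
    customer: project both into the common frame and threshold the distance."""
    return bool(np.linalg.norm(to_world(H_a, pt_a) - to_world(H_b, pt_b)) < tol)
```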
Further, the trajectory analysis algorithm comprises the following steps:
data preprocessing, i.e., reading track data from the original pedestrian tracking tracks;
in-front-of-vehicle stay statistics: counting all track points at which a customer stays around a vehicle, and from these points counting the customer's behavior states, including viewing a car, boarding a car, and being followed by a salesperson;
group profiling: dividing customers into groups by analyzing the track data, and counting batch information and foot-traffic information;
and finally writing the statistical results into a database.
Further, the data preprocessing of the trajectory analysis algorithm preprocesses the original pedestrian tracking tracks and comprises the following steps:
reading the per-camera track data from the storage path of the original tracks, where each camera's track data is stored as a CSV-format text file, and holding each camera's data in memory as a data frame;
and extracting the mappings between each customer's temporary ID and real ID, between the temporary ID and the store-entry time, and between the temporary ID and the store-entry frame number, storing them in memory as key-value pairs. The data are extracted with the DataFrame structure of the pandas package in Python.
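A possible shape of this preprocessing is sketched below with pandas. The CSV column names (`temp_id`, `real_id`, `entry_time`, `entry_frame`) are assumptions for illustration, since the patent does not state the file schema.

```python
import pandas as pd

def load_tracks(csv_sources):
    """csv_sources: {camera_id: path or file-like object of that camera's CSV}.
    Returns the per-camera DataFrames and the key-value ID mappings described
    above (column names are assumed, not specified by the patent)."""
    frames = {cam: pd.read_csv(src) for cam, src in csv_sources.items()}
    mappings = {}
    for cam, df in frames.items():
        mappings[cam] = {
            "temp_to_real": dict(zip(df["temp_id"], df["real_id"])),
            "temp_to_entry_time": dict(zip(df["temp_id"], df["entry_time"])),
            "temp_to_entry_frame": dict(zip(df["temp_id"], df["entry_frame"])),
        }
    return frames, mappings
```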
Further, the in-front-of-vehicle stay statistical analysis of the trajectory analysis algorithm comprises the following steps:
counting the track points at which a customer stays around a vehicle;
reading the preprocessed track data and computing the distance between each of the customer's track points and the vehicle, where the distance measure is the overlap area between the rectangle calibrated for the customer and the rectangle calibrated for the vehicle; when the overlap area exceeds a set overlap threshold, the customer is judged to have stopped in front of the vehicle.
All counted track points at which customers stay in front of vehicles are output as a Python dictionary, { camera ID: { vehicle ID: [ track points where the customer stays ] } }, i.e., the track points at which every customer stays around the different vehicles seen by the different cameras.
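The overlap-area distance measure can be sketched as below; boxes are assumed to be axis-aligned `(x1, y1, x2, y2)` rectangles in pixels, and the threshold value is illustrative.

```python
def overlap_area(box_a, box_b):
    """Axis-aligned rectangle intersection; boxes are (x1, y1, x2, y2)."""
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(w, 0) * max(h, 0)

def stays_in_front(customer_box, vehicle_box, overlap_threshold=1000):
    # The customer is judged to have stopped in front of the vehicle when the
    # overlap of the two calibrated rectangles exceeds a set threshold
    # (the threshold value here is an assumption).
    return overlap_area(customer_box, vehicle_box) > overlap_threshold
```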
Further, counting the customers' behavior states, i.e., judging from the stay statistics the number and IDs of customers who view and board cars, comprises the following steps:
Recognizing the car-viewing state: whether a customer is viewing a car is judged from how long the customer stays in front of it. The total number of track points at which each customer stays in front of a car is counted; because 4 frames correspond to 1 second, the stay duration can be counted indirectly. When the time a customer stays in front of a car exceeds a set threshold, the customer is considered a car-viewing customer. The output form is a Python dictionary: { camera ID: [ car-viewing customer IDs ] }.
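A sketch of the car-viewing decision: dwell time is recovered from the track-point count at the 4-frames-per-second sampling rate and thresholded. The nesting of the stay-point dictionary by customer, and the 30-second threshold, are assumptions.

```python
FPS = 4  # the sampled frame rate: 4 frames correspond to 1 second

def viewing_customers(stay_points, min_seconds=30):
    """stay_points: {camera_id: {vehicle_id: {customer_id: [track points]}}}.
    Returns {camera_id: [customer_ids]} of customers whose dwell time in front
    of some vehicle exceeds the threshold (structure and threshold assumed)."""
    out = {}
    for cam, vehicles in stay_points.items():
        viewers = set()
        for per_customer in vehicles.values():
            for cust, pts in per_customer.items():
                if len(pts) / FPS > min_seconds:   # point count -> seconds
                    viewers.add(cust)
        out[cam] = sorted(viewers)
    return out
```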
Recognizing the boarding state: whether a customer boards a vehicle is judged from how long, and where, the customer disappears around the vehicle. All consecutive pairs of track points at which the customer stays in front of the vehicle are compared; if the frame-number difference between two consecutive points exceeds a set threshold while both points lie within a set distance of the vehicle, the customer is considered to have boarded, and the track is a complete board, test-drive, and alight track of that customer.
In most cases a customer's ID jumps after boarding and alighting, so the last frame's track point at which the customer stayed in front of the car is taken first, and then a customer with a new ID, appearing at the same position and later than the vanished target customer, is sought; if both track points lie within a set distance of the car and their frame-number difference falls within a set threshold range, the customer is judged to have boarded. The output form is a Python dictionary: { camera ID: { vehicle ID: [ boarding customer IDs ] } }.
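The ID-jump boarding heuristic might look like the following; the track-point layout `(frame_no, x, y, dist_to_vehicle)` and every threshold value are assumptions, since the patent leaves them unspecified.

```python
def boarding_event(last_pt, new_pt, dist_thresh=50.0, pos_thresh=30.0,
                   min_gap=120, max_gap=2400):
    """last_pt: last stay point of the vanished customer; new_pt: first point
    of a customer with a new ID near the same spot. Points are assumed to be
    (frame_no, x, y, dist_to_vehicle); all thresholds are illustrative."""
    frame_gap = new_pt[0] - last_pt[0]
    # both points must lie close to the vehicle ...
    near_vehicle = last_pt[3] < dist_thresh and new_pt[3] < dist_thresh
    # ... at roughly the same position ...
    same_spot = ((new_pt[1] - last_pt[1]) ** 2 +
                 (new_pt[2] - last_pt[2]) ** 2) ** 0.5 < pos_thresh
    # ... with a frame gap inside the set range (long enough to be a real
    # disappearance, short enough to be the same visit)
    return near_vehicle and same_spot and min_gap < frame_gap < max_gap
```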
The judgment of salesperson following comprises scenario one and scenario two. Scenario one asks whether a salesperson follows the customer when the customer enters the store; its input is the customer track data parsed, via data preprocessing, from the camera at the door. Scenario two asks whether a salesperson follows the customer while the customer views a car; its input is the customer track data of stays in front of cars.
Further, in scenario one, m frames of track data of a customer entering the store are taken, then every salesperson track whose start and end frame numbers fall within the customer's frame range is taken; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer.
In scenario two, the frame number of the first track point at which the customer starts viewing the car is found, and the customer's complete track within a certain range before and after that point is taken, along with the salesperson tracks in the same range; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer.
Finally, the salespeople following customers in the two scenarios are aggregated.
The output form is a Python dictionary, { customer ID: salesperson ID }, i.e., the mapping between each customer and the salesperson following them.
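The patent does not name the trajectory-similarity measure; one plausible time-series choice, the mean Euclidean distance over the common track length (lower values mean more similar, matching the "below a set threshold" test above), is sketched here.

```python
import numpy as np

def traj_similarity(track_a, track_b):
    """Time-aligned mean point distance between two tracks of (x, y) points;
    lower = more similar. This measure is an assumption: the patent only says
    similarity is computed on the time series."""
    n = min(len(track_a), len(track_b))
    a = np.asarray(track_a[:n], float)
    b = np.asarray(track_b[:n], float)
    return float(np.linalg.norm(a - b, axis=1).mean())

def followed_by(customer_track, sales_tracks, sim_threshold=30.0):
    """Return IDs of salespeople whose tracks stay close to the customer's
    (similarity below the set threshold, per the scheme above)."""
    return [sid for sid, t in sales_tracks.items()
            if traj_similarity(customer_track, t) < sim_threshold]
```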
Further, the group-profiling analysis counts the number of groups among customers entering the store and comprises the following steps: finding all customers who entered the store together; then taking the first n frames of each such customer's continuous track and computing the pairwise similarity between customers, where customers who entered together and whose similarity is below a set threshold are placed in one group.
Judging that customers entered the store at the same time comprises the following steps: finding the first-frame track point of each customer and computing its distance to the doorway; if the distance is below a set threshold, the customer is considered a store-entering customer; then, from the temporary-ID-to-entry-time mapping computed by the data preprocessing module for all store-entering customers, the customers who entered at the same time are finally obtained.
In general, the inputs of group profiling are the temporary-ID-to-entry-time mapping computed by the preprocessing module and the customer batch data from the door camera, and the output form is a Python dictionary: { group master ID: [ customer ID1, customer ID2, ... ] }.
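The grouping step, in which customers who entered together are merged when their early tracks are similar, can be sketched with a small union-find; the track format, the inlined similarity measure, and the threshold are illustrative assumptions.

```python
def group_customers(tracks, sim_threshold=5.0):
    """tracks: {customer_id: [(x, y), ...]} holding the first n frames of
    customers who entered the store together. Customers with similar tracks
    are merged into one group (structure and threshold are assumptions)."""
    ids = list(tracks)
    parent = {i: i for i in ids}

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def similar(a, b):                  # mean point distance below threshold
        n = min(len(a), len(b))
        d = [((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for (ax, ay), (bx, by) in zip(a[:n], b[:n])]
        return sum(d) / n < sim_threshold

    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if similar(tracks[ids[i]], tracks[ids[j]]):
                pi, pj = find(ids[i]), find(ids[j])
                if pi != pj:
                    parent[pi] = pj     # merge the two groups

    groups = {}
    for cid in ids:
        groups.setdefault(find(cid), []).append(cid)
    return list(groups.values())
```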
The beneficial effects of this technical scheme are as follows:
the method can find the dynamic position of a target in panoramic video and achieve multi-target cross-camera tracking;
the invention can monitor and identify a customer's behavior-trajectory information in real time, intelligently judge the customer's behavior state, centrally and conveniently manage the trajectories of customers and salespeople, and provide customers with a better-optimized, more intelligent shopping experience;
combining this with the vehicles' parking information, the invention supervises the service given during a customer's car purchase, ensuring that the customer enjoys high-quality service; the system can effectively improve a 4S store's patrol management capability, raise its level of informatized monitoring management, build an analysis platform that promptly reveals whether salespeople are following customers as well as the passenger-flow situation, and improve service quality.
Drawings
FIG. 1 is a flow chart of a pedestrian behavior recognition and trajectory tracking method according to the present invention;
FIG. 2 is a flow chart of a method of an image analysis algorithm in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method of a trajectory analysis algorithm according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the invention provides a pedestrian behavior recognition and trajectory tracking method comprising an image analysis algorithm and a trajectory analysis algorithm, where collected video data is processed sequentially by the image analysis algorithm and the trajectory analysis algorithm;
the image analysis algorithm comprises data preprocessing, single-camera pedestrian trajectory tracking, and cross-camera association; the data it acquires and analyzes comprise camera anomaly identification, face detection, pedestrian detection, and feature extraction data;
the trajectory analysis algorithm identifies the activities of customers and salespeople by statistically analyzing these data.
As an optimization of the above embodiment, as shown in fig. 2, the data preprocessing of the image analysis algorithm comprises video-stream capture and frame splitting, pedestrian detection, and feature extraction: the surveillance video is acquired and split into frame images, and pedestrian detection and feature extraction are performed by an image recognition algorithm.
The single-camera pedestrian trajectory tracking of the image analysis algorithm supports multi-target tracking and comprises the following steps:
performing feature extraction and matching on the pedestrian boxes detected in each frame image with a tracking algorithm to obtain tracks, pedestrian features being matched by similarity; an 8-dimensional vector describes the state of a track at a given moment, representing the center position, aspect ratio, and height of the pedestrian box in image coordinates together with the corresponding velocities.
Tracks are predicted and updated with a Kalman filter that adopts a constant-velocity motion model and a linear observation model; for each track, the time elapsed since its last successful match is recorded, and the track is terminated when this time exceeds a set threshold. At runtime the inputs are a customer detection-result box file, a customer feature file, and the JPG or PNG images cut from the video stream; after analysis, the algorithm outputs a customer detection-result box file for each camera with the corresponding customer temp_ID appended.
The adjacent cameras are mounted so that every two of them share an overlap region, and the cross-camera association in the image analysis algorithm comprises the following steps:
performing extrinsic calibration on the overlap region;
establishing an association matrix from the extrinsic calibration result;
and judging, via the association-matrix mapping, whether the targets tracked by the two cameras are the same customer. At runtime the input is the single-camera tracking result file; after the algorithm runs, a customer detection-result box file is output for each camera with the corresponding real customer ID appended.
As an optimization of the above embodiment, as shown in fig. 3, the trajectory analysis algorithm comprises the following steps:
data preprocessing, i.e., reading track data from the original pedestrian tracking tracks;
in-front-of-vehicle stay statistics: counting all track points at which a customer stays around a vehicle, and from these points counting the customer's behavior states, including viewing a car, boarding a car, and being followed by a salesperson;
group profiling: dividing customers into groups by analyzing the track data, and counting batch information and foot-traffic information;
and finally writing the statistical results into a database.
The data preprocessing of the trajectory analysis algorithm preprocesses the original pedestrian tracking tracks and comprises the following steps:
reading the per-camera track data from the storage path of the original tracks, where each camera's track data is stored as a CSV-format text file, and holding each camera's data in memory as a data frame;
and extracting the mappings between each customer's temporary ID and real ID, between the temporary ID and the store-entry time, and between the temporary ID and the store-entry frame number, storing them in memory as key-value pairs. The data are extracted with the DataFrame structure of the pandas package in Python.
The in-front-of-vehicle stay statistical analysis of the trajectory analysis algorithm comprises the following steps:
counting the track points at which a customer stays around a vehicle;
reading the preprocessed track data and computing the distance between each of the customer's track points and the vehicle, where the distance measure is the overlap area between the rectangle calibrated for the customer and the rectangle calibrated for the vehicle; when the overlap area exceeds a set overlap threshold, the customer is judged to have stopped in front of the vehicle.
All counted track points at which customers stay in front of vehicles are output as a Python dictionary, { camera ID: { vehicle ID: [ track points where the customer stays ] } }, i.e., the track points at which every customer stays around the different vehicles seen by the different cameras.
Counting the customers' behavior states, i.e., judging from the stay statistics the number and IDs of customers who view and board cars, comprises the following steps:
Recognizing the car-viewing state: whether a customer is viewing a car is judged from how long the customer stays in front of it. The total number of track points at which each customer stays in front of a car is counted; because 4 frames correspond to 1 second, the stay duration can be counted indirectly. When the time a customer stays in front of a car exceeds a set threshold, the customer is considered a car-viewing customer. The output form is a Python dictionary: { camera ID: [ car-viewing customer IDs ] }.
Recognizing the boarding state: whether a customer boards a vehicle is judged from how long, and where, the customer disappears around the vehicle. All consecutive pairs of track points at which the customer stays in front of the vehicle are compared; if the frame-number difference between two consecutive points exceeds a set threshold while both points lie within a set distance of the vehicle, the customer is considered to have boarded, and the track is a complete board, test-drive, and alight track of that customer.
In most cases a customer's ID jumps after boarding and alighting, so the last frame's track point at which the customer stayed in front of the car is taken first, and then a customer with a new ID, appearing at the same position and later than the vanished target customer, is sought; if both track points lie within a set distance of the car and their frame-number difference falls within a set threshold range, the customer is judged to have boarded. The output form is a Python dictionary: { camera ID: { vehicle ID: [ boarding customer IDs ] } }.
The judgment of salesperson following comprises scenario one and scenario two. Scenario one asks whether a salesperson follows the customer when the customer enters the store; its input is the customer track data parsed, via data preprocessing, from the camera at the door. Scenario two asks whether a salesperson follows the customer while the customer views a car; its input is the customer track data of stays in front of cars.
In scenario one, m frames of track data of a customer entering the store are taken, then every salesperson track whose start and end frame numbers fall within the customer's frame range is taken; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer.
In scenario two, the frame number of the first track point at which the customer starts viewing the car is found, and the customer's complete track within a certain range before and after that point is taken, along with the salesperson tracks in the same range; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer.
Finally, the salespeople following customers in the two scenarios are aggregated.
The output form is a Python dictionary, { customer ID: salesperson ID }, i.e., the mapping between each customer and the salesperson following them.
The group-profiling analysis counts the number of groups among customers entering the store and comprises the following steps: finding all customers who entered the store together; then taking the first n frames of each such customer's continuous track and computing the pairwise similarity between customers, where customers who entered together and whose similarity is below a set threshold are placed in one group.
Judging that customers entered the store at the same time comprises the following steps: finding the first-frame track point of each customer and computing its distance to the doorway; if the distance is below a set threshold, the customer is considered a store-entering customer; then, from the temporary-ID-to-entry-time mapping computed by the data preprocessing module for all store-entering customers, the customers who entered at the same time are finally obtained.
In general, the inputs of group profiling are the temporary-ID-to-entry-time mapping computed by the preprocessing module and the customer batch data from the door camera, and the output form is a Python dictionary: { group master ID: [ customer ID1, customer ID2, ... ] }.
The foregoing shows and describes the general principles and principal features of the invention and its advantages. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which are given in the specification only to illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and all such changes and modifications fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A pedestrian behavior recognition and trajectory tracking method, characterized by comprising an image analysis algorithm and a trajectory analysis algorithm, wherein collected video data is processed sequentially by the image analysis algorithm and the trajectory analysis algorithm;
the image analysis algorithm comprises data preprocessing, single-camera pedestrian trajectory tracking, and cross-camera association, and the data it acquires and analyzes comprise camera anomaly identification, face detection, pedestrian detection, and feature extraction data;
the trajectory analysis algorithm identifies the activities of customers and salespeople by statistically analyzing these data;
the judgment of salesperson following comprises scenario one and scenario two; scenario one asks whether a salesperson follows the customer when the customer enters the store, its input being the customer track data parsed, via data preprocessing, from the camera at the door; scenario two asks whether a salesperson follows the customer while the customer views a car, its input being the customer track data of stays in front of cars;
in scenario one, m frames of track data of a customer entering the store are taken, then every salesperson track whose start and end frame numbers fall within the customer's frame range is taken; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer;
in scenario two, the frame number of the first track point at which the customer starts viewing the car is found, and the customer's complete track within a certain range before and after that point is taken, along with the salesperson tracks in the same range; the similarity between the customer track and each salesperson track is computed on the time series, and when the track similarity is below a set threshold the salesperson is considered to be following the customer;
and finally, the salespeople following customers in the two scenarios are aggregated.
2. The pedestrian behavior recognition and trajectory tracking method according to claim 1, wherein the data preprocessing of the image analysis algorithm comprises video-stream capture and frame splitting, pedestrian detection, and feature extraction: the surveillance video is acquired and split into frame images, and pedestrian detection and feature extraction are performed by the image recognition algorithm.
3. The pedestrian behavior recognition and trajectory tracking method according to claim 2, wherein the single-camera pedestrian trajectory tracking of the image analysis algorithm supports multi-target tracking and comprises the following steps:
performing feature extraction and matching on the pedestrian boxes detected in each frame image with a tracking algorithm to obtain tracks, pedestrian features being matched by similarity; an 8-dimensional vector describes the state of a track at a given moment, representing the center position, aspect ratio, and height of the pedestrian box in image coordinates together with the corresponding velocities;
and predicting and updating tracks with a Kalman filter that adopts a constant-velocity motion model and a linear observation model, recording for each track the time elapsed since its last successful match, the track being terminated when this time exceeds a set threshold.
4. The pedestrian behavior recognition and trajectory tracking method according to claim 3, wherein adjacent cameras are mounted so that every two of them share an overlap region, the cross-camera association in the image analysis algorithm comprising the following steps:
performing extrinsic calibration on the overlap region;
establishing an association matrix from the extrinsic calibration result;
and judging, via the association-matrix mapping, whether the targets tracked by the two cameras are the same customer.
5. The pedestrian behavior recognition and trajectory tracking method according to claim 1 or 4, wherein the trajectory analysis algorithm comprises the following steps:
data preprocessing, i.e., reading track data from the pedestrian tracking tracks;
in-front-of-vehicle stay statistics: counting all track points at which a customer stays around a vehicle, and from these points counting the customer's behavior states, including viewing a car, boarding a car, and being followed by a salesperson;
group profiling: dividing customers into groups by analyzing the track data, and counting batch information and foot-traffic information;
and finally writing the statistical results into a database.
6. The pedestrian behavior recognition and trajectory tracking method according to claim 5, wherein the data preprocessing of the trajectory analysis algorithm preprocesses the original pedestrian tracking trajectories, comprising the steps of:
reading the track data recorded under each camera from the storage path of the original trajectories, where each camera's track data is stored as a CSV-format text file and is loaded into memory as a data frame;
and extracting the mappings between a customer's temporary ID and real ID, between the temporary ID and the store-entry time, and between the temporary ID and the store-entry frame number, storing these mappings in memory as key-value pairs.
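A minimal sketch of this preprocessing, assuming hypothetical CSV column names (`temp_id`, `frame`, `time`, `x`, `y`) and a plain dictionary in place of a data-frame library:

```python
import csv
import io
from collections import defaultdict

def load_tracks(csv_text):
    """Parse one camera's CSV track file into per-track point lists plus
    the key-value mappings described in claim 6 (temporary ID -> entry
    time, temporary ID -> entry frame). Column names are assumptions."""
    tracks = defaultdict(list)
    entry_time, entry_frame = {}, {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        tid = row["temp_id"]
        tracks[tid].append((int(row["frame"]), float(row["x"]), float(row["y"])))
        if tid not in entry_time:          # first appearance = store entry
            entry_time[tid] = row["time"]
            entry_frame[tid] = int(row["frame"])
    return tracks, entry_time, entry_frame
```

In a deployed system the same loop would run once per camera file, and the temporary-ID-to-real-ID mapping would come from the face recognition stage rather than from the CSV itself.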
7. The pedestrian behavior recognition and trajectory tracking method according to claim 6, wherein the statistical analysis of customer stops in front of vehicles by the trajectory analysis algorithm comprises the steps of:
counting the track points at which a customer lingers around a vehicle;
reading the preprocessed track data and computing the distance between each of the customer's track points and the vehicle, where the distance measure is the overlap area between the rectangle bounding the customer and the rectangle bounding the vehicle; when the overlap area exceeds a set overlap threshold, the customer is judged to have stopped in front of the vehicle.
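The overlap-area distance measure of claim 7 reduces to axis-aligned rectangle intersection. A sketch, with the area threshold being an assumed value:

```python
def overlap_area(box_a, box_b):
    """Intersection area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(0, w) * max(0, h)   # disjoint boxes yield zero

def stopped_in_front(customer_box, car_box, area_thresh=500):
    """Claim 7's criterion: the customer counts as stopping at the car
    when the box overlap exceeds a threshold (value is an assumption)."""
    return overlap_area(customer_box, car_box) > area_thresh
```

Using overlap area instead of center-to-center distance makes the test scale naturally with how large the customer and vehicle appear in the image, which varies with camera placement.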
8. The pedestrian behavior recognition and trajectory tracking method according to claim 7, wherein counting the customer behavior states determines, from the statistical analysis of the stops, the number and IDs of customers who watch and board cars, comprising the steps of:
recognizing the car-watching state: judging whether a customer is watching a car from the length of the customer's stay in front of it, by counting the total number of track points each customer accumulates there; when the stay exceeds a set threshold, the customer is considered a car-watching customer;
recognizing the boarding state: judging whether a customer has gotten into a car from how long, and where, the customer disappears around the vehicle; consecutive track points of the customer's stay in front of the car are compared, and if the frame-number difference between two consecutive points exceeds a set threshold while both points lie within a set distance of the vehicle, the customer is considered to have boarded, and the track is treated as a complete boarding, test-drive and alighting track;
and judging sales follow-up, which covers two scenarios: scenario one checks whether sales staff follow the customer when the customer enters the store, and its input is the customer track data extracted by data preprocessing from the entrance camera; scenario two checks whether sales staff follow the customer while the customer is watching a car, and its input is the customer track data of stops in front of vehicles.
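The car-watching and boarding rules of claim 8 can be sketched as threshold tests on one customer's stay, where `track_points` holds (frame number, distance to car) pairs and all threshold values are illustrative assumptions:

```python
def classify_customer(track_points, dwell_thresh=50, gap_thresh=100, dist_thresh=2.0):
    """Sketch of claim 8 for one customer's stay at a vehicle.
    track_points: list of (frame_number, distance_to_car) tuples.
    Returns (is_watching, is_boarding)."""
    # Car watching: total dwell (track-point count) exceeds a threshold.
    is_watching = len(track_points) > dwell_thresh

    # Boarding: a long frame-number gap between consecutive points while
    # both points sit close to the car is read as the customer getting on,
    # test-driving, and getting off (the track "disappears" into the car).
    is_boarding = False
    for (f1, d1), (f2, d2) in zip(track_points, track_points[1:]):
        if f2 - f1 > gap_thresh and d1 < dist_thresh and d2 < dist_thresh:
            is_boarding = True
    return is_watching, is_boarding
```

The key observation encoded here is that boarding looks like a gap in an otherwise continuous track, whereas simply walking away also creates a gap but with the last point far from the vehicle.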
9. The pedestrian behavior recognition and trajectory tracking method according to claim 1, wherein the number of groups among customers entering the store is counted by group-profile analysis, comprising the steps of: finding all customers who entered the store together; then taking the first n frames of each such customer's continuous track, computing the estimated similarity between each pair of customers, and assigning customers who entered together and whose similarity measure is below a set threshold to the same group;
wherein judging that customers enter the store at the same time comprises the steps of: finding the first track point of each customer and computing its distance to the doorway, and if the distance is below a set threshold, treating the customer as a store-entering customer; and looking up, for all store-entering customers, the temporary-ID-to-entry-time mapping computed by the data preprocessing module, to finally obtain the customers who entered the store at the same time.
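Claim 9's grouping can be sketched as a pairwise comparison of the early portions of entering customers' tracks. Here the "estimated similarity" is taken to be mean point-to-point distance over the first n frames (lower = more similar), which is an assumption, as is the greedy grouping strategy and the threshold value:

```python
import math

def track_distance(track_a, track_b, n=10):
    """Mean point-to-point distance over the first n frames of two
    entering customers' tracks (lower means more likely companions)."""
    pairs = list(zip(track_a[:n], track_b[:n]))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

def group_entering_customers(tracks, thresh=1.5):
    """Greedy grouping sketch: a customer joins an existing group if the
    early track stays within `thresh` (an assumed value) of any member;
    otherwise the customer starts a new group."""
    groups = []
    for tid, tr in tracks.items():
        for g in groups:
            if any(track_distance(tr, tracks[m]) < thresh for m in g):
                g.append(tid)
                break
        else:
            groups.append([tid])
    return groups
```

Restricting the comparison to customers who entered at the same time (per the mapping from claim 6) keeps the pairwise comparison small and avoids grouping strangers who merely walked similar paths at different times.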
CN201811516551.8A 2018-12-12 2018-12-12 Pedestrian behavior recognition and trajectory tracking method Active CN109784162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811516551.8A CN109784162B (en) 2018-12-12 2018-12-12 Pedestrian behavior recognition and trajectory tracking method

Publications (2)

Publication Number Publication Date
CN109784162A CN109784162A (en) 2019-05-21
CN109784162B true CN109784162B (en) 2021-04-13

Family

ID=66496857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811516551.8A Active CN109784162B (en) 2018-12-12 2018-12-12 Pedestrian behavior recognition and trajectory tracking method

Country Status (1)

Country Link
CN (1) CN109784162B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309716A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Service tracks method, apparatus, equipment and storage medium based on face and posture
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN110334674A (en) * 2019-07-10 2019-10-15 哈尔滨理工大学 A kind of tracking of plane free body track identification and prediction technique
CN110473016A (en) * 2019-08-14 2019-11-19 北京市商汤科技开发有限公司 Data processing method, device and storage medium
CN110418114B (en) * 2019-08-20 2021-11-16 京东方科技集团股份有限公司 Object tracking method and device, electronic equipment and storage medium
CN110796040B (en) * 2019-10-15 2022-07-05 武汉大学 Pedestrian identity recognition method based on multivariate spatial trajectory correlation
CN110909765B (en) * 2019-10-24 2023-06-20 中电海康集团有限公司 Pedestrian behavior pattern classification method for big track data
CN110796494B (en) * 2019-10-30 2022-09-27 北京爱笔科技有限公司 Passenger group identification method and device
CN112906439A (en) * 2019-12-04 2021-06-04 上海稻知信息科技有限公司 Passenger flow analysis method and system based on target tracking and behavior detection
CN111582983A (en) * 2020-05-07 2020-08-25 悠尼客(上海)企业管理有限公司 Personalized control method based on face recognition and customer behaviors
CN111597999A (en) * 2020-05-18 2020-08-28 常州工业职业技术学院 4S shop sales service management method and system based on video detection
CN111612821A (en) * 2020-05-20 2020-09-01 北京海月水母科技有限公司 Human-shaped track technology based on multi-frame filtering analysis in three-dimensional space
CN111885354B (en) * 2020-07-15 2022-04-29 中国工商银行股份有限公司 Service improvement discrimination method and device for bank outlets
CN112132048A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Community patrol analysis method and system based on computer vision
CN112257660B (en) * 2020-11-11 2023-11-17 汇纳科技股份有限公司 Method, system, equipment and computer readable storage medium for removing invalid passenger flow
CN112733719B (en) * 2021-01-11 2022-08-02 西南交通大学 Cross-border pedestrian track detection method integrating human face and human body features
CN113034548B (en) * 2021-04-25 2023-05-26 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal
CN113688679B (en) * 2021-07-22 2024-03-08 南京视察者智能科技有限公司 Method for preventing and controlling early warning of key personnel
CN114937241B (en) * 2022-06-01 2024-03-26 北京凯利时科技有限公司 Transition zone-based passenger flow statistics method and system and computer program product
CN114821487B (en) * 2022-06-29 2022-10-04 珠海视熙科技有限公司 Passenger flow statistical method, device, intelligent terminal, system, equipment and medium
CN115661208B (en) * 2022-12-26 2023-04-07 合肥疆程技术有限公司 Camera posture and stain detection method and device and automobile

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1965335A (en) * 2004-03-15 2007-05-16 阿比特隆公司 Methods and systems for gathering market research data within commercial establishments
CN102855475A (en) * 2012-09-17 2013-01-02 广州杰赛科技股份有限公司 School bus monitoring method and school bus monitoring system
CN103971264A (en) * 2013-02-01 2014-08-06 松下电器产业株式会社 Customer behavior analysis device, customer behavior analysis system and customer behavior analysis method
CN108629791A (en) * 2017-03-17 2018-10-09 北京旷视科技有限公司 Pedestrian tracting method and device and across camera pedestrian tracting method and device
CN108805252A (en) * 2017-04-28 2018-11-13 西门子(中国)有限公司 A kind of passenger's method of counting, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596659A (en) * 2018-04-16 2018-09-28 上海小蚁科技有限公司 The forming method and device, storage medium, terminal of objective group's portrait

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Juanjuan, "Research and Implementation of Customer and Behavior Recognition Algorithms for Automobile 4S Stores", China Masters' Theses Full-text Database, Information Science and Technology, 2016-01-15, No. 01, sections 1.4, 3.3, 5.2.4 *
Zhang Juanjuan. "Research and Implementation of Customer and Behavior Recognition Algorithms for Automobile 4S Stores". China Masters' Theses Full-text Database, Information Science and Technology. 2016, No. 01 *

Also Published As

Publication number Publication date
CN109784162A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109784162B (en) Pedestrian behavior recognition and trajectory tracking method
CN105844234B (en) Method and equipment for counting people based on head and shoulder detection
CN106354816B (en) video image processing method and device
US7796780B2 (en) Target detection and tracking from overhead video streams
US8682036B2 (en) System and method for street-parking-vehicle identification through license plate capturing
Bas et al. Automatic vehicle counting from video for traffic flow analysis
US9940633B2 (en) System and method for video-based detection of drive-arounds in a retail setting
US10552687B2 (en) Visual monitoring of queues using auxillary devices
WO2017122258A1 (en) Congestion-state-monitoring system
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
US10210392B2 (en) System and method for detecting potential drive-up drug deal activity via trajectory-based analysis
US10262328B2 (en) System and method for video-based detection of drive-offs and walk-offs in vehicular and pedestrian queues
Hampapur et al. Searching surveillance video
CN115909223A (en) Method and system for matching WIM system information with monitoring video data
KR20140132140A (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
Yu et al. Length-based vehicle classification in multi-lane traffic flow
Sridevi et al. Automatic generation of traffic signal based on traffic volume
KR101766467B1 (en) Alarming apparatus and methd for event occurrence, and providing method of event occurrence determination model
CN111062294B (en) Passenger flow queuing time detection method, device and system
KR102133045B1 (en) Method and system for data processing using CCTV images
Nicolas et al. Video traffic analysis using scene and vehicle models
Maaloul Video-based algorithms for accident detections
Schuster et al. Multi-cue learning and visualization of unusual events

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 610000 No. 270, floor 2, No. 8, Jinxiu street, Wuhou District, Chengdu, Sichuan

Patentee after: Chengdu shuzhilian Technology Co.,Ltd.

Address before: No.2, 4th floor, building 1, Jule road crossing, Section 1, West 1st ring road, Chengdu, Sichuan 610000

Patentee before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd.
