WO2022127181A1 - 客流的监测方法、装置、电子设备及存储介质 - Google Patents

客流的监测方法、装置、电子设备及存储介质 Download PDF

Info

Publication number
WO2022127181A1
WO2022127181A1 PCT/CN2021/114965 CN2021114965W
Authority
WO
WIPO (PCT)
Prior art keywords
sequence
human head
target
dimensional
head frame
Prior art date
Application number
PCT/CN2021/114965
Other languages
English (en)
French (fr)
Inventor
郝凯旋
黄哲
王孝宇
胡文泽
Original Assignee
深圳云天励飞技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术股份有限公司
Publication of WO2022127181A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Definitions

  • the invention relates to the field of artificial intelligence, and in particular, to a method, device, electronic device and storage medium for monitoring passenger flow.
  • the embodiment of the present invention provides a method for monitoring passenger flow, which can improve the counting accuracy of passenger flow, thereby improving the monitoring effect of passenger flow.
  • an embodiment of the present invention provides a method for monitoring passenger flow, the method comprising:
  • the first head frame and the second head frame are matched to obtain a paired head frame sequence for each target person, where the paired head frame sequence includes at least one paired head frame of the target person;
  • three-dimensional reconstruction of the human head is performed in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person, and the three-dimensional human head sequence includes the three-dimensional human head of the target person;
  • passenger flow monitoring is performed on the target area.
  • the first target image sequence is acquired by a first camera
  • the second target image sequence is acquired by a second camera
  • the method further includes:
  • the three-dimensional space is constructed and obtained.
  • performing ground calibration in the coordinate system of the first camera or the second camera to obtain the calibrated ground including:
  • the calibration object information is calibration object information in the coordinate system of the first camera or the second camera;
  • the ground is calibrated according to the calibration object information, and the calibrated ground is obtained.
  • performing ground calibration in the coordinate system of the first camera or the second camera to obtain the calibrated ground including:
  • performing three-dimensional reconstruction of the human head on the paired human head frame sequence in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person including:
  • three-dimensional reconstruction of the human head is performed in a preset three-dimensional space to obtain a three-dimensional human head of the current frame;
  • a three-dimensional human head sequence of the target person is obtained.
  • the calculating the effective disparity map of the first head frame and the second head frame in the paired head frame in the current frame includes:
  • the disparity map falls within the effective disparity interval, it is determined that the disparity map is an effective disparity map.
  • performing three-dimensional reconstruction of a human head in a preset three-dimensional space through the effective disparity map to obtain a three-dimensional human head of the current frame including:
  • three-dimensional reconstruction of the human head is performed in the preset three-dimensional space to obtain the three-dimensional human head of the current frame.
  • the three-dimensional space includes a demarcated ground
  • the monitoring of passenger flow in the target area according to the three-dimensional head sequence includes:
  • passenger flow monitoring is performed on the target area.
  • the demarcated ground includes a target demarcation area corresponding to the target area
  • the monitoring of passenger flow in the target area according to the projection trajectory includes:
  • the passenger flow monitoring is performed on the target area.
  • an embodiment of the present invention further provides a device for monitoring passenger flow, the device comprising:
  • a first acquisition module configured to acquire a first target image sequence and a second target image sequence of the target area, where the first target image sequence and the second target image sequence are acquired at the same moment and from different angles;
  • a processing module configured to perform human head detection on the first target image sequence and the second target image sequence respectively, to obtain a first human head frame sequence and a second human head frame sequence, the first human head frame sequence including a first head frame of at least one target person, and the second human head frame sequence including a second head frame of at least one target person;
  • a matching module configured to match the first human head frame with the second human head frame according to the time sequence relationship between the first human head frame sequence and the second human head frame sequence, to obtain a paired head frame sequence for each target person, the paired head frame sequence comprising at least one paired head frame of the target person;
  • a three-dimensional reconstruction module configured to perform three-dimensional reconstruction of the human head in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person, where the three-dimensional human head sequence includes the three-dimensional human head of the target person;
  • the monitoring module is used for monitoring the passenger flow of the target area according to the three-dimensional human head sequence.
  • an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, when the processor executes the computer program The steps in the method for monitoring passenger flow provided by the embodiment of the present invention are implemented.
  • an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for monitoring passenger flow provided by the embodiment of the present invention are implemented.
  • the first target image sequence and the second target image sequence of the target area are acquired, where the first target image sequence and the second target image sequence are acquired at the same moment and from different angles;
  • human head detection is performed on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, where the first human head frame sequence includes a first head frame of at least one target person and the second human head frame sequence includes a second head frame of at least one target person;
  • according to the time sequence relationship between the first head frame sequence and the second head frame sequence, the first head frame and the second head frame are matched to obtain a paired head frame sequence for each target person, where the paired head frame sequence includes at least one paired head frame of the target person;
  • according to the paired head frame sequence, three-dimensional reconstruction of the head is performed in a preset three-dimensional space to obtain a three-dimensional head sequence of the target person, where the three-dimensional head sequence includes the three-dimensional head of the target person;
  • according to the three-dimensional head sequence, passenger flow monitoring is performed on the target area.
  • through the head images of the target person taken from different angles, more accurate target head information is extracted for 3D reconstruction, so that the position of the 3D target head in the 3D space is more accurate, thereby improving the counting accuracy of passenger flow and, in turn, the passenger flow monitoring effect.
  • FIG. 1 is a flowchart of a method for monitoring passenger flow provided by an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for constructing a three-dimensional space provided by an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a three-dimensional human head reconstruction method provided by an embodiment of the present invention.
  • FIG. 4a is a schematic diagram of a relationship between a region and a state according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a device for monitoring passenger flow provided by an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another passenger flow monitoring device provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a calibration module provided by an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of another calibration module provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a three-dimensional reconstruction module provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a second computing submodule provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a reconstruction submodule provided by an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a monitoring module provided by an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a monitoring submodule provided by an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for monitoring passenger flow provided by an embodiment of the present invention. As shown in FIG. 1, the method is used to monitor passenger flow regularly or in real time, and includes the following steps:
  • the above-mentioned first target image sequence and the above-mentioned second target image sequence are acquired at the same moment and from different angles.
  • the above-mentioned first target image sequence and second target image sequence include at least one target person.
  • the first target image sequence and the second target image sequence can be collected by two cameras with different shooting angles; the two cameras can be calibrated and associated during installation, so that the two cameras share the same coordinate system for shooting and capture images at the same time.
  • the first target image sequence and the second target image sequence may also be collected through a calibrated binocular camera.
  • the above-mentioned first target image sequence and second target image sequence may be continuous frame image sequences (video stream images) collected in real time by the binocular camera, or may be continuous frame images collected historically by the binocular camera.
  • the first head frame sequence includes a first head frame of at least one target person
  • the second head frame sequence includes a second head frame of at least one target person.
  • the first head frame corresponds to the first target image sequence
  • the second head frame corresponds to the second target image sequence.
  • the first target image sequence and the second target image sequence can be detected by the human head detection model, respectively, to obtain the first human head frame sequence and the second human head frame sequence.
  • the human head detection model may be used to perform human head detection on the first target image sequence and the second target image sequence frame by frame, so as to obtain a human head frame corresponding to each frame of image.
  • head frame tracking may be performed on the first head frame sequence through a head tracking model, to obtain a tracked first head frame sequence according to the first head frame corresponding to each frame of the first target image; the first human head frame sequence includes the first human head frame detected in each frame of the first target image.
  • likewise, head frame tracking may be performed on the second head frame sequence through the head tracking model, to obtain a tracked second head frame sequence according to the second head frame corresponding to each frame of the second target image.
  • head frame tracking may be performed on the first human head frame sequence and on the second human head frame sequence through the human head tracking model at the same time, to obtain the tracked head frames corresponding to each frame of the first target image and second target image.
  • the first head frame sequence may be a first head frame sequence corresponding to multiple target persons
  • the second head frame sequence may be a second head frame sequence corresponding to multiple target persons
  • the first head frame may be a plurality of first head frames, corresponding to a plurality of target persons
  • the second head frame may also be a plurality of second head frames, corresponding to a plurality of target persons.
  • the head frame of each target person can be assigned an ID according to the head frame tracking algorithm, that is, the head frame of each target person corresponds to a head frame ID, and the head frame ID is used to identify whether corresponding head frames belong to the same target person.
  • the same head frame ID corresponds to the head frame of the same target person.
  • different head frame IDs can be assigned to different target persons, and then the first head frame sequences of different target persons can be obtained according to the different head frame IDs.
  • a sequence of second head frames of different target persons can be obtained according to the similarity between the first head frame and the second head frame.
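The ID assignment described above can be sketched with a minimal greedy IoU-based head-frame tracker. This is an illustrative assumption, not the patent's specific tracking algorithm: the `HeadTracker` class, the greedy matching, and the 0.3 overlap threshold are all hypothetical.

```python
def iou(a, b):
    # a, b: head frames as (x1, y1, x2, y2) rectangles
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class HeadTracker:
    """Assigns a persistent head frame ID to each detection across frames."""

    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.next_id = 0
        self.tracks = {}  # head frame ID -> last seen head frame

    def update(self, boxes):
        ids = []
        unmatched = dict(self.tracks)
        for box in boxes:
            # greedily match against the best-overlapping previous head frame
            best = max(unmatched.items(),
                       key=lambda kv: iou(kv[1], box), default=None)
            if best and iou(best[1], box) >= self.iou_thresh:
                tid = best[0]
                del unmatched[tid]
            else:                      # no overlap: new target person
                tid = self.next_id
                self.next_id += 1
            self.tracks[tid] = box
            ids.append(tid)
        return ids
```

Feeding the per-frame detections through `update` yields, for each target person, the head frame sequence grouped by ID, as described above.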
  • according to the time sequence relationship between the first head frame sequence and the second head frame sequence, the first head frame is matched with the second head frame to obtain a paired head frame sequence for each target person.
  • the above-mentioned paired head frame sequence includes at least one paired head frame of the target person.
  • the timing relationship between the first human head frame sequence and the second human head frame sequence may be determined by the timing of each image frame in the first target image sequence and the second target image sequence; the obtained head frames follow the same time sequence as the target image sequences.
  • since the frame images of the first target image sequence and the second target image sequence are captured synchronously, the first human head frame sequence and the second human head frame sequence extracted from them are also synchronized. Therefore, the synchronized first head frame and second head frame can be matched by similarity to obtain a paired head frame sequence for each target person.
  • the above-mentioned paired head frame sequence includes a paired head frame corresponding to each frame of the first target image and the second target image, and each paired head frame includes the first head frame of the same target person and the Second head frame.
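The similarity matching of synchronized head frames can be sketched as follows. In a rectified stereo pair the same head appears on (nearly) the same image row in both views, so this hypothetical `match_head_frames` pairs boxes by row and size similarity; the scoring function and the 20-pixel row tolerance are assumptions, not the patent's matching criterion.

```python
def match_head_frames(first_frames, second_frames, max_row_diff=20):
    """Pair head frames from two synchronized views of the same moment.

    first_frames / second_frames: lists of (x1, y1, x2, y2) boxes from
    the first and second target images.  Returns a list of paired head
    frames (first_box, second_box).  Illustrative sketch only.
    """
    def center_y(b):
        return (b[1] + b[3]) / 2.0

    def score(a, b):  # lower = more similar (row alignment + height)
        return abs(center_y(a) - center_y(b)) + abs((a[3] - a[1]) - (b[3] - b[1]))

    pairs, used = [], set()
    for a in first_frames:
        candidates = [(score(a, b), j)
                      for j, b in enumerate(second_frames) if j not in used]
        if not candidates:
            break
        s, j = min(candidates)
        if abs(center_y(a) - center_y(second_frames[j])) <= max_row_diff:
            pairs.append((a, second_frames[j]))
            used.add(j)
    return pairs
```

Applying this frame by frame over the two synchronized sequences yields the paired head frame sequence of each target person.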
  • the first head frame and the second head frame of the same target person can be used to calculate the head depth information of the target person in the first head frame and the second head frame.
  • the corresponding first human head image can be extracted from the first target image according to the first human head frame, and the corresponding second human head image can be extracted from the second target image according to the second human head frame; the first head image and the second head image are head images of the same target person at the same moment.
  • the depth information of the target person's head can be calculated from the first head image and the second head image of the same target person.
  • three-dimensional reconstruction of the head in the preset three-dimensional space is performed to obtain the three-dimensional head of the target person; by performing three-dimensional head reconstruction for each paired head frame in the paired head frame sequence of the target person, the three-dimensional head sequence of the target person can be obtained.
  • the position of the reconstructed 3D head of the target person in the 3D space will also change correspondingly, thereby forming the movement trajectory of the 3D head in the 3D space.
  • the movement trajectory of the target person can be obtained according to the movement trajectory of the 3D human head in the 3D space.
  • the movement trajectory of the three-dimensional head in the three-dimensional space can be mapped to the real space, and the movement trajectory of the target person in the real space can be obtained to monitor the passenger flow in the target area.
  • the position of the target area in the three-dimensional space can also be mapped in the three-dimensional space, and the passenger flow of the target area can be monitored according to the movement trajectory of the three-dimensional head in the three-dimensional space and the position of the target area in the three-dimensional space.
  • the above-mentioned monitoring of the passenger flow in the target area may be the statistical monitoring of the passenger flow, for example, the statistical monitoring of the passenger flow in and out of the target area, or the monitoring of the passenger flow trajectory.
  • the above-mentioned first target image sequence is acquired by a first camera, and the above-mentioned second target image sequence is acquired by a second camera; according to the installation settings of the first camera and the second camera, a 3D space suitable for 3D human head reconstruction can be constructed.
  • FIG. 2 is a flowchart of a method for constructing a three-dimensional space provided by an embodiment of the present invention. As shown in FIG. 2, the following steps are included:
  • the first camera and the second camera may be a left-eye camera and a right-eye camera in a binocular camera, respectively.
  • the internal and external parameters of the first camera and the second camera are calibrated, and the ground is calibrated to obtain the calibrated ground.
  • the above-mentioned calibration of the internal and external parameters of the first camera and the second camera may adopt a checkerboard calibration method.
  • the checkerboard is used for single-camera calibration and dual-camera calibration, respectively, to obtain the internal parameters of the first camera, the internal parameters of the second camera, and the external parameters between the first camera and the second camera.
  • the above-mentioned calibrated ground can be obtained based on the coordinate system of the first camera or based on the coordinate system of the second camera, and the image collected by the first camera can be coordinate-transformed with respect to the image collected by the second camera. Specifically, the coordinates of the image collected by the first camera can be transformed into the coordinate system of the second camera through the internal parameters of the first camera, the internal parameters of the second camera, and the external parameters between the two cameras. Therefore, the calibrated ground can be obtained by selecting the coordinate system of either the first camera or the second camera.
  • the calibration object information associated with the target area may be acquired, and the calibration object information is information of the calibration objects on the same plane as the ground.
  • the above-mentioned calibration object information is the calibration object information in the coordinate system of the above-mentioned first camera or the second camera; the ground is calibrated according to the above-mentioned calibration object information, and the calibrated ground is obtained.
  • the above-mentioned calibration object information includes corner point information, and the PnP (Perspective-n-Point) algorithm can be used to calculate the camera pose according to the corner point information, so as to obtain ground parameters, and the calibrated ground can be obtained according to the ground parameters.
  • PnP (Perspective-n-Point)
  • a QR code (two-dimensional code) can be set on the ground in or near the target area, and the QR code can be associated with the target area as specific calibration information.
  • the four corner points of the QR code are refined to sub-pixel accuracy, and the camera pose is obtained from the four corner points through the PnP algorithm, thereby obtaining the calibrated ground.
  • the camera pose can be calculated through the known camera internal parameters, the coordinates of the four corner points in the world coordinate system, and the coordinates in the image coordinate system, so as to obtain the ground parameters.
  • the calibrated ground is obtained according to the ground parameters.
  • the ground feature points corresponding to the first camera or the second camera may be computed and extracted, and the ground feature points may be triangulated to obtain the three-dimensional space points corresponding to them; plane parameters are then fitted to the three-dimensional space points to obtain the calibrated ground. Specifically, after the ground feature points are computed and extracted, they can be triangulated to obtain the point cloud coordinates of the feature points in the world coordinate system.
  • the Hough-transform point cloud plane detection algorithm and the RANSAC random sample consensus algorithm fit the plane parameters to obtain the ground parameters, and the calibrated ground is obtained according to the ground parameters.
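The RANSAC plane-fitting step can be sketched in a few lines of NumPy. This is an illustrative implementation only: the iteration count and inlier threshold are assumed values, and the Hough-transform plane detection mentioned alongside it is not shown.

```python
import random
import numpy as np

def fit_ground_plane_ransac(points, n_iters=200, inlier_thresh=0.05, seed=0):
    """Fit a plane n·x + d = 0 to triangulated ground points via RANSAC.

    points: (N, 3) array of 3D space points in the camera/world frame.
    Returns the unit normal n and offset d of the plane with the most
    inliers.  Illustrative sketch of the plane-parameter fitting above.
    """
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iters):
        i, j, k = rng.sample(range(len(pts)), 3)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(pts[i])
        # count points within the distance threshold of this candidate plane
        count = int(np.sum(np.abs(pts @ n + d) < inlier_thresh))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```

The returned (n, d) are the ground parameters from which the calibrated ground is obtained.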
  • the above two ground calibration methods may be combined, for example, through feature extraction, ground calibration object information is extracted, and the ground calibration object information is a calibration object on the same plane as the ground.
  • the feature points of the ground calibration object are computed and extracted, and then triangulated to obtain the point cloud coordinates of the feature points of the ground calibration object in the world coordinate system.
  • the Hough-transform point cloud plane detection algorithm and the RANSAC random sample consensus algorithm fit the plane parameters to obtain the ground parameters, and the calibrated ground is obtained according to the ground parameters.
  • the calibrated ground is obtained based on the coordinate system of the first camera or of the second camera; therefore, the constructed three-dimensional space is also based on the first camera or on the second camera. Since the three-dimensional space is established in the camera coordinate system, more accurate three-dimensional head information can be obtained by reconstructing the three-dimensional human head in this space.
  • FIG. 3 is a flowchart of a 3D human head reconstruction method provided by an embodiment of the present invention. As shown in FIG. 3, the method includes the following steps:
  • the above-mentioned disparity map refers to the displacement between corresponding points in the two images; the same disparity means that the corresponding objects are at the same distance from the camera.
  • the effective disparity interval can be calculated according to the preset prior disparity; the disparity map of the first head frame and the second head frame in the paired head frame of the current frame is calculated, and it is judged whether the disparity map falls within the effective disparity interval; if so, the disparity map is determined to be an effective disparity map.
  • the effective disparity interval can be calculated from the preset disparity threshold. For example, if the prior disparity is 64 and the threshold is 10, the effective disparity interval is 54-74; that is, when the value corresponding to the disparity map of the image falls within 54-74, the disparity map of the image is an effective disparity map.
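The validity test in this example can be expressed directly. A minimal sketch, assuming the representative value of a disparity map is its mean (the source does not specify which statistic is compared against the interval):

```python
def effective_disparity(disparity_map, prior_disparity=64, threshold=10):
    """Return the disparity map if its representative value falls within
    the effective interval [prior - threshold, prior + threshold],
    otherwise None.  disparity_map is a flat sequence of disparity values;
    using the mean as the representative value is an assumption."""
    lo, hi = prior_disparity - threshold, prior_disparity + threshold
    value = sum(disparity_map) / len(disparity_map)
    return disparity_map if lo <= value <= hi else None
```

With the prior of 64 and threshold of 10 above, a map averaging 64 is kept while a map averaging 12 is rejected as invalid.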
  • SGBM (Semi-Global Block Matching)
  • the first head frame includes the head image of the target person in the first target image, and the second head frame includes the head image of the target person in the second target image; the disparity map of the paired head frames can be understood as the disparity between the head image of the target person in the first target image and the head image of the target person in the second target image.
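For intuition about how such a disparity map arises, here is a toy scanline block matcher using the sum of absolute differences (SAD). The text names SGBM, which adds semi-global smoothness optimization on top of this basic matching idea, so this naive single-row version is illustrative only.

```python
import numpy as np

def scanline_disparity(left_row, right_row, block=5, max_disp=16):
    """Naive SAD block matching along one rectified scanline.

    For each pixel of the left row, find the horizontal shift
    (disparity) whose block of `block` pixels has the smallest sum of
    absolute differences in the right row.  Toy stand-in for SGBM.
    """
    left = np.asarray(left_row, dtype=float)
    right = np.asarray(right_row, dtype=float)
    half = block // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half + max_disp, len(left) - half):
        patch = left[x - half:x + half + 1]
        # cost of every candidate disparity d: compare against the
        # right-row block shifted left by d
        costs = [np.abs(patch - right[x - d - half:x - d + half + 1]).sum()
                 for d in range(max_disp)]
        disp[x] = int(np.argmin(costs))
    return disp
```

Applied to the pixels inside a paired head frame, the resulting per-pixel shifts form the head's disparity map.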
  • the human head depth information (depth-of-field information) of the target person can be obtained by calculation from the disparity map, and then three-dimensional reconstruction of the human head is performed according to the human head depth information.
  • the final disparity value can be calculated from the effective disparity map; the head depth information of the target person can then be calculated according to the preset internal parameters of the first camera or of the second camera; and according to the human head depth information, three-dimensional reconstruction of the human head is performed in the preset three-dimensional space to obtain the three-dimensional human head of the current frame.
  • the head depth information of the target person can be calculated according to the internal parameters of the first camera, or according to the internal parameters of the second camera.
  • the above-mentioned final disparity value may be an average value of the effective disparity maps.
  • Camera intrinsic parameters can include camera focal length and baseline length.
  • coordinate transformation of the human head depth information may be performed to obtain the human head depth information in the coordinate system of the first camera or the second camera, so as to perform three-dimensional reconstruction of the human head in a three-dimensional space.
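The depth and reconstruction steps above follow the standard stereo relations Z = f·B/d and the pinhole back-projection. A minimal sketch; the focal length, baseline, and principal point values used in the test are assumed for illustration, not taken from the patent.

```python
import numpy as np

def head_depth(disparity, focal_px, baseline_m):
    """Depth Z of the head from the final disparity value: Z = f * B / d,
    where f is the focal length in pixels and B the baseline length."""
    return focal_px * baseline_m / disparity

def backproject(u, v, Z, focal_px, cx, cy):
    """Back-project an image point (u, v) at depth Z into the camera
    coordinate system, giving the 3D head position used for
    reconstruction.  (cx, cy) is the principal point."""
    X = (u - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return np.array([X, Y, Z])
```

Running `backproject` on the center of each paired head frame, with the depth from `head_depth`, yields the three-dimensional head of the current frame in the camera coordinate system.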
  • the three-dimensional head reconstruction of the target person in each frame of the first target image and the second target image is performed, so that the three-dimensional head sequence of the target person can be obtained.
  • FIG. 4 is a flowchart of another method for monitoring passenger flow provided by the embodiment of the present invention.
  • the three-dimensional space includes the demarcated ground, and the monitoring of passenger flow in the target area may be statistical monitoring of the number of passengers, such as the number of people entering and leaving the target area. As shown in Figure 4, the method includes the following steps:
  • the three-dimensional head of each target person can be projected onto the ground demarcated in the three-dimensional space, so as to obtain the projected trajectory of each target person.
  • the above-mentioned monitoring of the passenger flow in the target area may be the quantitative monitoring of the passenger flow.
  • the camera coordinate system and the world coordinate system can also be converted, to further transform the projected trajectory of the target person into the trajectory of the target person in the world coordinate system, so that the quantitative monitoring of the passenger flow in the target area is carried out according to the trajectory of the target person in the world coordinate system.
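The projection of each three-dimensional head onto the demarcated ground can be sketched as an orthogonal projection onto the plane n·x + d = 0. Treating it as an orthogonal (rather than vertical-axis) projection is an assumption; the source does not specify the projection type.

```python
import numpy as np

def project_to_ground(point, n, d):
    """Project a 3D head position onto the calibrated ground plane
    n·x + d = 0, where n is a unit normal.  The signed distance
    n·p + d is removed along the normal direction."""
    point = np.asarray(point, dtype=float)
    n = np.asarray(n, dtype=float)
    return point - (n.dot(point) + d) * n

def projected_trajectory(heads, n, d):
    """Project a 3D head sequence frame by frame, giving the projected
    trajectory of the target person on the demarcated ground."""
    return [project_to_ground(p, n, d) for p in heads]
```

The list returned by `projected_trajectory` is the ground trajectory whose state relative to the target calibration area is evaluated below.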
  • the ground calibrated in the three-dimensional space includes a target calibration area corresponding to the target area, and the state information of the projection trajectory relative to the target calibration area at each time sequence point can be calculated to obtain a state sequence between the projection trajectory and the target calibration area; according to the state sequence, the passenger flow of the target area is monitored.
  • the state sequence of the projected trajectory and the target calibration area includes the state information of each time sequence point.
  • the above state information can be divided into valid area status and invalid area status.
  • the above-mentioned target area can be the area on the left and right sides of the store door, that is, the above-mentioned target calibration area can be the fixed area on the left and right sides of the store door in the demarcated ground, and the above-mentioned effective area status can be further divided into:
  • a Outside the store door, it can indicate that the human head is projected on the outer side of the target calibration area at this time, and further indicates that the target person is in the outer area on both sides of the store door area.
  • c Outside the store, it can indicate that the human head is projected on the outside of the target calibration area at this time, and further indicates that the target person is outside the outside area of the store door, and the inside of the outside area of the store door is the inside area on both sides of the store door area corresponding to state b.
  • d Inside the store, it can indicate that the human head is projected on the inside of the target calibration area at this time, and further indicates that the target person is inside the inside area of the store door, and the outside of the inside area of the store door is the outside area on both sides of the store door area corresponding to state a.
  • n Uncertain area, which can include buffers near a and b, and areas very close and far away from the camera (usually caused by the matching error of the human head frame).
  • the above-mentioned state sequence may thus be composed of a, b, c, d, t, and n; a regular-expression search can be performed on the state sequence, and the passenger flow can then be counted.
  • if the state sequence of a target person is c...ca...ab...bd...dt, it indicates a process in which the target person goes from outside the store, to the outer door area, to the inner door area, to inside the store, and then disappears; this is counted as entering-store flow plus 1;
  • if the state sequence of a target person is d...db...ba...ac...ct, it indicates a process in which the target person goes from inside the store, to the inner door area, to the outer door area, to outside the store, and then disappears; this is counted as leaving-store flow plus 1;
  • if the state sequence of a target person is c...ca...ac...ca...ab...ba...ab...bd...dt, it indicates that the target person went from outside the store to the outer door area, back outside the store, to the outer door area again, to the inner door area, back to the outer door area, to the inner door area again, into the store, and then disappeared; although the person lingered twice on the way, they finally entered the store, so this is counted as entering-store flow plus 1. This avoids counting a person several times while they linger at the store door, improving the accuracy of the passenger count and hence of the passenger-flow monitoring.
  • alternatively, the above projected trajectory can be converted into the world coordinate system; the state information of the corresponding target person's trajectory relative to the target area is calculated at each time-sequence point to obtain the state sequence of the target person and the target area, and the passenger flow of the target area is then monitored according to that state sequence.
  • identifiers of the above states can be set according to the needs of the user, and should not be regarded as a limitation on the embodiments of the present invention.
  • the above a, b, c, d, t, and n may also be identified by 1, 2, 3, 4, 5, and 6.
  • passenger flow statistics are performed on the target person through the state sequence, which can increase the accuracy of the statistics.
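The enter/exit rules read off the state sequences above can be sketched as a small regular-expression search over each person's state string. This is a minimal illustration only: the patterns and the helper `count_flows` are hypothetical names, and uncertain `n` states are simply stripped before matching.

```python
import re

# Valid states: a = outside the door, b = inside the door,
# c = outside the store, d = inside the store; t = track ID disappeared.
ENTER = re.compile(r'c[ca]*a[ab]*b[bd]*d.*t')  # c...a...b...d...t => entered
EXIT = re.compile(r'd[db]*b[ba]*a[ac]*c.*t')   # d...b...a...c...t => left

def count_flows(state_sequences):
    """Count store entries/exits from per-person state strings."""
    entered = exited = 0
    for seq in state_sequences:
        seq = seq.replace('n', '')             # drop uncertain-area states
        if ENTER.fullmatch(seq):
            entered += 1                       # lingering still matches once
        elif EXIT.fullmatch(seq):
            exited += 1
    return entered, exited
```

With this formulation the lingering sequence c...ca...ac...ca...ab...ba...ab...bd...dt still matches the entry pattern exactly once, which is the de-duplication behaviour this section describes.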
  • FIG. 5 is a schematic structural diagram of a device for monitoring passenger flow according to an embodiment of the present invention. As shown in FIG. 5, the device includes:
  • an acquisition module 501 configured to acquire a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same moment and from different angles;
  • the processing module 502 is configured to perform human head detection on the first target image sequence and the second target image sequence, respectively, to obtain a first human head frame sequence and a second human head frame sequence, and the first human head frame sequence includes at least one a first head frame of the target person, the second head frame sequence including at least one second head frame of the target person;
  • the matching module 503 is configured to match the first human head frame and the second human head frame according to the time-sequence relationship between the first human head frame sequence and the second human head frame sequence, to obtain a paired human head frame sequence for each target person, where the paired human head frame sequence includes at least one paired human head frame of a target person;
  • a three-dimensional reconstruction module 504 configured to perform three-dimensional reconstruction of the human head in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person, where the three-dimensional human head sequence includes the three-dimensional human head of the target person;
  • the monitoring module 505 is configured to monitor the passenger flow of the target area according to the three-dimensional human head sequence.
  • the first target image sequence is acquired by a first camera
  • the second target image sequence is acquired by a second camera
  • the device further includes:
  • the calibration module 506 is configured to perform ground calibration under the coordinate system of the first camera or the second camera to obtain the calibrated ground;
  • the construction module 507 is configured to construct and obtain the three-dimensional space based on the marked ground.
  • the calibration module 506 includes:
  • the acquisition sub-module 5061 is used to acquire the calibration object information associated with the target area, and the calibration object information is the calibration object information in the coordinate system of the first camera or the second camera;
  • the first calibration sub-module 5062 is configured to perform ground calibration according to the calibration object information to obtain the calibrated ground.
  • the calibration module 506 includes:
  • the first calculation sub-module 5063 is used to calculate the ground feature points corresponding to the first camera and the second camera, triangulate the ground feature points, and obtain the three-dimensional space points corresponding to the ground feature points;
  • the second calibration sub-module 5064 is configured to perform plane parameter fitting on the three-dimensional space point to obtain the calibrated ground.
  • the three-dimensional reconstruction module 504 includes:
  • the second calculation submodule 5041 is used to calculate the effective disparity map of the first head frame and the second head frame in the paired head frame in the current frame;
  • the reconstruction sub-module 5042 is used for performing three-dimensional reconstruction of the human head in a preset three-dimensional space through the effective disparity map to obtain a three-dimensional human head of the current frame;
  • the sequence sub-module 5043 is configured to obtain the 3D human head sequence of the target person based on the 3D human head of the current frame.
  • the second calculation sub-module 5041 includes:
  • the first calculation unit 50411 is used to calculate the effective disparity interval according to the preset prior disparity;
  • the second calculation unit 50412 is configured to calculate the disparity map of the first human head frame and the second human head frame in the paired human head frame in the current frame, and determine whether the disparity map falls within the effective disparity interval;
  • the determining unit 50413 is configured to determine that the disparity map is an effective disparity map if the disparity map falls within the valid disparity interval.
  • the reconstruction sub-module 5042 includes:
  • a third calculation unit 50421, configured to calculate the final disparity value according to the effective disparity map;
  • the fourth calculation unit 50422 is used to calculate and obtain the head depth information of the target person according to the preset first camera internal parameter or second camera internal parameter;
  • the reconstruction unit 50423 is configured to perform three-dimensional reconstruction of the human head in the preset three-dimensional space according to the human head depth information to obtain the three-dimensional human head of the current frame.
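Units 50421 and 50422 together implement the standard stereo relation Z = f·b/d stated in the specification. A minimal sketch follows; the function name and the numeric camera parameters are illustrative assumptions, not values from the patent:

```python
def depth_from_disparity(effective_disparity_map, focal_px, baseline_m):
    """Average the effective disparity map into a final disparity value,
    then convert it to head depth with Z = f * b / d."""
    values = list(effective_disparity_map)
    final_disparity = sum(values) / len(values)   # final disparity value
    return focal_px * baseline_m / final_disparity

# e.g. an 800 px focal length, 0.10 m baseline and mean disparity 64 px
# give a head depth of 800 * 0.10 / 64 = 1.25 m.
```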
  • the monitoring module 505 includes:
  • the projection sub-module 5051 is used to project the three-dimensional human head in the three-dimensional human head sequence onto the calibrated ground to obtain the projected trajectory of the target person;
  • the monitoring sub-module 5052 is configured to monitor the passenger flow of the target area according to the projected trajectory.
  • the calibrated ground includes a target calibration area corresponding to the target area
  • the monitoring sub-module 5052 includes:
  • the fifth calculation unit 50521 is used to calculate the state information of the projection trajectory and the target calibration area at each timing point, and obtain the state sequence of the projection trajectory and the target calibration area;
  • the monitoring unit 50522 is configured to monitor the passenger flow of the target area according to the state sequence.
  • the device for monitoring passenger flow provided by the embodiment of the present invention can implement each process implemented by the method for monitoring passenger flow in the above method embodiments, and can achieve the same beneficial effects. To avoid repetition, details are not repeated here.
  • FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in FIG. 14, it includes: a memory 1402, a processor 1401, and a computer program stored on the memory 1402 and runnable on the processor 1401, wherein:
  • the processor 1401 is configured to call the computer program stored in the memory 1402, and perform the following steps:
  • acquiring a first target image sequence and a second target image sequence of a target area, the two sequences being acquired at the same moment from different angles;
  • performing human head detection on the first target image sequence and the second target image sequence respectively to obtain a first human head frame sequence and a second human head frame sequence, where the first human head frame sequence includes a first head frame of at least one target person and the second human head frame sequence includes a second head frame of at least one target person;
  • matching the first head frame and the second head frame according to the time-sequence relationship between the two sequences to obtain a paired head frame sequence for each target person, where the paired head frame sequence includes at least one paired head frame of the target person;
  • performing three-dimensional reconstruction of the human head in a preset three-dimensional space according to the paired head frame sequence to obtain a three-dimensional human head sequence of the target person, which includes the three-dimensional human head of the target person;
  • monitoring the passenger flow of the target area according to the three-dimensional human head sequence.
  • the first target image sequence is acquired by a first camera
  • the second target image sequence is acquired by a second camera
  • the processor 1401 further performs ground calibration in the coordinate system of the first camera or the second camera to obtain the calibrated ground, and constructs the three-dimensional space based on the calibrated ground.
  • the processor 1401 performs ground calibration in the coordinate system of the first camera or the second camera to obtain the calibrated ground, including:
  • the calibration object information is calibration object information in the coordinate system of the first camera or the second camera;
  • the ground is calibrated according to the calibration object information, and the calibrated ground is obtained.
  • the processor 1401 performs ground calibration in the coordinate system of the first camera or the second camera to obtain the calibrated ground, including: calculating the ground feature points corresponding to the first camera and the second camera, triangulating the ground feature points to obtain the corresponding three-dimensional space points, and fitting plane parameters to the three-dimensional space points to obtain the calibrated ground.
  • the three-dimensional reconstruction of the human head performed by the processor 1401 on the paired human head frame sequence in a preset three-dimensional space, to obtain the three-dimensional human head sequence of the target person, includes: calculating the effective disparity map of the first head frame and the second head frame in the paired head frame of the current frame; performing three-dimensional reconstruction of the human head in the preset three-dimensional space through the effective disparity map to obtain the three-dimensional human head of the current frame; and,
  • obtaining the three-dimensional human head sequence of the target person based on the three-dimensional human head of the current frame.
  • the calculation by the processor 1401 of the effective disparity map of the first head frame and the second head frame in the paired head frame of the current frame includes: calculating the effective disparity interval according to the preset prior disparity; calculating the disparity map of the first head frame and the second head frame in the paired head frame of the current frame, and judging whether the disparity map falls within the effective disparity interval; and,
  • if the disparity map falls within the effective disparity interval, determining that the disparity map is an effective disparity map.
  • the three-dimensional reconstruction of the human head performed by the processor 1401 in a preset three-dimensional space through the effective disparity map, to obtain the three-dimensional human head of the current frame, includes: calculating the final disparity value according to the effective disparity map; calculating the head depth information of the target person according to the preset intrinsic parameters of the first camera or the second camera; and,
  • performing three-dimensional reconstruction of the human head in the preset three-dimensional space according to the head depth information to obtain the three-dimensional human head of the current frame.
  • the three-dimensional space includes a calibrated ground
  • the monitoring of the passenger flow in the target area according to the three-dimensional human head sequence, executed by the processor 1401, includes: projecting the three-dimensional human head in the three-dimensional human head sequence onto the calibrated ground to obtain the projected trajectory of the target person, and monitoring the passenger flow of the target area according to the projected trajectory.
  • the calibrated ground includes a target calibration area corresponding to the target area, and the monitoring of passenger flow in the target area according to the projected trajectory includes: calculating the state information of the projected trajectory relative to the target calibration area at each time-sequence point to obtain the state sequence of the projected trajectory and the target calibration area, and monitoring the passenger flow of the target area according to the state sequence.
  • the electronic device provided in the embodiments of the present invention can implement the various processes implemented by the method for monitoring passenger flow in the above method embodiments, and can achieve the same beneficial effects. To avoid repetition, details are not repeated here.
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the processes of the passenger-flow monitoring method provided by the embodiments of the present invention, with the same beneficial effects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method, apparatus, electronic device and storage medium for monitoring passenger flow. The method comprises: acquiring a first target image sequence and a second target image sequence of a target area (101); performing human head detection on the first target image sequence and the second target image sequence respectively to obtain a first human head frame sequence and a second human head frame sequence (102); matching the first human head frame and the second human head frame according to the time-sequence relationship between the first human head frame sequence and the second human head frame sequence, to obtain a paired human head frame sequence for each target person (103), the paired human head frame sequence comprising at least one paired human head frame of a target person; performing three-dimensional reconstruction of the human head in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person (104), the three-dimensional human head sequence comprising the three-dimensional human head of the target person; and monitoring the passenger flow of the target area according to the three-dimensional human head sequence (105). The method improves the effectiveness of passenger-flow monitoring.

Description

Method, Apparatus, Electronic Device and Storage Medium for Monitoring Passenger Flow
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to a method, apparatus, electronic device and storage medium for monitoring passenger flow.
This application claims priority to the Chinese patent application No. 202011466662.X, entitled "Method, Apparatus, Electronic Device and Storage Medium for Monitoring Passenger Flow", filed with the Chinese Patent Office on December 14, 2020, the entire contents of which are incorporated herein by reference.
Background
With the development of artificial intelligence, offline stores have adopted image technology to obtain passenger-flow volume, conversion rates, in-store customer behaviour and similar information, in search of business models that allow precise marketing. In common image-based approaches, passenger-flow volume is obtained by drawing lines or boxes in a two-dimensional plane and counting whenever a person crosses the line or enters the box. However, such single counting logic cannot simultaneously accommodate the variability of store scenes, pedestrians' entry routes, pedestrian heights and other factors, leading to poor counting accuracy, and other information derived from these counts is correspondingly unreliable. Existing passenger-flow monitoring is therefore ineffective.
Technical Solution
Embodiments of the present invention provide a method for monitoring passenger flow that can improve the counting accuracy of passenger-flow volume and hence the effectiveness of passenger-flow monitoring.
第一方面,本发明实施例提供一种客流的监测方法,所述方法包括:
获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
根据所述三维人头序列,对所述目标区域进行客流的监测。
可选的,所述第一目标图像序列通过第一相机进行获取,所述第二目标图像序列通过第二相机进行获取,所述方法还包括:
在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面;
基于所述标定的地面,构建得到所述三维空间。
可选的,所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
获取与所述目标区域关联的标定物信息,所述标定物信息为所述第一相机或第二相机的坐标系下的标定物信息;
根据所述标定物信息进行地面标定,得到标定的地面。
可选的,所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
计算第一相机与第二相机所对应的地面特征点,对所述地面特征点进行三角化,得到所述地面特征点对应的三维空间点;
对所述三维空间点进行平面参数拟合,得到标定的地面。
可选的,所述将所述配对人头框序列在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,包括:
计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图;
通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头;
基于所述当前帧三维人头,得到目标人员的三维人头序列。
可选的,所述计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图,包括:
根据预设的先验视差,计算得到有效视差区间;
计算所述当前帧中所述配对人头框中第一人头框与第二人头框的视差图,并判断所述视差图是否落入所述有效视差区间;
若所述视差图落入所述有效视差区间,则判断所述视差图为有效视差图。
可选的,所述通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头,包括:
根据所述有效视差图,计算最终视差值;
根据预设的第一相机内参或第二相机内参,计算得到目标人员的人头深度信息;
根据所述人头深度信息,在所述预设的三维空间中进行人头三维重建,得到当前帧三维人头。
可选的,所述三维空间包括标定的地面,所述根据所述三维人头序列,对所述目标区域进行客流的监测,包括:
将所述三维人头序列中的三维人头投影到所述标定的地面,得到目标人员的投影轨迹;
根据所述投影轨迹,对所述目标区域进行客流的监测。
可选的,所述标定的地面包括与所述目标区域对应的目标标定区域,所述根据所述投影轨迹,对所述目标区域进行客流的监测,包括:
计算所述投影轨迹在每个时序点与所述目标标定区域的状态信息,得到所述投影轨迹与所述目标标定区域的状态序列;
根据所述状态序列,对所述目标区域进行客流的监测。
第二方面,本发明实施例还提供一种客流的监测装置,所述装置包括:
第一获取模块,用于获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
处理模块,用于分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
匹配模块,用于根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
三维重建模块,用于根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
监测模块,用于根据所述三维人头序列,对所述目标区域进行客流的监测。
第三方面,本发明实施例提供一种电子设备,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现本发明实施例提供的客流的监测方法中的步骤。
第四方面,本发明实施例提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现发明实施例提供的客流的监测方法中的步骤。
本发明实施例中,获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标序列为同一时刻不同角度采集得到;分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;根据所述三维人头序列,对所述目标区域进行客流的监测。通过目标人员不同角度的人头图像,提取到更为准确的目标人头信息用于三维重建,使得三维目标人头在三维空间中的位置更为准确,从而提高客流量的计数准确度,进而提高客流量监测效果。
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for monitoring passenger flow provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a method for constructing a three-dimensional space provided by an embodiment of the present invention;
FIG. 3 is a flowchart of a method for three-dimensional human head reconstruction provided by an embodiment of the present invention;
FIG. 4 is a flowchart of another method for monitoring passenger flow provided by an embodiment of the present invention;
FIG. 4a is a schematic diagram of the relationship between areas and states provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for monitoring passenger flow provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another apparatus for monitoring passenger flow provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a calibration module provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another calibration module provided by an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a three-dimensional reconstruction module provided by an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a second calculation sub-module provided by an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a reconstruction sub-module provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a monitoring module provided by an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a monitoring sub-module provided by an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Embodiments of the Invention
请参见图1,图1是本发明实施例提供的一种客流的监测方法的流程图,如图1所示,该方法用于定时或实时进行客流的监测,包括以下步骤:
101、获取目标区域的第一目标图像序列与第二目标图像序列。
在本发明实施例中,上述第一目标图像序列与上述第二目标图像序列为同一时刻不同角度采集得到。上述的第一目标图像序列与第二目标图像序列中包括至少1个目标人员。
可以通过两个不同拍摄角度的摄像头分别采集第一目标图像序列与第二目标图像序列,两个摄像头可以在安装的时候进行标定并且形成关联,以使两个摄像头处于同一坐标系进行拍摄,并使两个摄像头可以是同一时间进行拍摄。也可以通过标定好的双目摄像头采集第一目标图像序列与第二目标图像序列。在本发明实施例中,优选为通过标定好的双目摄像头来采集第一目标图像序列与第二目标图像序列,则第一目标图像序列与第二目标图像序列分别可以为左目图像序列与右目图像序列。
上述第一目标图像序列与第二目标图像序列可以是双目摄像头实时采集到的连续的帧图像序列(视频流图像),也可以是双目摄像头历史采集到的连续的帧图像。
102、分别对第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列。
在本发明实施例中,上述第一人头框序列包括至少一个目标人员的第一人头框,上述第二人头框序列包括至少一个目标人员的第二人头框。第一人头框对应于第一目标图像序列,第二人头框对应于第二目标图像序列。
可以通过人头检测模型对上述第一目标图像序列与第二目标图像序列分别进行人头检测,得到第一人头框序列与第二人头框序列。具体的,上述可以通过人头检测模型对上述第一目标图像序列与第二目标图像序列逐帧进行人头检测,得到每帧图像对应的人头框。
可选的,可以通过人头跟踪模型对第一人头框序列进行人头框跟踪,得到根据每帧第一目标图像对应的第一人头框进行跟踪的第一人头框序列。在上述第一人头框序列中,包括每帧第一目标图像中检测到的第一人头框。当然,也可以是通过人头跟踪模型对第二人头框序列进行人头框跟踪,得到根据每帧第二目标图像对应的第二人头框进行跟踪的第二人头框序列。另外,还可以是通过人头跟踪模型对第一人头框序列进行人头框跟踪,同时,通过人头跟踪模型对第二人头框序列进行人头框跟踪,得到根据每帧第一目标图像对应的第一人头框进行跟踪的第一人头框序列,以及根据每帧第二目标图像对应的第二人头框进行跟踪的第二人头框序列。
在一种可能的实施例中,上述第一人头框序列可以是多个目标人员对应的第一人头框序列,上述第二人头框序列可以是多个目标人员对应的第二人头框序列。进一步的,上述第一人头框可以是多个第一人头框,对应于多个目标人员,上述第二人头框也可以是多个第二人头框,对应于多个目标人员。
更进一步的,可以根据人头框跟踪算法,对每个目标人员的人头框进行ID分配,即每个目标人员的人头框对应一个人头框ID,该人头框ID用于识别对应人头框是否属于同一个人,同一个人头框ID,则对应于同一个目标人员的人头框。比如,在只对第一人头框进行跟踪时,可以为不同目标人员分配不同的人头框ID,然后根据不同的人头框ID,得到不同目标人员的第一人头框序列。进一步,可以根据第一人头框与第二人头框的相似度,得到不同目标人员的第二人头框序列。
103、根据第一人头框序列与第二人头框序列的时序关系,将第一人头框与第二人头框进行匹配,得到每个目标人员的配对人头框序列。
在本发明实施例中,上述配对人头框序列包括至少一个目标人员的配对人头框。
上述第一人头框序列与第二人头框序列的时序关系可以通过第一目标图像序列与第二目标图像序列中各个图像帧的时序进行确定。在对目标图像序列逐帧进行人头检测时,则得到的人头框具有与目标图像序列相同的时序。
由于第一目标图像序列与第二目标图像序列为同时进行采集得到的,第一目标图像序列与第二目标图像序列中对应的帧图像是同步的,进而对第一目标图像序列与第二目标图像序列提取到的第一人头框序列与第二人头框序列也具有同步属性。因此,可以将同步的第一人头框与第二人头框进行相似度匹配,得到每个目标人员的配对人头框序列。
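The per-frame pairing of synchronised first and second head frames described above can be illustrated with a greedy highest-similarity-first assignment. Here plain 2D IoU stands in for the similarity measure, which the embodiment does not pin down; `pair_head_boxes` and its threshold are hypothetical:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def pair_head_boxes(first_boxes, second_boxes, min_iou=0.3):
    """Greedily pair synchronised head boxes by descending similarity."""
    cand = sorted(((iou(f, s), i, j)
                   for i, f in enumerate(first_boxes)
                   for j, s in enumerate(second_boxes)),
                  reverse=True)
    used_i, used_j, pairs = set(), set(), []
    for score, i, j in cand:
        if score >= min_iou and i not in used_i and j not in used_j:
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return pairs
```

Running this over every synchronised frame pair yields the paired head-frame sequence for each person.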
104、根据配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列。
在本发明实施例中,上述的配对人头框序列中包括每帧第一目标图像与第二目标图像对应的配对人头框,每个配对人头框中包括同一个目标人员的第一人头框与第二人头框。可以通过同一个目标人员的第一人头框与第二人头框,来计算第一人头框与第二人头框中目标人员的人头景深信息,根据人头的景深信息,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头,对目标人员的配对人头框序列中每个配对人头框都进行人头三维重建,则可以得到目标人员的三维人头序列。
进一步的,可以根据第一人头框在第一目标图像中提取对应的第一人头图像,根据第二人头框在第二目标图像中提取对应的第二人头图像,第一人头图像与第二人头图像为同一目标人员在同一时刻的人头图像。可以通过同一个目标人员的第一人头图像与第二人头图像来计算目标人员的人头景深信息,根据人头的景深信息,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头,对目标人员的配对人头框序列中每个配对人头框都进行人头三维重建,则可以得到目标人员的三维人头序列。
105、根据三维人头序列,对目标区域进行客流的监测。
在本发明实施例中,由于目标人员是移动的,因此,重建得到的目标人员的三维人头在三维空间中的位置也是会发生对应的位置变化,从而形成三维人头在三维空间中的移动轨迹,可以根据三维人头在三维空间中的移动轨迹,得到目标人员的移动轨迹。比如,可以将三维人头在三维空间中的移动轨迹映射到现实空间中,得到目标人员在现实空间的移动轨迹来对目标区域进行客流的监测。也可以在三维空间中映射出目标区域在三维空间的位置,根据三维人头在三维空间中的移动轨迹与目标区域在三维空间的位置,来对目标区域进行客流的监测。
上述对目标区域进行客流的监测可以是客流量的数量统计监测,比如可以是进出目标区域客流的数量统计,也可以是对客流轨迹进行监测。
在本发明实施例中,获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标序列为同一时刻不同角度采集得到;分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;根据所述三维人头序列,对所述目标区域进行客流的监测。通过目标人员不同角度的人头图像,提取到更为准确的目标人头信息用于三维重建,使得三维目标人头在三维空间中的位置更为准确,从而提高客流量的计数准确度,进而提高客流量监测效果。
可选的,在本发明实施例中,上述第一目标图像序列通过第一相机进行获取,上述第二目标图像序列通过第二相机进行获取,可以根据第一相机与第二相机的设置安装来构建适于三维人头重建的三维空间。
具体的,三维空间包括标定的地面,请参见图2,图2是本发明实施例提供的一种构建三维空间方法的流程图,如图2所示,包括以下步骤:
201、在第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面。
在本发明实施例中,第一相机与第二相机可以分别是双目相机中的左目相机和右目相机。在对第一相机与第二相机进行初始化时,对第一相机与第二相机的内参与外参进行标定,以及对地面进行标定,得到标定的地面。
上述对第一相机与第二相机的内参与外参进行标定可以采用棋盘格标定方法。具体的,分别用棋盘格做单相机标定以及双相机标定,得到第一相机的内参、第二相机的内参,以及第一相机与第二相机之间的外参。
上述标定的地面可以是基于第一相机的坐标系所得到,也可以是基于第二相机的坐标系所得到,所述第一相机所采集到的图像可以与第二相机所采集到的图像进行坐标变换,具体可以通过第一相机的内参、第二相机的内参,以及第一相机与第二相机之间的外参,将第一相机所采集到的图像的坐标变换到第二相机的坐标系中。因此,标定的地面可以选择第一相机或第二相机中任意一个相机的坐标系进行标定。
在一种可能的实施例中,在对地面进行标定时,可以获取与目标区域关联的标定物信息,标定物信息为与地面在同一个平面的标定物的信息。上述标定物信息为上述第一相机或第二相机的坐标系下的标定物信息;根据上述标定物信息进行地面标定,得到标定的地面。具体的,上述标定物信息包括角点信息,可以根据该角点信息利用PnP(Perspective-n-Point)算法计算出相机位姿,从而得到地面参数,根据该地面参数得到标定的地面。比如,可以是在目标区域中或目标区域附近的地面上设置一个二维码,将该二维码当做特定的标定物信息与目标区域进行关联,这样,可以通过找到该二维码的4个角点,将该二维码的4个角点进行亚像素细化,并通过4个角点PnP算法得到相机位姿,从而得到标定的地面。在PnP算法中,可以通过已知的相机内参,以及4个角点在世界坐标系下的坐标,以及在图像坐标系下的坐标,计算得到相机位姿,从而得到地面参数,根据该地面参数得到标定的地面。
在另一种可能的实施例中,在对地面进行标定时,可以计算第一相机或第二相机所对应的地面特征点,对地面特征点进行三角化,得到地面特征点对应的三维空间点;对三维空间点进行平面参数拟合,得到标定的地面。具体的,可以在计算提取到地面特征点后,地面特征点进行三角化,得到特征点在世界坐标系下的点云坐标,可以通过3D Hough变换点云平面检测算法和ransac随机一致性采样算法,对平面参数进行拟合,得到地面参数,根据地面参数得到标定的地面。
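The triangulate-then-fit step in the paragraph above (3D Hough / RANSAC plane detection over the triangulated ground points) can be sketched with a minimal RANSAC plane fit. This is an illustrative stand-in rather than the exact algorithm of the embodiment, and `fit_ground_plane` with its tolerance values is a hypothetical helper:

```python
import random
import numpy as np

def fit_ground_plane(points, iters=200, tol=0.02, seed=0):
    """RANSAC fit of the plane n.x + d = 0 to Nx3 triangulated points;
    returns (unit normal n, offset d) of the best-supported plane."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_plane, best_inliers = None, -1
    for _ in range(iters):
        i, j, k = rng.sample(range(len(pts)), 3)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -float(n.dot(pts[i]))
        inliers = int((np.abs(pts @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_plane, best_inliers = (n, d), inliers
    return best_plane
```

The random sampling makes the fit robust to the occasional mis-triangulated point, which is why the description pairs plane fitting with random-consensus sampling.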
在另一种可能的实施例中,可以将上述两种地面标定方法进行结合,比如,通过特征提取,提取到地面标定物信息,该地面标定物信息为与地面在同一个平面的标定物。通过计算提取到地面标定物特征点后,对地面标定物特征点进行三角化,得到地面标定物特征点在世界坐标系下的点云坐标,可以通过3D Hough变换点云平面检测算法和ransac随机一致性采样算法,对平面参数进行拟合,得到地面参数,根据地面参数得到标定的地面。
202、基于标定的地面,构建得到三维空间。
在本发明实施例中,标定的地面为基于第一相机的坐标系或基于第二相机的坐标系所得,因此,构建得到的三维空间也是基于第一相机或基于第二相机的三维空间。由于三维空间是基于相机坐标系进行建立的,所以通过该三维空间进行三维人头重建,可以得到更准确的三维人头信息。
可选的,请参见图3,图3是本发明实施例提供的一种三维人头重建方法的流程图,如图3所示,包括以下步骤:
301、计算当前帧中配对人头框中第一人头框与第二人头框的有效视差图。
在本发明实施例中,上述视差图指的是两个图像之间的距离,相同的视差代表对应物体离相机的距离相同。
具体的,可以根据预设的先验视差,计算得到有效视差区间;以及计算当前帧中配对人头框中第一人头框与第二人头框的视差图,并判断视差图是否落入有效视差区间;若视差图落入有效视差区间,则判断视差图为有效视差图。可以通过预设的视差阈值计算得到有效视差区间,比如,假设先验视差为64,阈值为10,则有效视差区间为54-74,即当图像的视差图对应的数值在该有效视差区间的54-74内时,该图像的视差图为有效视差图。
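The interval test in the paragraph above (prior disparity 64, threshold 10, giving the valid interval [54, 74], plus the SGBM parameter round(prior/16)+1) is simple enough to state directly in code; a minimal sketch with hypothetical helper names:

```python
def valid_disparity_interval(prior_disparity, threshold):
    """Effective disparity interval around the prior, e.g. 64±10 -> (54, 74)."""
    return prior_disparity - threshold, prior_disparity + threshold

def is_effective_disparity(mean_disparity, interval):
    """A disparity map is effective when its value falls inside the interval."""
    low, high = interval
    return low <= mean_disparity <= high

def sgbm_adaptive_param(prior_disparity):
    """Adaptive SGBM parameter: prior disparity / 16, rounded, plus one."""
    return round(prior_disparity / 16) + 1
```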
可选的,可以通过SGBM(Semi-Global Block Matching)算法计算第一人头框与第二人头框的视差图,SGBM算法中的自适应参数可以根据先验视差进行确定,可以对先验视差除以16结果四舍五入然后加一作为SGBM算法的自适应参数。
需要说明的是,上述第一人头框中包括目标人员在第一目标图像中的人头图像,第二人头框中包括目标人员在第二目标图像中的人头图像,第一人头框与第二人头框的视差图可以理解为目标人员在第一目标图像中的人头图像与目标人员在第二目标图像中的人头图像之间的视差。
302、通过有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头。
在本发明实施例中,可以通过有效视差图,计算得到目标人员的人头深度信息(也可以称为人头景深信息),进而根据人头深度信息进行人头三维重建。
进一步的,可以根据有效视差图,计算最终视差值;根据预设的第一相机内参或第二相机内参,计算得到目标人员的人头深度信息;根据人头深度信息,在预设的三维空间中进行人头三维重建,得到当前帧三维人头。当三维空间是基于第一相机的坐标系进行构建时,则可以根据第一相机内参计算目标人员的人头深度信息,当三维空间是基于第二相机的坐标系进行构建时,则可以根据第二相机内参计算目标人员的人头深度信息。上述最终视差值可以是有效视差图的平均值。相机内参可以包括相机焦距、基线长度。通过相机内参以及最终视差值,计算得到目标人员的人头深度信息,具体可以通过式子Z=fb/d进行计算,其中,Z代表深度,f代表焦距,b代表基线长度,d代表最终视差值。
可选的,在得到人头深度信息后,可以将人头深度信息进行坐标变换,得到第一相机或第二相机坐标系下的人头深度信息,用以在三维空间中进行人头三维重建。
303、基于当前帧三维人头,得到目标人员的三维人头序列。
在本发明实施例中,会对每帧第一目标图像与第二目标图像中的目标人员进行三维人头重建,这样可以得到目标人员的三维人头序列。
可选的,请参见图4,图4是本发明实施例提供的另一种客流的监测方法的流程图,本发明实施例中,三维空间包括标定的地面,对目标区域进行客流的监测可以是客流量的数量统计监测,比如可以是进出目标区域客流的数量统计。如图4所示,包括以下步骤:
401、将三维人头序列中的三维人头投影到标定的地面,得到目标人员的投影轨迹。
若目标人员为多个,则可以将每个目标人员的三维人头投影到三维空间中标定的地面,从而得到每个目标人员的投影轨迹。
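Projecting each 3D head onto the calibrated ground amounts to dropping the head centre along the plane normal onto the plane n·x + d = 0; a minimal numpy sketch (`project_to_ground` is a hypothetical helper):

```python
import numpy as np

def project_to_ground(head_centres, n, d):
    """Orthogonally project 3D head centres onto the plane n.x + d = 0,
    where n is the plane's unit normal; applying this frame by frame
    yields the projected trajectory of each target person."""
    pts = np.asarray(head_centres, dtype=float)
    n = np.asarray(n, dtype=float)
    signed_dist = pts @ n + d          # signed distance of each centre
    return pts - np.outer(signed_dist, n)
```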
402、根据投影轨迹,对目标区域进行客流的监测。
上述的对目标区域进行客流的监测可以是客流量的数量统计监测。
可选的,由于三维空间中标定的地面是基于第一相机坐标系或第二相机坐标系进行构建的,因此,也可以将相机坐标系与世界坐标系进行转换,进一步将目标人员的投影轨迹转换为目标人员在世界坐标系下的轨迹,从而根据目标人员在世界坐标系下的轨迹,对目标区域进行客流量的数量统计监测。
在一种可能的实施例中,三维空间中标定的地面包括与目标区域对应的目标标定区域,可以计算投影轨迹在每个时序点与目标标定区域的状态信息,得到投影轨迹与目标标定区域的状态序列;根据状态序列,对目标区域进行客流的监测。
比如,以店铺的进出门为例,统计监测进出店的客流量,投影轨迹与目标标定区域的状态序列中包括各个时序点状态信息,上述状态信息可以分为有效区域状态和无效区域状态,具体的,如图4a所示,上述目标区域可以是店门左右两侧的区域,即上述目标标定区域可以是标定的地面中店门左右两侧的固定区域,上述有效区域状态还可以分为:
a:店门外,可以表示此时人头投影在目标标定区域的靠外的一侧,进一步表示目标人员在店门区域两侧的外侧区域。
b:店门内,可以表示此时人头投影在目标标定区域的靠内的一侧,进一步表示目标人员在店门区域两侧的内侧区域。
c:店外,可以表示此时人头投影在目标标定区域的外侧,进一步表示目标人员在店门外侧区域的外侧,店门外侧区域的内侧为b状态对应的店门区域两侧的内侧区域。
d:店内,可以表示此时人头投影在目标标定区域的内侧,进一步表示目标人员在店门内侧区域的内侧,店门内侧区域的外侧为a状态对应的店门区域两侧的外侧区域。
无效状态:
t:id消失,可以表示此时目标人员脱离相机的拍摄范围。
n:不确定区域,可以包含a、b附近缓冲区及距离相机极近、极远区域(通常由人头框匹配错误引起)。
此时,上述的状态序列可以由a、b、c、d、t、n组成,根据该状态序列可以进行正则搜索,进而进行客流量的计数统计。例如,一个目标人员的状态序列为c…ca…ab…bd…dt,则可以说明目标人员由店外、到店门外侧、到店门内侧、到店内、到消失的一个过程,可以计为进店客流加1;一个目标人员的状态序列为d…db…ba…ac…ct,则可以说明目标人员由店内、到店门内侧、到店门外侧、到店外、到消失的一个过程,可以计为出店客流加1;一个目标人员的状态序列为c…ca…ac…ca…ab…ba…ab…bd…dt,则可以说明目标人员由店外、到店门外侧、到店外、到店门外侧、到店门内侧、到店门外侧、到店门内侧、到店内、到消失的一个过程,中间虽然有由店外、到店门外侧、到店外以及店门内侧、到店门外侧、到店门内侧两个徘徊过程,但目标人员最终还是进入了店内,可以计为进店客流加1。这样,可以避免一个人员在店门处徘徊时,多次进行客流计数的问题,提高客流计数的准确度,进而提高客流监测的准确性。
当然,也可以将上述投影轨迹转换到世界坐标系中,计算对应目标人员的轨迹在每个时序点与目标区域的状态信息,得到目标人员与目标区域的状态序列,再根据该状态序列来对目标区域进行客流的监测。
需要说明的是,上述各个状态的标识可以根据用户的需要进行设置,而不应视为是对本发明实施例的限定。比如,上述a、b、c、d、t、n也可以通过1、2、3、4、5、6来进行标识。
在本发明实施例中,通过状态序列来对目标人员进行客流统计,可以增加统计的准确度。
请参见图5,图5是本发明实施例提供的一种客流的监测装置的结构示意图,如图5所示,所述装置包括:
获取模块501,用于获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
处理模块502,用于分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
匹配模块503,用于根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
三维重建模块504,用于根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
监测模块505,用于根据所述三维人头序列,对所述目标区域进行客流的监测。
可选的,如图6所示,所述第一目标图像序列通过第一相机进行获取,所述第二目标图像序列通过第二相机进行获取,所述装置还包括:
标定模块506,用于在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面;
构建模块507,用于基于所述标定的地面,构建得到所述三维空间。
可选的,如图7所示,所述标定模块506,包括:
获取子模块5061,用于获取与所述目标区域关联的标定物信息,所述标定物信息为所述第一相机或第二相机的坐标系下的标定物信息;
第一标定子模块5062,用于根据所述标定物信息进行地面标定,得到标定的地面。
可选的,如图8所示,所述标定模块506,包括:
第一计算子模块5063,用于计算第一相机与第二相机所对应的地面特征点,对所述地面特征点进行三角化,得到所述地面特征点对应的三维空间点;
第二标定子模块5064,用于对所述三维空间点进行平面参数拟合,得到标定的地面。
可选的,如图9所示,所述三维重建模块504,包括:
第二计算子模块5041,用于计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图;
重建子模块5042,用于通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头;
序列子模块5043,用于基于所述当前帧三维人头,得到目标人员的三维人头序列。
可选的,如图10所示,所述第二计算子模块5041,包括:
第一计算单元50411,用于根据预设的先验视差,计算得到有效视差区间;
第二计算单元50412,用于计算所述当前帧中所述配对人头框中第一人头框与第二人头框的视差图,并判断所述视差图是否落入所述有效视差区间;
判断单元50413,用于若所述视差图落入所述有效视差区间,则判断所述视差图为有效视差图。
可选的,如图11所示,所述重建子模块5042,包括:
第三计算单元50421,用于根据所述有效视差图,计算最终视差值;
第四计算单元50422,用于根据预设的第一相机内参或第二相机内参,计算得到目标人员的人头深度信息;
重建单元50423,用于根据所述人头深度信息,在所述预设的三维空间中进行人头三维重建,得到当前帧三维人头。
可选的,如图12所示,所述监测模块505,包括:
投影子模块5051,用于将所述三维人头序列中的三维人头投影到所述标定的地面,得到目标人员的投影轨迹;
监测子模块5052,用于根据所述投影轨迹,对所述目标区域进行客流的监测。
可选的,如图13所示,所述标定的地面包括与所述目标区域对应的目标标定区域,所述监测子模块5052,包括:
第五计算单元50521,用于计算所述投影轨迹在每个时序点与所述目标标定区域的状态信息,得到所述投影轨迹与所述目标标定区域的状态序列;
监测单元50522,用于根据所述状态序列,对所述目标区域进行客流的监测。
本发明实施例提供的客流的监测装置能够实现上述方法实施例中客流的监测方法实现的各个过程,且可以达到相同的有益效果。为避免重复,这里不再赘述。
参见图14,图14是本发明实施例提供的一种电子设备的结构示意图,如图14所示,包括:存储器1402、处理器1401及存储在所述存储器1402上并可在所述处理器1401上运行的计算机程序,其中:
处理器1401用于调用存储器1402存储的计算机程序,执行如下步骤:
获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
根据所述三维人头序列,对所述目标区域进行客流的监测。
可选的,所述第一目标图像序列通过第一相机进行获取,所述第二目标图像序列通过第二相机进行获取,处理器1401还执行包括:
在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面;
基于所述标定的地面,构建得到所述三维空间。
可选的,处理器1401执行的所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
获取与所述目标区域关联的标定物信息,所述标定物信息为所述第一相机或第二相机的坐标系下的标定物信息;
根据所述标定物信息进行地面标定,得到标定的地面。
可选的,处理器1401执行的所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
计算第一相机与第二相机所对应的地面特征点,对所述地面特征点进行三角化,得到所述地面特征点对应的三维空间点;
对所述三维空间点进行平面参数拟合,得到标定的地面。
可选的,处理器1401执行的所述将所述配对人头框序列在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,包括:
计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图;
通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头;
基于所述当前帧三维人头,得到目标人员的三维人头序列。
可选的,处理器1401执行的所述计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图,包括:
根据预设的先验视差,计算得到有效视差区间;
计算所述当前帧中所述配对人头框中第一人头框与第二人头框的视差图,并判断所述视差图是否落入所述有效视差区间;
若所述视差图落入所述有效视差区间,则判断所述视差图为有效视差图。
可选的,处理器1401执行的所述通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头,包括:
根据所述有效视差图,计算最终视差值;
根据预设的第一相机内参或第二相机内参,计算得到目标人员的人头深度信息;
根据所述人头深度信息,在所述预设的三维空间中进行人头三维重建,得到当前帧三维人头。
可选的,所述三维空间包括标定的地面,处理器1401执行的所述根据所述三维人头序列,对所述目标区域进行客流的监测,包括:
将所述三维人头序列中的三维人头投影到所述标定的地面,得到目标人员的投影轨迹;
根据所述投影轨迹,对所述目标区域进行客流的监测。
可选的,所述标定的地面包括与所述目标区域对应的目标标定区域,所述根据所述投影轨迹,对所述目标区域进行客流的监测,包括:
计算所述投影轨迹在每个时序点与所述目标标定区域的状态信息,得到所述投影轨迹与所述目标标定区域的状态序列;
根据所述状态序列,对所述目标区域进行客流的监测。
本发明实施例提供的电子设备能够实现上述方法实施例中客流的监测方法实现的各个过程,且可以达到相同的有益效果,为避免重复,这里不再赘述。
本发明实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现本发明实施例提供的客流的监测方法的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。

Claims (12)

  1. 一种客流的监测方法,其特征在于,包括以下步骤:
    获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
    分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
    根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
    根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
    根据所述三维人头序列,对所述目标区域进行客流的监测。
  2. 如权利要求1所述的方法,其特征在于,所述第一目标图像序列通过第一相机进行获取,所述第二目标图像序列通过第二相机进行获取,所述方法还包括:
    在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面;
    基于所述标定的地面,构建得到所述三维空间。
  3. 如权利要求2所述的方法,其特征在于,所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
    获取与所述目标区域关联的标定物信息,所述标定物信息为所述第一相机或第二相机的坐标系下的标定物信息;
    根据所述标定物信息进行地面标定,得到标定的地面。
  4. 如权利要求2所述的方法,其特征在于,所述在所述第一相机或第二相机的坐标系下,进行地面标定,得到标定的地面,包括:
    计算第一相机或第二相机所对应的地面特征点,对所述地面特征点进行三角化,得到所述地面特征点对应的三维空间点;
    对所述三维空间点进行平面参数拟合,得到标定的地面。
  5. 如权利要求1所述的方法,其特征在于,所述将所述配对人头框序列在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,包括:
    计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图;
    通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头;
    基于所述当前帧三维人头,得到目标人员的三维人头序列。
  6. 如权利要求5所述的方法,其特征在于,所述计算当前帧中所述配对人头框中第一人头框与第二人头框的有效视差图,包括:
    根据预设的先验视差,计算得到有效视差区间;
    计算所述当前帧中所述配对人头框中第一人头框与第二人头框的视差图,并判断所述视差图是否落入所述有效视差区间;
    若所述视差图落入所述有效视差区间,则判断所述视差图为有效视差图。
  7. 如权利要求6所述的方法,其特征在于,所述通过所述有效视差图,在预设的三维空间中进行人头三维重建,得到当前帧三维人头,包括:
    根据所述有效视差图,计算最终视差值;
    根据预设的第一相机内参或第二相机内参,计算得到目标人员的人头深度信息;
    根据所述人头深度信息,在所述预设的三维空间中进行人头三维重建,得到当前帧三维人头。
  8. 如权利要求1至7中任一所述的方法,其特征在于,所述三维空间包括标定的地面,所述根据所述三维人头序列,对所述目标区域进行客流的监测,包括:
    将所述三维人头序列中的三维人头投影到所述标定的地面,得到目标人员的投影轨迹;
    根据所述投影轨迹,对所述目标区域进行客流的监测。
  9. 如权利要求8所述的方法,其特征在于,所述标定的地面包括与所述目标区域对应的目标标定区域,所述根据所述投影轨迹,对所述目标区域进行客流的监测,包括:
    计算所述投影轨迹在每个时序点与所述目标标定区域的状态信息,得到所述投影轨迹与所述目标标定区域的状态序列;
    根据所述状态序列,对所述目标区域进行客流的监测。
  10. 一种客流的监测装置,其特征在于,所述装置包括:
    第一获取模块,用于获取目标区域的第一目标图像序列与第二目标图像序列,所述第一目标图像序列与所述第二目标图像序列为同一时刻不同角度采集得到;
    处理模块,用于分别对所述第一目标图像序列与第二目标图像序列进行人头检测,得到第一人头框序列与第二人头框序列,所述第一人头框序列包括至少一个目标人员的第一人头框,所述第二人头框序列包括至少一个目标人员的第二人头框;
    匹配模块,用于根据所述第一人头框序列与第二人头框序列的时序关系,将所述第一人头框与所述第二人头框进行匹配,得到每个目标人员的配对人头框序列,所述配对人头框序列包括至少一个目标人员的配对人头框;
    三维重建模块,用于根据所述配对人头框序列,在预设的三维空间中进行人头三维重建,得到目标人员的三维人头序列,所述三维人头序列中包括目标人员的三维人头;
    监测模块,用于根据所述三维人头序列,对所述目标区域进行客流的监测。
  11. 一种电子设备,其特征在于,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求1至9中任一项所述的客流的监测方法中的步骤。
  12. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至9中任一项所述的客流的监测方法中的步骤。
PCT/CN2021/114965 2020-12-14 2021-08-27 客流的监测方法、装置、电子设备及存储介质 WO2022127181A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011466662.X 2020-12-14
CN202011466662.XA CN112633096A (zh) 2020-12-14 2020-12-14 客流的监测方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022127181A1 true WO2022127181A1 (zh) 2022-06-23

Family

ID=75312656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114965 WO2022127181A1 (zh) 2020-12-14 2021-08-27 客流的监测方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN112633096A (zh)
WO (1) WO2022127181A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633096A (zh) * 2020-12-14 2021-04-09 Shenzhen Intellifusion Technologies Co., Ltd. Passenger flow monitoring method and apparatus, electronic device, and storage medium
CN112651571A (zh) * 2020-12-31 2021-04-13 Shenzhen Intellifusion Technologies Co., Ltd. Method and apparatus for predicting shopping-mall passenger flow, electronic device, and storage medium
CN113326830B (zh) * 2021-08-04 2021-11-30 Beijing Vion Intelligent Technology Co., Ltd. Passenger flow statistics model training method and passenger flow statistics method based on top-view images
CN114677651B (zh) * 2022-05-30 2022-09-27 Shandong Extreme Vision Technology Co., Ltd. Passenger flow statistics method based on low-quality, low-frame-rate video, and related apparatus

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103455792A (zh) * 2013-08-20 2013-12-18 Shenzhen Feiruisi Technology Co., Ltd. Passenger flow statistics method and system
CN104103077A (zh) * 2014-07-29 2014-10-15 Zhejiang Uniview Technologies Co., Ltd. Head detection method and apparatus
CN106709432A (zh) * 2016-12-06 2017-05-24 Chengdu Tongjia Youbo Technology Co., Ltd. Head detection and counting method based on binocular stereo vision
WO2018135510A1 (ja) * 2017-01-19 2018-07-26 Panasonic Intellectual Property Corporation of America Three-dimensional reconstruction method and three-dimensional reconstruction device
CN108446611A (zh) * 2018-03-06 2018-08-24 Shenzhen Tumin Intelligent Video Co., Ltd. Binocular-image bus passenger flow calculation method associated with door state
CN112633096A (zh) * 2020-12-14 2021-04-09 Shenzhen Intellifusion Technologies Co., Ltd. Passenger flow monitoring method and apparatus, electronic device, and storage medium

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
EP2518661A3 (en) * 2011-04-29 2015-02-11 Tata Consultancy Services Limited System and method for human detection and counting using background modeling, hog and haar features
JP2013093013A (ja) * 2011-10-06 2013-05-16 Ricoh Co Ltd Image processing device and vehicle
CN104662896B (zh) * 2012-09-06 2017-11-28 Nokia Technologies Oy Apparatus and method for image processing
KR20140108828A (ko) * 2013-02-28 2014-09-15 Electronics and Telecommunications Research Institute Camera tracking apparatus and method
CN104915965A (zh) * 2014-03-14 2015-09-16 Huawei Technologies Co., Ltd. Camera tracking method and apparatus
CN105160649A (zh) * 2015-06-30 2015-12-16 Shanghai Jiao Tong University Multi-target tracking method and system based on kernel-function unsupervised clustering
CN107133988B (zh) * 2017-06-06 2020-06-02 iFlytek Co., Ltd. Calibration method and calibration system for cameras in a vehicle-mounted surround-view system
CN109191504A (zh) * 2018-08-01 2019-01-11 Nanjing University of Aeronautics and Astronautics Unmanned aerial vehicle target tracking method
CN109785396B (zh) * 2019-01-23 2021-09-28 Institute of Automation, Chinese Academy of Sciences Binocular-camera-based writing posture monitoring method, system and apparatus
CN110222673B (zh) * 2019-06-21 2021-04-06 Hangzhou Yufan Intelligent Technology Co., Ltd. Passenger flow statistics method based on head detection
CN111028271B (zh) * 2019-12-06 2023-04-14 Haoyun Technologies Co., Ltd. Multi-camera person three-dimensional positioning and tracking system based on human skeleton detection
CN111160243A (zh) * 2019-12-27 2020-05-15 Shenzhen Intellifusion Technologies Co., Ltd. Passenger flow statistics method and related products
CN111354077B (zh) * 2020-03-02 2022-11-18 Southeast University Binocular-vision-based three-dimensional face reconstruction method
CN111899282B (zh) * 2020-07-30 2024-05-14 Ping An Technology (Shenzhen) Co., Ltd. Pedestrian trajectory tracking method and apparatus based on binocular camera calibration


Also Published As

Publication number Publication date
CN112633096A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2022127181A1 (zh) Passenger flow monitoring method and apparatus, electronic device, and storage medium
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
CN107240124B (zh) 基于时空约束的跨镜头多目标跟踪方法及装置
JP2019075156A (ja) 多因子画像特徴登録及び追尾のための方法、回路、装置、システム、及び、関連するコンピュータで実行可能なコード
JP6295645B2 (ja) 物体検出方法及び物体検出装置
CN104966062B (zh) 视频监视方法和装置
US9189859B2 (en) 3D image generation
CN109949347B (zh) 人体跟踪方法、装置、***、电子设备和存储介质
US20190051036A1 (en) Three-dimensional reconstruction method
Salih et al. Depth and geometry from a single 2d image using triangulation
CN110458897A (zh) 多摄像头自动标定方法及***、监控方法及***
CN107560592A (zh) 一种用于光电跟踪仪联动目标的精确测距方法
JP5027741B2 (ja) 画像監視装置
JP5027758B2 (ja) 画像監視装置
JP2018156408A (ja) 画像認識撮像装置
JP5667846B2 (ja) 対象物画像判定装置
CN108230351A (zh) 基于双目立体视觉行人检测的柜台评价方法与***
CN110910449B (zh) 识别物体三维位置的方法和***
CN115880643A (zh) 一种基于目标检测算法的社交距离监测方法和装置
JP6504711B2 (ja) 画像処理装置
Hung et al. Detecting fall incidents of the elderly based on human-ground contact areas
Tong et al. Human positioning based on probabilistic occupancy map
CN114694204A (zh) 社交距离检测方法、装置、电子设备及存储介质
JP2017068375A (ja) 複数のカメラ間での人物の追跡装置、追跡方法及びプログラム
US20240070897A1 (en) Geoposition determination and evaluation of same using a 2-d object detection sensor calibrated with a spatial model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905114

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905114

Country of ref document: EP

Kind code of ref document: A1