CN109558831A - A cross-camera pedestrian localization method fusing a spatio-temporal model - Google Patents

A cross-camera pedestrian localization method fusing a spatio-temporal model

Info

Publication number
CN109558831A
Authority
CN
China
Prior art keywords
pedestrian
camera
track
association
specified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811426901.1A
Other languages
Chinese (zh)
Other versions
CN109558831B (en)
Inventor
温序铭
罗志伟
管健
王炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sobey Digital Technology Co Ltd
Original Assignee
Chengdu Sobey Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sobey Digital Technology Co Ltd
Priority to CN201811426901.1A
Publication of CN109558831A
Application granted
Publication of CN109558831B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cross-camera pedestrian localization method fusing a spatio-temporal model, relating to the technical field of pedestrian localization. The method comprises the following steps: S1, build the spatio-temporal model; S2, obtain the pedestrian track; S3, select association cameras; S4, plan the specified pedestrian's paths; S5, compute the walking time, i.e. the time the specified pedestrian needs to walk each path planned in S4; S6, pedestrian re-identification: detect pedestrians within the time window of each association camera and feed the specified pedestrian from the starting camera together with the pedestrians in the association camera into a pedestrian re-identification model; if a pedestrian whose similarity exceeds the threshold exists, the pedestrian is localized successfully; otherwise, the next association camera is processed until the pedestrian is successfully localized. The invention comprehensively models the cameras and their indoor and outdoor deployment scenes; the resulting model is complete and reliable and is used to plan reasonable paths along the pedestrian's walking track, improving the accuracy of pedestrian path prediction.

Description

A cross-camera pedestrian localization method fusing a spatio-temporal model
Technical field
The present invention relates to the technical field of pedestrian localization, and more particularly to a cross-camera pedestrian localization method fusing a spatio-temporal model.
Background technique
In the security and surveillance field, multiple cameras are usually deployed in one area so that their fields of view complement each other and the whole area is monitored. When a pedestrian walks through the area, he or she passes through the fields of view of different cameras. Predicting along which path, and at what time, the pedestrian will reach the next camera is a significant problem in pedestrian re-identification systems.
Previous cross-camera pedestrian localization methods relied solely on the similarity of pedestrian appearance features in video. The set of cameras and the time window that must be searched are both very large, so manual screening is still required, which is insufficient in terms of processing efficiency, real-time performance and convenience.
The Chinese invention patent application No. 201710161404.2, filed on 2017.03.17, provides a pedestrian tracking method and device and a cross-camera pedestrian tracking method and device. That application realizes cross-camera pedestrian localization, but has the following deficiencies:
1. Limited application scenarios
That application targets rail traffic, where pedestrian movement is divided into only the four directions front, back, left and right. The accuracy of such a coarse division cannot be guaranteed, and it is only applicable to the regular camera layout of rail traffic; in real-life scenes, camera deployment is irregular.
2. Inaccurate modeling
No complete and effective spatial model is established, so the pedestrian's speed, moving direction and path are computed inaccurately. The predicted place and time of the pedestrian's reappearance then deviate greatly from the truth, and re-identification must search the pedestrian in many cameras and wide time windows; the computational cost is very large, so real-time performance cannot be guaranteed.
3. Coarse camera-detection timing
On the one hand, that application does not associate the video picture position with GPS coordinates; on the other hand, it does not effectively divide the area outside the camera's field of view into feasible and infeasible regions. The pedestrian's feasible paths to the association cameras therefore cannot be planned, the time window for video pedestrian detection becomes too large, and the real-time performance of pedestrian re-identification suffers greatly.
Summary of the invention
The object of the present invention is to solve the inaccurate modeling of existing cross-camera pedestrian localization methods, which leads to poor real-time performance of pedestrian re-identification. To this end, the present invention provides a cross-camera pedestrian localization method fusing a spatio-temporal model.
To achieve the above object, the present invention adopts the following technical scheme:
A cross-camera pedestrian localization method fusing a spatio-temporal model, comprising the following steps:
S1, build the spatio-temporal model: perform comprehensive indoor and outdoor scene modeling of the camera deployment region in the localization space, and model the cameras in the scene;
S2, obtain the pedestrian track: after a pedestrian is specified in the starting camera picture, obtain the specified pedestrian's walking track in that picture;
S3, select association cameras: after walking out of the starting camera picture, the specified pedestrian appears in the picture of a next camera; such a next camera is an association camera, and association cameras are selected with different strategies for the specified pedestrian's different travel routes;
S4, plan the specified pedestrian's paths: plan a path from the starting camera to each selected association camera;
S5, compute the walking time: compute the time the specified pedestrian needs to walk each path planned in S4;
S6, pedestrian re-identification: detect pedestrians within the time window of each association camera, and feed the specified pedestrian from the starting camera together with the pedestrian sequence from the association camera into a pedestrian re-identification model; if the association camera contains a pedestrian whose similarity exceeds the threshold, the pedestrian is localized successfully; otherwise, process the next association camera until the pedestrian is successfully localized.
Further, the indoor and outdoor scene modeling in S1 comprises outdoor scene modeling and indoor scene modeling, wherein:
Outdoor scene modeling: using a general map service, extract the outdoor map model of the localization space, including the GPS coordinates, block types and parameters of each point on the map;
Indoor scene modeling: using a BIM modeling tool, build a BIM model of the building interior; the modeled objects include beams, columns, slabs, walls, stairs, elevators and doors.
Further, the camera modeling in S1 comprises camera attribute description and visible-area mapping, wherein:
Camera attribute description: describe the geographical GPS coordinates, deployment scene and height of each camera;
Visible-area mapping: survey the camera's visible area, including the feasible-region boundary and the feasible-region center.
Further, obtaining the specified pedestrian's walking track in the starting camera picture in S2 comprises the following steps:
S2.1: after the pedestrian is specified in the starting camera picture, record the tracking start time t_s;
S2.2: set the frame interval Δk, whose corresponding time interval is Δt;
S2.3: track the specified pedestrian with a tracking algorithm, generating one pedestrian track point in the starting camera image every Δk frames until the specified pedestrian walks out of the picture; record the tracking end time t_e;
S2.4: connect the pedestrian track points generated in S2.3 in chronological order to obtain the specified pedestrian's walking track.
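The sampling in S2.1-S2.3 can be sketched as follows. This is an illustrative sketch only, not the patent's code: the `TrackPoint` type, the `sample_track` helper and the per-frame detection dictionary are assumptions, and any single-camera tracker could supply the positions.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    frame: int   # frame index in the starting camera
    t: float     # timestamp in seconds
    x: float     # image x of the tracked pedestrian
    y: float     # image y of the tracked pedestrian

def sample_track(detections, t_s, dt, dk):
    """Keep one track point every dk frames, timestamped from t_s.

    `detections` maps frame index -> (x, y) image position of the
    specified pedestrian, as produced by a single-camera tracker.
    """
    points = []
    for frame in sorted(detections):
        if frame % dk == 0:  # one point per dk frames (interval dt)
            x, y = detections[frame]
            points.append(TrackPoint(frame, t_s + (frame // dk) * dt, x, y))
    return points
```

Connecting the returned points in order (S2.4) then yields the walking track polyline.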
Further, the pedestrian track points of S2.3 may contain outliers, so the walking track needs to be corrected. The correction proceeds as follows:
Step 1, correct the track outside the feasible region: for each pedestrian track point outside the feasible-region boundary, connect that point to the feasible-region center with a straight line, and replace the point with the intersection of this line and the feasible-region boundary;
Step 2, remove track outliers: filter the track corrected in step 1 with a Kalman filter to remove the remaining outliers.
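A minimal sketch of the two correction steps, under stated assumptions: the surveyed feasible region is modelled here as a circle for illustration (the patent's boundary may be an arbitrary surveyed shape), and step 2 is reduced to a scalar constant-position Kalman filter applied per coordinate. `correct_point`, `kalman_smooth` and the noise parameters are inventions of this sketch.

```python
import math

def correct_point(p, center, radius):
    """Step 1: if p lies outside the (circular) feasible region,
    replace it with the intersection of the line p -> center with
    the region boundary; points inside are left unchanged."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        return p
    return (center[0] + radius * dx / d, center[1] + radius * dy / d)

def kalman_smooth(values, q=1e-3, r=1e-1):
    """Step 2: scalar Kalman filter (constant-position model) over
    one coordinate of the corrected track; q is process noise,
    r is measurement noise."""
    x, p = values[0], 1.0
    out = [x]
    for z in values[1:]:
        p += q              # predict: variance grows by process noise
        k = p / (p + r)     # Kalman gain
        x += k * (z - x)    # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out
```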
Further, in S3 the specified pedestrian has four kinds of travel routes in the localization space: entering an elevator, entering stairs, traveling outdoors, and traveling along an indoor corridor. Association cameras are selected with a different strategy for each route, specifically:
Entering an elevator: after a pedestrian track point enters the elevator, the direction of travel in space is straight up or straight down. Let the current floor be f; the association cameras are the elevator-entrance cameras at the corresponding positions on floors f ± n and on floor 1, where n is the difference between the pedestrian's destination floor and the current floor f;
Entering stairs: after a pedestrian track point enters the stairs, the direction of travel in space is straight up or straight down; the association cameras are the stair-entrance cameras at the corresponding positions on floors f ± 1;
Traveling outdoors or along an indoor corridor:
First judge the specified pedestrian's direction of travel: take the tangent of the pedestrian track and rotate it by 45° both clockwise and counterclockwise, obtaining a 90° sector of possible movement directions;
Then select the association cameras: with the specified pedestrian's last track point in the starting camera picture as the center, draw a locus circle of radius d. The sector of possible movement directions divides the locus circle into two regions: the sector is the inner region S1 and the rest is the outer region S2; cameras in S1 have higher priority than cameras in S2. Finally, all cameras inside the locus circle are ordered by priority and distance and selected in turn as association cameras.
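The locus-circle selection for the outdoor/indoor-corridor case can be sketched as below. This is a hypothetical helper, not the patent's code: `select_cameras`, its ground-plane coordinates and the heading convention are assumptions; the 90° sector is represented as ±45° around the track-tangent heading, and ties are broken by distance.

```python
import math

def select_cameras(last_pt, heading_deg, cameras, d=50.0):
    """Rank candidate association cameras around the last track point.

    `cameras` is a list of (camera_id, x, y) in the same ground frame
    as `last_pt`; `heading_deg` is the track-tangent direction.
    Cameras beyond the locus circle of radius d are discarded; those
    within +/-45 degrees of the heading form region S1 (priority 0),
    the rest form S2 (priority 1); ordering is (priority, distance).
    """
    ranked = []
    for cam_id, x, y in cameras:
        dx, dy = x - last_pt[0], y - last_pt[1]
        dist = math.hypot(dx, dy)
        if dist > d:
            continue  # outside the locus circle
        bearing = math.degrees(math.atan2(dy, dx))
        diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        ranked.append((0 if diff <= 45.0 else 1, dist, cam_id))
    ranked.sort()
    return [cam_id for _, _, cam_id in ranked]
```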
Further, the specified pedestrian's path planning in S4 is specifically:
Entering an elevator: the path to the association camera is straight up or straight down, and the path length L_elevator is the height difference between the starting camera and the association camera;
Entering stairs: the path to the association camera is straight up or straight down, and the path length L_stair is the height difference between the starting camera and the association camera;
Traveling outdoors: convert the specified pedestrian's last track point in the starting camera picture and the association camera's feasible-region center to GPS positions, plan a walking path between the two GPS positions with a navigation service, and compute the path length L_outdoor;
Traveling along an indoor corridor: starting from the specified pedestrian's last track point in the starting camera picture, plan a walking path to the association camera's feasible-region center according to the established BIM model, and compute the path length L_indoor.
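The patent does not detail how the indoor path is extracted from the BIM model. One plausible sketch, under assumptions not stated in the source, is to discretize the walkable space into a grid and take the breadth-first-search shortest path from the last track point to the association camera's feasible-region center; the grid representation and the `indoor_path_length` helper are inventions of this sketch.

```python
from collections import deque

def indoor_path_length(grid, start, goal, cell_m=1.0):
    """BFS shortest path on a walkability grid (True = passable)
    derived from the BIM model; returns the path length L_indoor
    in metres, or None if no walkable route exists."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            return dist[(r, c)] * cell_m
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return None  # goal unreachable through walkable cells
```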
Further, S5 is specifically:
S5.1, compute the track speed v_track: compute the distance between adjacent pedestrian track points and convert this image distance to a real-world distance; from the time interval and real-world distance between adjacent track points, compute the pedestrian speed of each segment. Fit the speeds of the multiple segments with a normal distribution to obtain the probability density function of the pedestrian's speed, and take the speed of maximum probability under this density as the track speed v_track;
S5.2, compute the pedestrian's movement speed v_load:
Elevator speed v_elevator: the pedestrian's speed in the elevator is fixed and equal to the elevator speed v_e; it does not vary with the track speed v_track;
Stair speed v_stair: the pedestrian's speed on stairs varies with the track speed v_track; with z the speed coefficient between level ground and stairs, v_stair = v_track × z;
Outdoor or indoor-corridor speed: use the track speed v_track as the path speed v_indoor/outdoor;
S5.3, compute the walking time t_load: t_load = L_load / v_load, where v_load and L_load are chosen as the elevator, stair, indoor or outdoor values according to the path type.
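S5.1-S5.3 can be sketched as follows. Note that the mode of a fitted normal distribution equals the sample mean, which is what `track_speed` returns; the default elevator speed and stair coefficient in `walking_time` are illustrative values, not taken from the patent.

```python
import math

def track_speed(points, times, px_per_m):
    """S5.1: per-segment speeds between adjacent track points,
    converted from image pixels to metres, then the maximum-
    probability speed under a fitted Gaussian (its mode = the mean)."""
    speeds = []
    for (x0, y0), (x1, y1), t0, t1 in zip(points, points[1:],
                                          times, times[1:]):
        dist_m = math.hypot(x1 - x0, y1 - y0) / px_per_m
        speeds.append(dist_m / (t1 - t0))
    return sum(speeds) / len(speeds)

def walking_time(length_m, v_track, path_type,
                 v_elevator=1.5, stair_coeff=0.6):
    """S5.2/S5.3: pick v_load for the path type, then
    t_load = L_load / v_load."""
    v_load = {"elevator": v_elevator,            # fixed cabin speed v_e
              "stair": v_track * stair_coeff,    # v_stair = v_track * z
              "indoor": v_track,
              "outdoor": v_track}[path_type]
    return length_m / v_load
```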
Further, S6 is specifically:
S6.1: compute the time t_c at which the specified pedestrian appears in the association camera: t_c = t_e + t_load;
S6.2: detect pedestrians in the association camera within the time window t_c ± δ, where δ is half the time-window width; then feed the specified pedestrian from the starting camera together with the pedestrians in the association camera into the pedestrian re-identification model. If the association camera contains a pedestrian whose similarity exceeds the threshold, the pedestrian is localized successfully; otherwise, process the next association camera until the pedestrian is successfully localized.
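A schematic of the S6 gating, with invented helper names; the similarity scores themselves would come from the re-identification model, which is outside this sketch.

```python
def appearance_window(t_e, t_load, delta):
    """S6.1: predicted appearance time t_c = t_e + t_load, and the
    detection window [t_c - delta, t_c + delta]."""
    t_c = t_e + t_load
    return (t_c - delta, t_c + delta)

def best_match(similarities, threshold):
    """S6.2 gate: `similarities` maps candidate pedestrian id ->
    re-id similarity against the specified pedestrian; return the
    best id above threshold, else None (process the next camera)."""
    best = max(similarities, key=similarities.get, default=None)
    if best is not None and similarities[best] > threshold:
        return best
    return None
```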
The beneficial effects of the present invention are as follows:
1. The invention comprehensively models the cameras and their indoor and outdoor deployment scenes. The resulting model is complete and reliable and provides path-planning capability for the pedestrian's subsequent paths between cameras. Path planning uses the model to make reasonable judgments along the pedestrian's walking track, improving the accuracy of pedestrian path prediction and laying the foundation for subsequent pedestrian localization.
2. The camera modeling classifies cameras reasonably into elevator-entrance, stair-entrance, indoor and outdoor types, each with its own localization strategy. Combining the pedestrian's walking track with the camera type establishes a correspondence between association cameras and walking tracks, so the accuracy of pedestrian localization rises.
3. The invention first obtains the feasible-region boundary and feasible-region center in each camera's field of view by surveying, and uses these to reasonably constrain and correct the pedestrian track obtained by pedestrian detection and tracking; track outliers are then removed with a Kalman filter. This not only improves the fault tolerance of the system but also lays the foundation for a reasonable pedestrian spatial model.
4. The invention computes the pedestrian speed of each track segment separately and then obtains the speed of maximum probability from the many segment speeds via a normal distribution, using this speed as the pedestrian's modeled walking speed. This weakens the interference of speed variation under normal circumstances and improves the accuracy of pedestrian modeling.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the method of the invention.
Fig. 2 is a schematic diagram of the feasible-region mapping of the invention.
Fig. 3 is a schematic diagram of the pedestrian track correction of the invention.
Fig. 4 is a schematic diagram of the locus circle of the invention.
Fig. 5 is a schematic diagram of the indoor-corridor path planning of the invention.
Detailed description of the embodiments
To help those skilled in the art better understand the present invention, it is described in further detail below with reference to the accompanying drawings and the following embodiments.
Embodiment 1
As shown in Fig. 1, this embodiment provides a cross-camera pedestrian localization method fusing a spatio-temporal model, comprising the following steps:
S1, build the spatio-temporal model: perform comprehensive indoor and outdoor scene modeling of the camera deployment region in the localization space and model the cameras in the scene, giving the system complete pedestrian-perception and path-planning capability;
S2, obtain the pedestrian track: after a pedestrian is specified in the starting camera picture, obtain the specified pedestrian's walking track in that picture;
S3, select association cameras: after walking out of the starting camera picture, the specified pedestrian appears in the picture of a next camera; such a next camera is an association camera, and association cameras are selected with different strategies for the specified pedestrian's different travel routes;
S4, plan the specified pedestrian's paths: plan a path from the starting camera to each selected association camera;
S5, compute the walking time: compute the time the specified pedestrian needs to walk each path planned in S4;
S6, pedestrian re-identification: detect pedestrians within the time window of each association camera, and feed the specified pedestrian from the starting camera together with the pedestrian sequence from the association camera into a pedestrian re-identification model; if the association camera contains a pedestrian whose similarity exceeds the threshold, the pedestrian is localized successfully; otherwise, process the next association camera until the pedestrian is successfully localized.
Embodiment 2
This embodiment further optimizes embodiment 1. Specifically, the indoor and outdoor scene modeling in S1 comprises outdoor scene modeling and indoor scene modeling, forming a complete model of the building interior and exterior that provides basic information for subsequent accurate pedestrian tracking; the model contains complete spatio-temporal information and thus provides path-planning capability, wherein:
Outdoor scene modeling: using a general map service, extract the outdoor map model of the localization space, including the GPS coordinates, block types and parameters of each point on the map;
Indoor scene modeling: using a BIM modeling tool, build a BIM model of the building interior; the modeled objects include beams, columns, slabs, walls, stairs, elevators and doors.
As shown in Fig. 2, the camera modeling in S1 comprises camera attribute description and visible-area mapping, wherein:
Camera attribute description: describe the geographical GPS coordinates, deployment scene and height of each camera;
Visible-area mapping: survey the camera's visible area, including the feasible-region boundary and the feasible-region center.
Obtaining the specified pedestrian's walking track in the starting camera picture in S2 comprises the following steps:
S2.1: after the pedestrian is specified in the starting camera picture, record the tracking start time t_s;
S2.2: set the frame interval Δk, whose corresponding time interval is Δt;
S2.3: track the specified pedestrian with an existing tracking algorithm, generating one pedestrian track point in the starting camera image every Δk frames until the specified pedestrian walks out of the picture; record the tracking end time t_e;
S2.4: connect the pedestrian track points generated in S2.3 in chronological order to obtain the specified pedestrian's walking track.
As shown in Fig. 3, the pedestrian track points of S2.3 may contain outliers, so the walking track needs to be corrected as follows:
Step 1, correct the track outside the feasible region: for each pedestrian track point outside the feasible-region boundary, connect that point to the feasible-region center with a straight line, and replace the point with the intersection of this line and the feasible-region boundary;
Step 2, remove track outliers: filter the track corrected in step 1 with a Kalman filter to remove the remaining outliers.
In S3 the specified pedestrian has four kinds of travel routes in the localization space: entering an elevator, entering stairs, traveling outdoors, and traveling along an indoor corridor. Association cameras are selected with a different strategy for each route, specifically:
Entering an elevator: after a pedestrian track point enters the elevator, the direction of travel in space is straight up or straight down. Let the current floor be f; the association cameras are the elevator-entrance cameras at the corresponding positions on floors f ± n and on floor 1, where n is the difference between the pedestrian's destination floor and the current floor f. In this embodiment n is 5, i.e. the cameras within 5 floors are considered first as association cameras;
Entering stairs: after a pedestrian track point enters the stairs, the direction of travel in space is straight up or straight down; the association cameras are the stair-entrance cameras at the corresponding positions on floors f ± 1;
Traveling outdoors or along an indoor corridor:
First judge the specified pedestrian's direction of travel: take the tangent of the pedestrian track and rotate it by 45° both clockwise and counterclockwise, obtaining a 90° sector of possible movement directions;
Then, as shown in Fig. 4, select the association cameras: with the specified pedestrian's last track point in the starting camera picture as the center, draw a locus circle of radius d (d is 50 m in this embodiment). The sector of possible movement directions divides the locus circle into two regions: the sector is the inner region S1 and the rest is the outer region S2; cameras in S1 have higher priority than cameras in S2. Finally, all cameras inside the locus circle are ordered by priority and distance and selected in turn as association cameras.
The specified pedestrian's path planning in S4 is specifically:
Entering an elevator: the path to the association camera is straight up or straight down, and the path length L_elevator is the height difference between the starting camera and the association camera;
Entering stairs: the path to the association camera is straight up or straight down, and the path length L_stair is the height difference between the starting camera and the association camera;
Traveling outdoors: convert the specified pedestrian's last track point in the starting camera picture and the association camera's feasible-region center to GPS positions, plan a walking path between the two GPS positions with a navigation service, and compute the path length L_outdoor;
Traveling along an indoor corridor, as illustrated in Fig. 5: starting from the specified pedestrian's last track point in the starting camera picture, plan a walking path to the association camera's feasible-region center according to the established BIM model, and compute the path length L_indoor.
S5 is specifically:
S5.1, compute the track speed v_track: compute the distance between adjacent pedestrian track points and convert this image distance to a real-world distance; from the time interval and real-world distance between adjacent track points, compute the pedestrian speed of each segment. Fit the speeds of the multiple segments with a normal distribution to obtain the probability density function of the pedestrian's speed, and take the speed of maximum probability under this density as the track speed v_track;
S5.2, compute the pedestrian's movement speed v_load:
Elevator speed v_elevator: the pedestrian's speed in the elevator is fixed and equal to the elevator speed v_e; it does not vary with the track speed v_track;
Stair speed v_stair: the pedestrian's speed on stairs varies with the track speed v_track; with z the speed coefficient between level ground and stairs, v_stair = v_track × z;
Outdoor or indoor-corridor speed: use the track speed v_track as the path speed v_indoor/outdoor;
S5.3, compute the walking time t_load: t_load = L_load / v_load, where v_load and L_load are chosen as the elevator, stair, indoor or outdoor values according to the path type.
S6 is specifically:
S6.1: compute the time t_c at which the specified pedestrian appears in the association camera: t_c = t_e + t_load;
S6.2: detect pedestrians in the association camera within the time window t_c ± δ, where δ is half the time-window width (δ is set to 5 s in this embodiment); then feed the specified pedestrian from the starting camera together with the pedestrian sequence in the association camera into the pedestrian re-identification model. If the association camera contains a pedestrian whose similarity exceeds the threshold, the pedestrian is localized successfully; otherwise, process the next association camera until the pedestrian is successfully localized. The re-identification model used in this embodiment is the AlignedReID model, with the cross-entropy loss used in the AlignedReID network replaced by a focal loss. The focal loss reduces the loss contribution of classes that are easy to predict and increases the contribution of classes that are hard to predict, guiding the network to focus on learning the hard classes, so that the accuracy of pedestrian localization rises.
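The focal-loss substitution can be illustrated numerically. This sketch uses the standard focal-loss formula FL(p_t) = -(1 - p_t)^γ · log(p_t) on the probability p_t assigned to the true class; the γ value and helper names are assumptions, and a real implementation would apply this per sample inside the network's loss layer.

```python
import math

def cross_entropy(p_t):
    """Plain cross entropy on the true-class probability p_t."""
    return -math.log(p_t)

def focal_loss(p_t, gamma=2.0):
    """Focal loss: cross entropy down-weighted by (1 - p_t)**gamma,
    so easy examples (p_t near 1) contribute little and hard
    examples (p_t small) keep almost their full loss."""
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

For example, with γ = 2 an easy example with p_t = 0.9 keeps only (1 − 0.9)² = 1% of its cross-entropy loss, while a hard example with p_t = 0.5 keeps 25%.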
The above are only preferred embodiments of the present invention and are not intended to limit it. The scope of patent protection of the invention is defined by the claims; all equivalent structural changes made using the contents of the specification and drawings likewise fall within the scope of the invention.

Claims (9)

1. A cross-camera pedestrian localization method fusing a spatio-temporal model, characterized by comprising the following steps:
S1, build the spatio-temporal model: perform comprehensive indoor and outdoor scene modeling of the camera deployment region in the localization space and model the cameras in the scene;
S2, obtain the pedestrian track: after a pedestrian is specified in the starting camera picture, obtain the specified pedestrian's walking track in that picture;
S3, select association cameras: after walking out of the starting camera picture, the specified pedestrian appears in the picture of a next camera; such a next camera is an association camera, and association cameras are selected with different strategies for the specified pedestrian's different travel routes;
S4, plan the specified pedestrian's paths: plan a path from the starting camera to each selected association camera;
S5, compute the walking time: compute the time the specified pedestrian needs to walk each path planned in S4;
S6, pedestrian re-identification: detect pedestrians within the time window of each association camera, and feed the specified pedestrian from the starting camera together with the pedestrian sequence from the association camera into a pedestrian re-identification model; if the association camera contains a pedestrian whose similarity exceeds the threshold, the pedestrian is localized successfully; otherwise, process the next association camera until the pedestrian is successfully localized.
2. The cross-camera pedestrian positioning method fused with a space-time model according to claim 1, characterized in that the indoor and outdoor scene modeling in S1 comprises outdoor scene modeling and indoor scene modeling, wherein
outdoor scene modeling: using a general map service, the outdoor map model of the located space is extracted, including the GPS coordinates, region block types, and parameters of the points in the map;
indoor scene modeling: using a BIM modeling tool, a BIM model of the building interior is built, the modeled objects including beams, columns, slabs, walls, stairs, elevators, and doors.
3. The cross-camera pedestrian positioning method fused with a space-time model according to claim 2, characterized in that the in-scene camera modeling in S1 comprises camera attribute description and visible area mapping, wherein
camera attribute description: describing the geographic GPS coordinates, deployment scenario, and height of each deployed camera;
visible area mapping: surveying and mapping each camera's visible area, including mapping the feasible region boundary and the feasible region center.
4. The cross-camera pedestrian positioning method fused with a space-time model according to claim 3, characterized in that obtaining the walking track of the specified pedestrian in the starting camera picture in S2 comprises the following steps:
S2.1: after the pedestrian is specified in the starting camera picture, record the tracking start time t_s;
S2.2: set the frame interval Δk, whose corresponding time interval is Δt;
S2.3: track the specified pedestrian with a tracking algorithm, generating one pedestrian track point in the starting camera image every Δk frames, until the specified pedestrian walks out of the starting camera picture; record the tracking end time t_e;
S2.4: connect the pedestrian track points generated in S2.3 in chronological order to obtain the walking track of the specified pedestrian.
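Steps S2.2–S2.4 amount to subsampling a per-frame position stream every Δk frames. A minimal sketch (not part of the claims; the per-frame positions are synthetic stand-ins for a tracker's output):

```python
# Keep one track point every dk frames (S2.2-S2.3); connecting the kept
# points in order (S2.4) yields the walking track.
def sample_track(per_frame_positions, dk):
    return per_frame_positions[::dk]

frames = [(i, 2 * i) for i in range(10)]  # synthetic (x, y) for frames 0..9
print(sample_track(frames, dk=3))  # [(0, 0), (3, 6), (6, 12), (9, 18)]
```

At a frame rate of fps, the corresponding time interval is Δt = Δk / fps.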
5. The cross-camera pedestrian positioning method fused with a space-time model according to claim 4, characterized in that after the pedestrian track points are generated in S2.3, the walking track is corrected as follows:
Step 1, track correction outside the feasible region: for each pedestrian track point outside the feasible region boundary, connect that track point and the feasible region center with a straight line, and replace the track point with the intersection of that line and the feasible region boundary;
Step 2, track impurity point removal: filter the track corrected in Step 1 with a Kalman filter to remove impurity points from the track.
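Step 1 of the correction is a projection onto the feasible region boundary along the line to the region center. A minimal sketch (not part of the claims), assuming for illustration a circular feasible region; the mapped boundary of claim 3 could be an arbitrary polygon:

```python
import math

# Project an out-of-region track point onto the boundary of a circular
# feasible region, along the line joining the point to the region centre.
def correct_point(pt, centre, radius):
    dx, dy = pt[0] - centre[0], pt[1] - centre[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        return pt                  # already inside the feasible region
    s = radius / d                 # scale factor back onto the boundary
    return (centre[0] + dx * s, centre[1] + dy * s)

print(correct_point((10.0, 0.0), (0.0, 0.0), 5.0))  # (5.0, 0.0)
```

Step 2's Kalman filtering would then smooth the corrected sequence; OpenCV's `cv2.KalmanFilter` is one commonly used implementation.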
6. The cross-camera pedestrian positioning method fused with a space-time model according to claim 4, characterized in that in S3 the specified pedestrian has four travel routes in the located space: entering an elevator, entering stairs, outdoor travel, and indoor corridor travel; association cameras are selected by different strategies for the different travel routes, specifically:
entering an elevator: after a pedestrian track point enters the elevator, the travel direction in space is straight up or straight down; denoting the current floor number as f, the association cameras are the elevator-entrance cameras at the corresponding positions on floors f ± n and on floor 1, where n denotes the difference between the floor the pedestrian reaches and the current floor f;
entering stairs: after a pedestrian track point enters the stairs, the travel direction in space is straight up or straight down, and the association cameras are the stairway-entrance cameras at the corresponding positions on floors f ± 1;
outdoor travel or indoor corridor travel:
first judge the specified pedestrian's travel direction: obtain the tangent of the pedestrian track, and rotate it 45° clockwise and 45° counterclockwise to obtain a 90° sector as the pedestrian's possible movement direction;
then select the association cameras: with the specified pedestrian's last track point in the starting camera picture as the center, draw a locus circle with radius d; the pedestrian's possible movement direction divides the locus circle into two regions, the region containing the possible movement direction being the inner region S1 and the other being the outer region S2; cameras in S1 have higher priority than cameras in S2; finally, all cameras contained in the locus circle are selected in turn as association cameras in order of priority and distance.
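The locus-circle selection for the outdoor/indoor-corridor case can be sketched as follows (not part of the claims; camera layout, heading, and radius are invented). Cameras inside the ±45° movement sector (region S1) are ranked before the rest of the circle (S2), then by distance:

```python
import math

# cameras: list of (name, (x, y)); last_pt is the last track point,
# heading_deg the track-tangent direction, d the locus-circle radius.
def select_assoc(last_pt, heading_deg, cameras, d):
    ranked = []
    for name, (x, y) in cameras:
        dx, dy = x - last_pt[0], y - last_pt[1]
        dist = math.hypot(dx, dy)
        if dist > d:
            continue                         # outside the locus circle
        ang = math.degrees(math.atan2(dy, dx))
        diff = abs((ang - heading_deg + 180) % 360 - 180)  # signed heading gap
        region = 0 if diff <= 45 else 1      # 0 = inner S1, 1 = outer S2
        ranked.append((region, dist, name))
    return [name for _, _, name in sorted(ranked)]

cams = [("front", (3.0, 0.0)), ("side", (0.0, 3.0)), ("far", (20.0, 0.0))]
print(select_assoc((0.0, 0.0), 0.0, cams, 10.0))  # ['front', 'side']
```

"front" lies within the 90° sector around the heading, so it is tried before "side"; "far" is outside the circle and is never a candidate.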
7. The cross-camera pedestrian positioning method fused with a space-time model according to claim 6, characterized in that the path planning for the specified pedestrian in S4 is specifically:
entering an elevator: the path to the association camera is straight up or straight down, and the path length L_elevator is the height difference between the starting camera and the association camera;
entering stairs: the path to the association camera is straight up or straight down, and the path length L_stair is the height difference between the starting camera and the association camera;
outdoor travel: convert the specified pedestrian's last track point in the starting camera picture and the association camera's feasible region center to GPS locations, plan the walking path between the two GPS locations with a navigation service, and compute the path length L_outdoor;
indoor corridor travel: starting from the specified pedestrian's last track point in the starting camera picture, plan the walking path to the association camera's feasible region center according to the established BIM model, and compute the path length L_indoor.
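For the outdoor case the claim delegates routing to a navigation service. As an illustrative stand-in (an assumption, since a routed path is at least as long), the straight-line great-circle distance between the two GPS points can be computed with the haversine formula:

```python
import math

# Great-circle distance in metres between two GPS points; a lower bound on
# the routed walking-path length a navigation service would return.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

print(round(haversine_m(30.67, 104.06, 30.68, 104.06)))  # 1112 (m per 0.01° of latitude)
```

The coordinates above are arbitrary illustrative values near Chengdu, not taken from the patent.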
8. The cross-camera pedestrian positioning method fused with a space-time model according to claim 7, characterized in that S5 is specifically:
S5.1, calculating the pedestrian track speed v_track: compute the distances between adjacent pedestrian track points in the track and convert these image distances to actual distances; from the time interval and actual distance between adjacent track points, compute the pedestrian speed between adjacent track points; fit the speeds between the multiple pairs of adjacent track points to a normal distribution, obtain the normal probability density function of the pedestrian speed, and take the speed of maximum probability under that density as the pedestrian track speed v_track;
S5.2, calculating the pedestrian movement speed v_load:
elevator movement speed v_elevator: the pedestrian speed in an elevator is fixed and equal to the elevator speed v_e, and does not vary with the pedestrian track speed v_track;
stair movement speed v_stair: the pedestrian speed on stairs varies with the pedestrian track speed v_track; setting the speed coefficient between level ground and stairs to z, v_stair = v_track × z;
outdoor travel or indoor corridor travel speed: the pedestrian track speed v_track is used as the path speed v_indoor/outdoor;
S5.3, calculating the walking time t_load: t_load = L_load / v_load, where v_load and L_load are chosen as elevator, stair, indoor, or outdoor according to the path type.
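Since a normal density peaks at its mean, the maximum-probability speed of S5.1 reduces to the mean of the per-segment speeds. A minimal sketch (not part of the claims; the track points, frame spacing, and pixel-to-metre scale are invented):

```python
# S5.1 sketch: per-segment speeds from adjacent track points, converted from
# image to actual distance; the fitted normal's density peaks at the mean,
# which is taken as v_track.
def track_speed(points, dt, metres_per_px):
    speeds = [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * metres_per_px / dt
        for (x0, y0), (x1, y1) in zip(points, points[1:])
    ]
    return sum(speeds) / len(speeds)  # argmax of the fitted normal pdf

pts = [(0, 0), (10, 0), (21, 0), (30, 0)]  # synthetic pixel positions
print(track_speed(pts, dt=1.0, metres_per_px=0.1))  # ~1.0 m/s
```

S5.3 then gives, e.g., t_load = L_outdoor / v_track for an outdoor path.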
9. The cross-camera pedestrian positioning method fused with a space-time model according to claim 8, characterized in that S6 is specifically:
S6.1: calculate the time t_c at which the specified pedestrian appears in the association camera: t_c = t_e + t_load;
S6.2: detect pedestrians in the association camera within the time window t_c ± δ, where δ is the time window width; then input the specified pedestrian from the starting camera and the pedestrian sequence from the association camera into the pedestrian re-identification model; if a pedestrian in the association camera has a similarity greater than the threshold, the pedestrian is located successfully; otherwise, process the next association camera until the pedestrian is located.
CN201811426901.1A 2018-11-27 2018-11-27 Cross-camera pedestrian positioning method fused with space-time model Active CN109558831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811426901.1A CN109558831B (en) 2018-11-27 2018-11-27 Cross-camera pedestrian positioning method fused with space-time model

Publications (2)

Publication Number Publication Date
CN109558831A true CN109558831A (en) 2019-04-02
CN109558831B CN109558831B (en) 2023-04-07

Family

ID=65867586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811426901.1A Active CN109558831B (en) 2018-11-27 2018-11-27 Cross-camera pedestrian positioning method fused with space-time model

Country Status (1)

Country Link
CN (1) CN109558831B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819847A (en) * 2012-07-18 2012-12-12 上海交通大学 Method for extracting movement track based on PTZ mobile camera
CN104239905A (en) * 2013-06-17 2014-12-24 上海盖普电梯有限公司 Moving target recognition method and intelligent elevator billing system having moving target recognition function
CN105389829A (en) * 2015-10-15 2016-03-09 上海交通大学 Low-complexity dynamic object detecting and tracking method based on embedded processor
CN107240124A (en) * 2017-05-19 2017-10-10 清华大学 Across camera lens multi-object tracking method and device based on space-time restriction
CN108764167A * 2018-05-30 2018-11-06 上海交通大学 Spatio-temporally correlated target re-identification method and system

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781797A (en) * 2019-10-22 2020-02-11 杭州宇泛智能科技有限公司 Labeling method and device and electronic equipment
CN110781797B (en) * 2019-10-22 2021-04-06 杭州宇泛智能科技有限公司 Labeling method and device and electronic equipment
CN111368623A (en) * 2019-10-23 2020-07-03 杭州宇泛智能科技有限公司 Target searching method and target searching system
CN110796074A (en) * 2019-10-28 2020-02-14 桂林电子科技大学 Pedestrian re-identification method based on space-time data fusion
CN110796074B (en) * 2019-10-28 2022-08-12 桂林电子科技大学 Pedestrian re-identification method based on space-time data fusion
CN111046752A (en) * 2019-11-26 2020-04-21 上海兴容信息技术有限公司 Indoor positioning method and device, computer equipment and storage medium
CN111027462A (en) * 2019-12-06 2020-04-17 长沙海格北斗信息技术有限公司 Pedestrian track identification method across multiple cameras
CN111738043A (en) * 2019-12-10 2020-10-02 珠海大横琴科技发展有限公司 Pedestrian re-identification method and device
CN113744302A (en) * 2020-05-27 2021-12-03 北京机械设备研究所 Dynamic target behavior prediction method and system
CN113744302B (en) * 2020-05-27 2024-02-02 北京机械设备研究所 Dynamic target behavior prediction method and system
CN112101170A (en) * 2020-09-08 2020-12-18 平安科技(深圳)有限公司 Target positioning method and device, computer equipment and storage medium
CN112541457B (en) * 2020-12-21 2021-10-26 重庆紫光华山智安科技有限公司 Searching method and related device for monitoring node
CN112541457A (en) * 2020-12-21 2021-03-23 重庆紫光华山智安科技有限公司 Searching method and related device for monitoring node
CN113313188B (en) * 2021-06-10 2022-04-12 四川大学 Cross-modal fusion target tracking method
CN113313188A (en) * 2021-06-10 2021-08-27 四川大学 Cross-modal fusion target tracking method

Also Published As

Publication number Publication date
CN109558831B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN109558831A (en) It is a kind of fusion space-time model across camera shooting head's localization method
CN107514993B (en) The collecting method and system towards single building modeling based on unmanned plane
CN106441292B (en) A kind of building indoor plane figure method for building up based on crowdsourcing IMU inertial guidance data
CN103901892B (en) The control method of unmanned plane and system
Wang et al. Intelligent vehicle self-localization based on double-layer features and multilayer LIDAR
CN104714555B (en) Three-dimensional independent exploration method based on edge
CN109118585B (en) Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof
US11989828B2 (en) Methods for generating and updating building models
CN106296816A UAV path determination method and device for three-dimensional model reconstruction
KR102195179B1 (en) Orthophoto building methods using aerial photographs
CN104776833B (en) Landslide surface image capturing method and device
CN103411609A Aircraft return route planning method based on online composition
KR102008520B1 Multi-user information sharing system using a cloud-based spatial information platform for construction sites
CN104581001A Associated monitoring method for moving objects across multiple wide-range cameras
CN106292656A Environmental modeling method and device
CN106017476A Method for generating an indoor positioning and navigation map model
Cefalu et al. Image based 3D Reconstruction in Cultural Heritage Preservation.
KR20140063266A Automated road mapping method using observed field data
CN113124824A (en) Unmanned aerial vehicle photogrammetry acquisition planning method and system based on significance calculation
Abbate et al. Prospective upon multi-source urban scale data for 3d documentation and monitoring of urban legacies
Brumana et al. Panoramic UAV views for landscape heritage analysis integrated with historical maps atlases
Prentow et al. Estimating common pedestrian routes through indoor path networks using position traces
CN116880522A (en) Method and device for adjusting flight direction of flight device in inspection in real time
CN107958118B (en) Wireless signal acquisition method based on spatial relationship
CN113848878B (en) Indoor and outdoor three-dimensional pedestrian road network construction method based on crowd source data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant