CN112365522A - Method for tracking personnel in park across borders - Google Patents


Publication number
CN112365522A
CN112365522A (application CN202011119432.6A; granted as CN112365522B)
Authority
CN
China
Prior art keywords: target tracking, camera, person, target, tracking person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011119432.6A
Other languages
Chinese (zh)
Other versions
CN112365522B (en)
Inventor
兰雨晴
周建飞
王丹星
余丹
Current Assignee
Zhongbiao Huian Information Technology Co Ltd
Original Assignee
Zhongbiao Huian Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongbiao Huian Information Technology Co Ltd filed Critical Zhongbiao Huian Information Technology Co Ltd
Priority to CN202011119432.6A priority Critical patent/CN112365522B/en
Publication of CN112365522A publication Critical patent/CN112365522A/en
Application granted granted Critical
Publication of CN112365522B publication Critical patent/CN112365522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 - Image analysis
            • G06T 7/20 - Analysis of motion
              • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
            • G06T 7/70 - Determining position or orientation of objects or cameras
              • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
          • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 - Image acquisition modality
              • G06T 2207/10016 - Video; image sequence
            • G06T 2207/30 - Subject of image; context of image processing
              • G06T 2207/30196 - Human being; person
              • G06T 2207/30232 - Surveillance
              • G06T 2207/30241 - Trajectory
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 20/00 - Scenes; scene-specific elements
            • G06V 20/50 - Context or environment of the image
              • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a method for cross-border tracking of people in a park, comprising the following steps: detecting a target tracking person in the corresponding video in the park by using a plurality of mutually associated cameras, and extracting feature information corresponding to the target tracking person; based on the extracted feature information, performing single-lens tracking on the target tracking person in a single-camera tracking mode until the target tracking person leaves the monitoring area corresponding to the current first camera; and, when a corresponding pedestrian is detected based on a second camera, judging whether the pedestrian is the target tracking person and, if so, continuing the single-lens tracking until the target tracking person leaves the monitoring area corresponding to the park. The method solves the problem that the individual monitoring cameras in a park cannot work as a system, achieves cross-border tracking of people in the park, and improves the intelligence and convenience of cross-border tracking.

Description

Method for tracking personnel in park across borders
Technical Field
The invention relates to the technical field of video monitoring, and in particular to a method for cross-border tracking of people in a park.
Background
With the development of artificial intelligence and advances in information technology, video monitoring has entered a new stage and its application scenarios have become increasingly broad. As video monitoring spreads to more places, the video images collected by monitoring cameras play an ever more important role in fields such as intelligent monitoring and intelligent security. By networking the multiple cameras in a monitored area, the movement tracks of people in a park can be tracked, providing an important basis and help for daily safety protection, data collection and the like.
At present, a monitored park is usually deployed with multiple cameras, but each camera works independently and the cameras cannot work together as a system. Moreover, a traditional large monitoring screen cannot display all of the monitoring video feeds at once. Consequently, cross-border tracking of people in the park cannot be achieved by relying on independently working cameras and a monitoring screen alone.
Disclosure of Invention
The invention provides a method for cross-border tracking of people in a park, which aims to solve the problem that the individual monitoring cameras cannot work as a system and to achieve cross-border tracking of people in the park.
The invention provides a method for cross-border tracking of people in a park, which comprises the following steps:
detecting target tracking personnel in a corresponding video in a park by using a plurality of cameras which are mutually associated, and extracting characteristic information corresponding to the target tracking personnel;
based on the extracted feature information, performing single-lens tracking on the target tracking person in a single-camera tracking mode until the target tracking person leaves a monitoring area corresponding to the current first camera;
when a corresponding pedestrian is detected based on a second camera, judging whether the pedestrian is the target tracking person, and if so, continuing the single-lens tracking of the target tracking person until the target tracking person leaves the monitoring area corresponding to the park.
Further, the detecting of the target tracking person in the corresponding video in the park by using a plurality of mutually associated cameras includes:
acquiring position information of cameras deployed in a park, and associating a plurality of related cameras based on the position information;
and controlling the plurality of associated cameras to cooperatively work, and detecting target tracking personnel in the corresponding video in the park.
Further, after the single-lens tracking of the target tracking person has continued until the target tracking person leaves the monitoring area corresponding to the park, the method further includes:
acquiring image information acquired by a plurality of cameras working cooperatively aiming at the target tracking personnel according to the characteristic information of the target tracking personnel;
and performing cross-border tracking on the target tracking personnel based on the image information corresponding to the target tracking personnel to obtain the activity track corresponding to the target tracking personnel.
Further, the performing cross-border tracking on the target tracking person based on the image information corresponding to the target tracking person to obtain the activity track corresponding to the target tracking person includes:
acquiring, from the image information corresponding to the target tracking person, displacement information corresponding to each frame of image information of the target tracking person;
and drawing the corresponding movement track of the target tracking person according to the obtained displacement information.
Further, the displacement information includes: a position coordinate point in the first camera monitoring area;
the position coordinate point acquisition mode comprises the following steps: establishing a two-dimensional rectangular coordinate system by taking the edge coordinate point of the lower left corner of the first camera monitoring area as the origin of coordinates, the lower edge of the first camera monitoring area as an X axis and the left edge of the first camera monitoring area as a Y axis; and determining a position coordinate point in the first camera monitoring area corresponding to the displacement information based on the established two-dimensional rectangular coordinate system.
Further, the drawing an activity track corresponding to the target tracking person according to the obtained displacement information includes:
calculating the average action speed and the current action direction of the target tracking person according to the obtained displacement information, and prejudging the position and the leaving direction of the target tracking person leaving the first camera monitoring area;
and selecting a corresponding second camera according to the pre-judgment result, and calculating the starting time of the second camera for tracking the target tracking personnel, so that seamless connection can be realized when the camera is switched to track the target tracking personnel.
Further, the calculating of the average action speed and the current action direction of the target tracking person according to the acquired displacement information, and the prejudging of the position and direction in which the target tracking person leaves the first camera monitoring area, comprise steps A1-A4:
Step A1, calculating the average action speed V of the target tracking person from the acquired displacement information using formula (1):

$$V=\frac{1}{(n-1)T}\sum_{i=2}^{n}\sqrt{(X_i-X_{i-1})^2+(Y_i-Y_{i-1})^2}\qquad(1)$$

In formula (1), (X_i, Y_i) is the position coordinate point of the i-th frame of image information of the target tracking person acquired by the first camera; (X_{i-1}, Y_{i-1}) is the position coordinate point of the (i-1)-th frame of image information of the target tracking person acquired by the first camera; n is the total number of image frames shot by the first camera from the moment the target tracking person entered its monitoring area until now; and T is the time taken by the first camera to shoot each frame of image.
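As an illustration (not part of the patent; the function and variable names are invented), step A1 can be sketched in Python: the average speed is the length of the polyline through the per-frame positions, divided by the elapsed time (n-1)·T.

```python
import math

def average_speed(points, frame_time):
    """Average action speed V per formula (1): the length of the
    polyline through the per-frame position points (X_i, Y_i),
    divided by the elapsed time (n - 1) * T."""
    n = len(points)
    if n < 2:
        return 0.0  # not enough frames to measure movement
    path_length = sum(
        math.hypot(points[i][0] - points[i - 1][0],
                   points[i][1] - points[i - 1][1])
        for i in range(1, n)
    )
    return path_length / ((n - 1) * frame_time)

# Three frames along the X axis, 2 units apart, 1 s per frame.
v = average_speed([(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)], 1.0)
```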
Step A2, obtaining the current action direction of the target tracking person from the acquired displacement information using formula (2):

$$\theta_i=\operatorname{atan2}\left(Y_i-Y_{i-1},\;X_i-X_{i-1}\right)\qquad(2)$$

In formula (2), θ_i represents the current action direction of the target tracking person when the first camera acquires the i-th frame of displacement information corresponding to the target tracking person. θ_i takes values in [-π, π]: when θ_i ≥ 0, the direction is obtained by rotating counter-clockwise through the angle |θ_i| from the positive half of the X axis of the two-dimensional rectangular coordinate system; when θ_i < 0, it is obtained by rotating clockwise through the angle |θ_i|.
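Step A2 can be sketched likewise. The original equation image does not survive, so rendering the direction as the standard atan2 heading is an assumption here, chosen because it matches the stated [-π, π] range and rotation convention; the names are illustrative.

```python
import math

def action_direction(prev_point, cur_point):
    """Heading theta_i of the motion from the (i-1)-th to the i-th
    frame position, in [-pi, pi] measured from the positive X axis;
    positive = counter-clockwise, negative = clockwise."""
    return math.atan2(cur_point[1] - prev_point[1],
                      cur_point[0] - prev_point[0])

theta = action_direction((0.0, 0.0), (1.0, 1.0))  # moving up-right
```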
Step A3, prejudging the position at which the target tracking person leaves the first camera according to the calculated average action speed V and current action direction θ_i of the target tracking person:

$$(X,Y)=\left(X_i+d\cos\theta_i,\;Y_i+d\sin\theta_i\right)\qquad(3)$$

where d is the smallest non-negative distance at which the ray starting from (X_i, Y_i) in the direction θ_i reaches the border of the monitoring area. In formula (3), (X, Y) represents the prejudged position coordinate point at which the target tracking person leaves the first camera, obtained from the i-th frame of displacement information, and (X_max, Y_max) represents the coordinate point of the upper right corner of the first camera monitoring area, so the border is that of the rectangle [0, X_max] × [0, Y_max].
and prejudging a position coordinate point of the target tracking person leaving the first camera according to the displacement information of the ith frame, setting a camera closest to the position coordinate point as a second camera, and carrying out corresponding adjustment according to the displacement information of each frame until the target tracking person leaves a monitoring area corresponding to the first camera.
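A hedged sketch of the exit-point prejudgment above (function names are illustrative): assuming the person keeps moving along the direction θ_i, the predicted exit point is where that ray first crosses the border of the rectangular monitoring area [0, X_max] × [0, Y_max].

```python
import math

def predict_exit_point(point, theta, x_max, y_max):
    """First point at which the ray from `point` in direction `theta`
    crosses the border of the monitoring area [0, x_max] x [0, y_max]."""
    x0, y0 = point
    dx, dy = math.cos(theta), math.sin(theta)
    # Travel distances to each boundary line the ray can actually reach.
    distances = []
    if dx > 0:
        distances.append((x_max - x0) / dx)
    elif dx < 0:
        distances.append(-x0 / dx)
    if dy > 0:
        distances.append((y_max - y0) / dy)
    elif dy < 0:
        distances.append(-y0 / dy)
    d = min(distances)  # nearest boundary hit
    # Clamp for floating-point safety.
    return (min(max(x0 + d * dx, 0.0), x_max),
            min(max(y0 + d * dy, 0.0), y_max))

# Person at the centre, heading along the positive X axis.
exit_xy = predict_exit_point((5.0, 5.0), 0.0, 10.0, 10.0)
```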
Further, the selecting a corresponding second camera according to the pre-judgment result, and calculating the starting time of the second camera for tracking the target tracking person includes:
calculating the opening time t of the second camera from the prejudged position coordinate point at which the target tracking person leaves the first camera and from the calculated average action speed of the target tracking person:

$$t=\frac{\sqrt{(X-X_i)^2+(Y-Y_i)^2}}{V}$$

where t indicates that the second camera is switched on after the duration t from the current moment, V represents the average action speed of the target tracking person, (X_i, Y_i) is the position coordinate point of the i-th frame of image information of the target tracking person acquired by the first camera, and (X, Y) is the position coordinate point, prejudged from the i-th frame of displacement information, at which the target tracking person leaves the first camera.
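The switch-on delay follows directly from the quantities defined above: the straight-line distance from the current position to the prejudged exit point, divided by the average speed V. A small sketch (names are illustrative, not from the patent):

```python
import math

def second_camera_delay(cur_point, exit_point, avg_speed):
    """Delay t before switching on the second camera: distance from
    the current position (X_i, Y_i) to the prejudged exit point
    (X, Y), divided by the average action speed V."""
    dist = math.hypot(exit_point[0] - cur_point[0],
                      exit_point[1] - cur_point[1])
    return dist / avg_speed

# 6 units to the exit point at 2 units/s.
t = second_camera_delay((4.0, 0.0), (10.0, 0.0), 2.0)
```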
Further, when a corresponding pedestrian is detected based on the second camera and it has been judged whether the pedestrian is the target tracking person, the method further includes:
if the pedestrian is judged not to be the target tracking person, judging whether the target tracking person leaves the park or not based on the position information of the first camera;
if the target tracking person does not leave the park, calling other cameras except the second camera to obtain camera information corresponding to the camera which detects the pedestrian, and continuously judging whether the detected pedestrian is the target tracking person or not until the target tracking person is found;
and if the target tracking personnel leaves the park, executing preset monitoring operation based on preset configuration information.
Further, if the target tracking person leaves the park, the executing of a preset monitoring operation based on preset configuration information includes:
if the target tracking personnel leaves the park, acquiring the importance level corresponding to the target tracking personnel;
and calling the preset configuration information matched with the importance level according to the importance level, and executing the corresponding monitoring operation.
In the method for cross-border tracking of people in a park according to the invention, a plurality of mutually associated cameras are used to detect the target tracking person in the corresponding video in the park, and feature information corresponding to the target tracking person is extracted; based on the extracted feature information, single-lens tracking of the target tracking person is performed in a single-camera tracking mode until the target tracking person leaves the monitoring area corresponding to the current first camera; when a corresponding pedestrian is detected based on a second camera, whether the pedestrian is the target tracking person is judged and, if so, the single-lens tracking continues until the target tracking person leaves the monitoring area corresponding to the park. The method solves the problem that the individual monitoring cameras in a park cannot work as a system, achieves cross-border tracking of people in the park, and improves the intelligence and convenience of cross-border tracking.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described below by means of the accompanying drawings and examples.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic workflow diagram of a method for cross-border tracking of people in a park according to an embodiment of the present invention.
FIG. 2 is a schematic workflow diagram of another embodiment of the method for cross-border tracking of people in a park according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The invention provides a method for cross-border tracking of people in a park, which solves the problem that the monitoring cameras deployed in the park cannot work as a system and achieves cross-border tracking of the target tracking person in the park.
FIG. 1 shows a schematic flow chart of a method for cross-border tracking of people in a park according to an embodiment of the present invention; the method can be implemented as steps S10-S30 described below.
Step S10: detecting the target tracking person in the corresponding video in the park by using the plurality of mutually associated cameras, and extracting the feature information corresponding to the target tracking person.
In the embodiment of the invention, a plurality of monitoring cameras are deployed in advance in the monitored area, i.e. the park. When the cameras are deployed, the serial number, the position information and the characteristic information of each camera are stored in a database of the monitoring system, so that the monitoring system can subsequently manage and coordinate the deployed cameras in a unified manner.
Since the monitoring range of a single camera is limited and monitoring blind areas exist, a plurality of mutually associated cameras can be used to perform the relevant monitoring operations when a target person is monitored. For the target tracking person, the monitoring system detects the person in the corresponding video in the park by using the plurality of mutually associated cameras and acquires the feature information corresponding to the target tracking person.
In the embodiment of the present invention, the feature information corresponding to the target tracking person includes, but is not limited to: face features, attribute features, gait features and the like.
Further, in an embodiment, the extracting of the feature information corresponding to the target tracking person may be implemented according to the following technical means:
according to the video information collected by the cameras, carrying out image analysis on the video information; and extracting feature information such as face features, attribute features, gait features and the like corresponding to the target tracking personnel according to the image analysis result.
Further, in an embodiment, the monitoring system detects the target tracking person in the corresponding video in the park by using a plurality of mutually associated cameras; this may be implemented as follows:
acquiring position information of cameras deployed in a park, and associating a plurality of related cameras based on the position information; and controlling the plurality of associated cameras to cooperatively work, and detecting target tracking personnel in the corresponding video in the park.
In the embodiment of the invention, the cameras are associated mainly on the basis of their deployed position information. Because each camera has a different monitoring range, and to avoid a key position in the park temporarily going unmonitored because of a camera failure or the like, several cameras can be deployed to cover a key position and associated with one another, for example by making the key area a monitoring overlap area of several cameras. Likewise, monitoring blind areas that may exist in the park can be covered by associating several cameras in the same way, so that the monitoring system, by controlling the associated cameras to work cooperatively, achieves the purpose of detecting the target tracking person in the corresponding video in the park.
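The position-based association of cameras can be sketched as follows. The distance-threshold criterion is an illustrative assumption, since the patent only states that related cameras are associated based on their position information, not how relatedness is decided.

```python
import math

def associate_cameras(cameras, radius):
    """Associate cameras whose deployed positions lie within `radius`
    of each other, approximating overlapping or adjacent coverage.

    cameras: dict of camera id -> (x, y) deployed position.
    Returns: dict of camera id -> list of associated camera ids.
    """
    associations = {cid: [] for cid in cameras}
    ids = sorted(cameras)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(cameras[a], cameras[b]) <= radius:
                associations[a].append(b)
                associations[b].append(a)
    return associations

# c1 and c2 are close enough to cooperate; c3 is isolated.
links = associate_cameras({"c1": (0, 0), "c2": (3, 4), "c3": (100, 100)}, 10.0)
```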
Step S20: based on the extracted feature information, performing single-lens tracking on the target tracking person in a single-camera tracking mode until the target tracking person leaves the monitoring area corresponding to the current first camera.
According to the extracted feature information of the target tracking person, and in order to save resources, the target tracking person is tracked by a single camera in single-lens tracking mode until the target tracking person leaves the monitoring area corresponding to the current first camera, i.e. until the target tracking person disappears from the view of the current first camera.
Step S30: when a corresponding pedestrian is detected based on the second camera, judging whether the pedestrian is the target tracking person and, if so, continuing the single-lens tracking of the target tracking person until the target tracking person leaves the monitoring area corresponding to the park.
Although the cameras in the park are deployed independently, some of them have a certain association relationship. When the monitoring system detects that the target tracking person has disappeared from the monitoring area corresponding to the first camera, it predicts the possible movement track of the target tracking person based on what the first camera observed of the person, and, according to the positional relationship of the deployed cameras, judges whether a pedestrian appearing in the second camera is the target tracking person. If the pedestrian is the target tracking person, the second camera continues tracking the person in single-lens tracking mode; the steps of identifying the target tracking person and tracking the person in single-lens mode are repeated until the target tracking person leaves the monitoring area corresponding to the park. Further, in the embodiment of the invention, whether a single camera has detected a pedestrian may be determined by a preset algorithm model.
Further, if the pedestrian detected by the second camera is not the target tracking person, whether the target tracking person has left the park is judged based on the position information of the first camera: the monitoring area of the first camera is obtained from its position information, and it is judged from that area whether the target tracking person has left the park.
If the target tracking person does not leave the park, calling other cameras except the second camera to acquire camera information corresponding to the camera which detects the pedestrian, and continuously judging whether the detected pedestrian is the target tracking person or not until the target tracking person is found.
If the target tracking person has left the park, a preset monitoring operation is executed based on preset configuration information. For example, according to the position information of the park, other cameras outside and adjacent to the park are located, their video information is acquired, and the target tracking person continues to be monitored; or, when the target tracking person is detected to have left the monitored park, the monitoring operation for that person is stopped.
Further, in one embodiment, the predetermined monitoring operation performed for the target tracking person leaving the campus may be determined according to the importance of the target tracking person.
When the target tracking person leaves the park, the execution of the preset monitoring operation based on the preset configuration information may be implemented as follows:
if the target tracking personnel leaves the park, acquiring the importance level corresponding to the target tracking personnel; and calling the preset configuration information matched with the importance level according to the importance level, and executing the corresponding monitoring operation.
For example, the importance level of the target tracking person is compared with a preset level threshold. If the importance level exceeds the preset level threshold, the target tracking person may continue to be monitored according to the monitoring method described in the above embodiment; if the importance level does not reach the preset level threshold, the monitoring operation is stopped once the target tracking person is detected to have left the monitored park.
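The level comparison above can be sketched as a simple dispatch. The level scale, the threshold value and the operation names are all illustrative assumptions; the patent specifies only the comparison itself.

```python
def monitoring_action(importance_level, level_threshold=3):
    """Preset monitoring operation once the person has left the park.
    Both the level scale and the threshold value are illustrative
    assumptions, not values from the patent."""
    if importance_level > level_threshold:
        # High-importance target: hand over to adjacent off-park cameras.
        return "continue_tracking_outside"
    # Otherwise stop monitoring once the person has left the park.
    return "stop_tracking"

action = monitoring_action(5)
```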
Further, in an embodiment, the determining whether the pedestrian is the target tracking person when the corresponding pedestrian is detected based on the second camera may be implemented according to the following technical means:
when a corresponding pedestrian is detected based on the second camera, collecting characteristic information corresponding to the pedestrian; comparing the collected characteristic information corresponding to the pedestrians with the characteristic information corresponding to the target tracking personnel to obtain a similarity threshold value of the two characteristic information; judging whether the similarity threshold reaches a preset threshold or not; if the similarity threshold reaches the preset threshold, judging that the pedestrian is the target tracking person; and if the similarity threshold is smaller than the preset threshold, judging that the pedestrian is not the target tracking person. The feature information described in the embodiment of the present invention includes, but is not limited to: face characteristics, attribute characteristics, gait characteristics and other characteristic information.
Further, in an embodiment, because face features can serve as the most decisive feature information for identifying a natural person, the comparison of the collected feature information of the pedestrian with the feature information of the target tracking person may be performed on the face features alone when obtaining the similarity score of the two. In the embodiment of the invention, this may be implemented as follows:
analyzing the collected characteristic information corresponding to the pedestrians and the characteristic information corresponding to the target tracking personnel, and respectively obtaining the face characteristics corresponding to the pedestrians and the face characteristics corresponding to the target tracking personnel; and comparing the similarity by taking the face features as comparison parameters to obtain a similarity threshold value of the pedestrian and the target tracking person.
Further, in an embodiment, considering that twins may exist or that other factors may cause a misjudgment when relying on face features alone, the face features, the attribute features and the gait features of the natural person may be considered together. In the embodiment of the invention, the similarity score of the collected feature information of the pedestrian and the feature information of the target tracking person may then be obtained as follows:
analyzing the collected feature information corresponding to the pedestrian and the feature information corresponding to the target tracking person, acquiring the face feature, the attribute feature and the gait feature corresponding to the pedestrian, and simultaneously acquiring the face feature, the attribute feature and the gait feature corresponding to the target tracking person.
The face features, the attribute features and the gait features are then compared group by group, yielding the face similarity corresponding to the face features, the attribute similarity corresponding to the attribute features and the gait similarity corresponding to the gait features.
Similarity weights corresponding respectively to the face similarity, the attribute similarity and the gait similarity are acquired.
Finally, the similarity score of the pedestrian and the target tracking person is calculated from the face similarity, the attribute similarity and the gait similarity together with their corresponding similarity weights.
Determining whether the pedestrian is the target tracking person by comparing similarity in this way improves the accuracy with which the target tracking person is identified.
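The weighted fusion described above can be sketched as follows. The weight values and the decision threshold are illustrative assumptions, since the patent states only that each similarity carries its own weight.

```python
def fused_similarity(face_sim, attr_sim, gait_sim, weights=(0.6, 0.2, 0.2)):
    """Weighted combination of the three per-feature similarities.
    The weight values are illustrative assumptions."""
    w_face, w_attr, w_gait = weights
    return w_face * face_sim + w_attr * attr_sim + w_gait * gait_sim

def is_target(face_sim, attr_sim, gait_sim, threshold=0.8):
    """Judge the pedestrian to be the target tracking person when the
    fused similarity score reaches the preset threshold (illustrative)."""
    return fused_similarity(face_sim, attr_sim, gait_sim) >= threshold

# Strong face and attribute match, weaker gait match.
match = is_target(0.95, 0.9, 0.7)
```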
The invention relates to a method for cross-border tracking of personnel in a park. A plurality of mutually associated cameras detect the target tracking person in the corresponding video of the park and extract the feature information corresponding to the target tracking person; based on the extracted feature information, single-lens tracking is performed on the target tracking person in a single-camera tracking mode until the target tracking person leaves the monitoring area corresponding to the current first camera; when a corresponding pedestrian is detected by a second camera, whether the pedestrian is the target tracking person is judged, and if so, single-lens tracking continues until the target tracking person leaves the monitoring area corresponding to the park. This solves the problem that the surveillance cameras in the park cannot work as a coordinated system, achieves the purpose of cross-border tracking of personnel in the park, and improves the intelligence and convenience of cross-border tracking.
Further, in one embodiment, the displacement information includes: a position coordinate point in the first camera monitoring area;
the position coordinate point acquisition mode comprises the following steps: establishing a two-dimensional rectangular coordinate system by taking the edge coordinate point of the lower left corner of the first camera monitoring area as the origin of coordinates, the lower edge of the first camera monitoring area as an X axis and the left edge of the first camera monitoring area as a Y axis; and determining a position coordinate point in the first camera monitoring area corresponding to the displacement information based on the established two-dimensional rectangular coordinate system.
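The coordinate system above places the origin at the lower-left corner of the monitoring area with the Y axis pointing up, while image pixels conventionally use a top-left origin with Y pointing down. A minimal Python sketch of that conversion (the function name and the assumption of a known frame height are ours, not the patent's):

```python
def to_monitor_coords(px, py, frame_height):
    """Map an image pixel (origin at the top-left corner, Y pointing down) to
    the patent's coordinate system (origin at the lower-left corner, Y up).
    `frame_height` is the pixel height of the frame -- an assumption of ours."""
    return (px, frame_height - py)
```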
Further, in an embodiment, the drawing an activity track corresponding to the target tracking person according to the obtained displacement information includes:
calculating the average action speed and the current action direction of the target tracking person according to the obtained displacement information, and prejudging the position and the leaving direction of the target tracking person leaving the first camera monitoring area;
and selecting a corresponding second camera according to the pre-judgment result, and calculating the starting time of the second camera for tracking the target tracking personnel, so that seamless connection can be realized when the camera is switched to track the target tracking personnel.
Further, in an embodiment, the calculating an average action speed and a current action direction of the target tracking person according to the acquired displacement information, and prejudging the position and leaving direction at which the target tracking person leaves the first camera monitoring area, includes steps A1-A4:
step A1, calculating the average action speed V of the target tracking person according to the obtained displacement information by using formula (1):

V = [ Σ_{i=2}^{n} √((X_i − X_{i−1})² + (Y_i − Y_{i−1})²) ] / [ (n − 1)·T ]    (1)

in formula (1), (X_i, Y_i) is the position coordinate point of the i-th frame of image information corresponding to the target tracking person acquired by the first camera; (X_{i−1}, Y_{i−1}) is the position coordinate point of the (i−1)-th frame of image information corresponding to the target tracking person acquired by the first camera; n represents the total number of image frames shot by the first camera from the time the target tracking person enters the monitoring area of the first camera to the present; T represents the time taken by the first camera to shoot each frame of image;
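The computation of formula (1) can be sketched in Python as follows; the function name and the list-of-points input format are illustrative assumptions:

```python
import math

def average_speed(points, frame_time):
    """Sketch of formula (1): total path length over elapsed time.
    `points` holds the per-frame position coordinate points (X_i, Y_i)
    and `frame_time` is T, the time to shoot one frame; needs >= 2 points."""
    n = len(points)
    path = sum(math.dist(points[i - 1], points[i]) for i in range(1, n))
    return path / ((n - 1) * frame_time)
```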
step A2, obtaining the current action direction of the target tracking person according to the obtained displacement information by using formula (2):

θ_i = arctan2(Y_i − Y_{i−1}, X_i − X_{i−1})    (2)

in formula (2), θ_i represents the current action direction of the target tracking person when the first camera collects the i-th frame of displacement information corresponding to the target tracking person; θ_i has a value range of [−π, π], that is, when θ_i ≥ 0 it represents a counterclockwise rotation by the angle |θ_i| from the positive half-axis direction of the X axis of the two-dimensional rectangular coordinate system, and when θ_i < 0 it represents a clockwise rotation by the angle |θ_i| from the positive half-axis direction of the X axis;
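Since θ_i must fall in [−π, π] with the sign convention just described, the two-argument arctangent matches it directly. A hedged Python sketch (the naming is ours):

```python
import math

def action_direction(prev, curr):
    """Sketch of formula (2): heading theta_i in [-pi, pi], measured from the
    positive X axis, counterclockwise positive -- exactly atan2's convention."""
    return math.atan2(curr[1] - prev[1], curr[0] - prev[0])
```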
step A3, prejudging the position at which the target tracking person leaves the first camera according to the calculated average action speed V and current action direction θ_i of the target tracking person, by using formula (3):

(X, Y) = (X_i + s·cos θ_i, Y_i + s·sin θ_i)    (3)

where s is the smallest non-negative distance along direction θ_i at which the ray starting from (X_i, Y_i) reaches the boundary of the rectangle [0, X_max] × [0, Y_max]; in formula (3), (X, Y) represents the position coordinate point, prejudged from the i-th frame displacement information, at which the target tracking person leaves the first camera; (X_max, Y_max) represents the coordinate point of the upper right corner corresponding to the monitoring area of the first camera;
step A4, prejudging the position coordinate point at which the target tracking person leaves the first camera according to the i-th frame displacement information, setting the camera closest to that position coordinate point as the second camera, and adjusting accordingly with each frame of displacement information until the target tracking person leaves the monitoring area corresponding to the first camera.
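The exit-point prediction and the choice of the nearest camera as the second camera can be sketched as follows; this is a minimal ray-rectangle intersection under the stated coordinate system, with illustrative function names and camera identifiers:

```python
import math

def predict_exit_point(pos, theta, x_max, y_max):
    """Sketch of formula (3): follow the ray from `pos` with heading `theta`
    until it reaches the boundary of the area [0, x_max] x [0, y_max].
    Assumes a nonzero heading vector (the person is moving)."""
    x, y = pos
    dx, dy = math.cos(theta), math.sin(theta)
    candidates = []
    if dx > 0:
        candidates.append((x_max - x) / dx)   # right edge
    elif dx < 0:
        candidates.append(-x / dx)            # left edge
    if dy > 0:
        candidates.append((y_max - y) / dy)   # top edge
    elif dy < 0:
        candidates.append(-y / dy)            # bottom edge
    s = min(c for c in candidates if c >= 0)  # first boundary hit
    return (x + s * dx, y + s * dy)

def nearest_camera(exit_point, cameras):
    """Set the camera closest to the predicted exit point as the second camera.
    `cameras` maps a camera id to its (x, y) position; ids are illustrative."""
    return min(cameras, key=lambda cid: math.dist(cameras[cid], exit_point))
```

Re-running both functions on every new frame gives the per-frame adjustment the step describes.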
Further, in an embodiment, the selecting a corresponding second camera according to the pre-judgment result and calculating the starting time of the second camera for tracking the target tracking person includes:
calculating the starting time t of the second camera according to the prejudged position coordinate point at which the target tracking person leaves the first camera and the calculated average action speed of the target tracking person, by using formula (4):

t = √((X − X_i)² + (Y − Y_i)²) / V    (4)

wherein t indicates that the second camera is started after a duration t from the current moment; V represents the average action speed of the target tracking person; (X_i, Y_i) is the position coordinate point of the i-th frame of image information corresponding to the target tracking person acquired by the first camera; and (X, Y) represents the position coordinate point, prejudged from the i-th frame displacement information, at which the target tracking person leaves the first camera.
Through the above steps and formulas, the coordinate point at which the target tracking person leaves the first camera's monitoring area can be predicted from the person's displacement information, so that the second camera can be prepared a period of time in advance and the switch between cameras achieves seamless connection.
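The start-time computation of formula (4) reduces to distance over speed; a minimal sketch (the function name is ours, and in practice the returned delay might be fed to a scheduler such as `threading.Timer`):

```python
import math

def second_camera_delay(curr_pos, exit_pos, avg_speed):
    """Sketch of formula (4): the second camera is started after
    t = (distance from the current point to the predicted exit point) / V."""
    return math.dist(curr_pos, exit_pos) / avg_speed
```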
The beneficial effects of the above technical scheme are as follows. Formula (1) in step A1 yields the average action speed of the target tracking person, so that the person's moving speed is known, laying a foundation for subsequently solving the starting time of the second camera. Formula (2) in step A2 then yields the current action direction of the target tracking person, from which the position at which the person will leave the first camera's monitoring area can be judged. Formula (3) in step A3 then prejudges the position at which the target tracking person leaves the current first camera, so that a second camera can be selected in advance, and formula (4) determines the starting time of the second camera, ensuring seamless connection when the cameras switch tracking. The whole tracking process is smooth, tracking efficiency is improved, and loss of the target tracking person is avoided.
Based on the embodiment of fig. 1, and as shown in fig. 2, fig. 2 is a schematic workflow diagram of another embodiment of the method for cross-border tracking of personnel in a park according to the present invention. After step S30 of the embodiment of fig. 1 (in which, when a corresponding pedestrian is detected by the second camera, it is determined whether the pedestrian is the target tracking person and, if so, single-lens tracking of the target tracking person continues until the target tracking person leaves the monitoring area corresponding to the park), the method further includes steps S40-S50.
And step S40, acquiring image information acquired by a plurality of cameras working cooperatively aiming at the target tracking personnel according to the characteristic information of the target tracking personnel.
And step S50, performing cross-border tracking on the target tracking personnel based on the image information corresponding to the target tracking personnel, and acquiring the activity track corresponding to the target tracking personnel.
In the embodiment of the invention, the monitoring system also has the function of drawing the activity track of the target tracking personnel. And collecting image information which is matched with the characteristic information and is acquired by a plurality of cameras working in cooperation aiming at the target tracking personnel according to the characteristic information of the target tracking personnel. Since the plurality of cameras working in cooperation may acquire image information corresponding to a plurality of different target tracking persons, the image information corresponding to the plurality of cameras working in cooperation is collected according to the feature information corresponding to the target tracking persons.
And carrying out image processing and image analysis on the collected image information to obtain position information and a moving direction corresponding to one target tracking person, and further obtaining a moving track corresponding to the target tracking person according to the position information and the moving direction corresponding to the target tracking person. Similarly, the image information of the target tracking person in the historical time period can be called according to the above method, so that the historical track of the target tracking person in a certain historical time period can be obtained.
Further, in an embodiment, the cross-border tracking is performed on the target tracking person based on the image information corresponding to the target tracking person to obtain the activity track corresponding to the target tracking person, which may also be implemented according to the following technical means:
acquiring displacement information corresponding to the image information of each frame of the target tracker according to the image information corresponding to the target tracker; and drawing the corresponding movement track of the target tracking person according to the obtained displacement information. The displacement information includes: movement position information and movement direction information.
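Assembling the activity track from per-frame displacement records can be sketched as follows; the record format with a timestamp field is an illustrative assumption:

```python
def build_track(frames):
    """Assemble the activity track as an ordered polyline from per-frame
    displacement records; the {"ts": ..., "pos": (x, y)} format is illustrative."""
    return [f["pos"] for f in sorted(frames, key=lambda f: f["ts"])]
```

The same routine applied to archived records yields the historical track of the target tracking person over a chosen time period.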
Furthermore, to make cross-border tracking of personnel in the park more intuitive and convenient, the activity track corresponding to the target tracking person can be drawn on a display screen and displayed for the user to view.
According to the embodiment of the invention, the image information acquired by a plurality of cameras working cooperatively aiming at the target tracking personnel is acquired according to the characteristic information of the target tracking personnel; based on the image information corresponding to the target tracking personnel, cross-border tracking is carried out on the target tracking personnel to obtain the movement track corresponding to the target tracking personnel, so that the beneficial effect of obtaining the movement track of the target tracking personnel is achieved; furthermore, the drawn activity track can be displayed through a display screen, and the intuitiveness and the convenience of acquiring the activity information of the target tracking personnel are improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for cross-border tracking of people on a campus, the method comprising:
detecting target tracking personnel in a corresponding video in a park by using a plurality of cameras which are mutually associated, and extracting characteristic information corresponding to the target tracking personnel;
based on the extracted feature information, performing single-lens tracking on the target tracking person in a single-camera tracking mode until the target tracking person leaves a monitoring area corresponding to the current first camera;
when a corresponding pedestrian is detected based on the second camera, judging whether the pedestrian is the target tracking person, and if so, continuing to perform single-lens tracking on the target tracking person until the target tracking person leaves the monitoring area corresponding to the park.
2. The method of claim 1, wherein detecting target tracking people in a corresponding video of a campus using a plurality of cameras associated with each other comprises:
acquiring position information of cameras deployed in a park, and associating a plurality of related cameras based on the position information;
and controlling the plurality of associated cameras to cooperatively work, and detecting target tracking personnel in the corresponding video in the park.
3. The method for cross-border tracking of personnel in a park of claim 2, wherein said continuing single-lens tracking of said target tracking person until said target tracking person leaves the monitoring area corresponding to said park further comprises:
acquiring image information acquired by a plurality of cameras working cooperatively aiming at the target tracking personnel according to the characteristic information of the target tracking personnel;
and performing cross-border tracking on the target tracking personnel based on the image information corresponding to the target tracking personnel to obtain the activity track corresponding to the target tracking personnel.
4. The method for cross-border tracking of people on a campus of claim 3, wherein the cross-border tracking of the target-tracking person based on the image information corresponding to the target-tracking person to obtain the activity track corresponding to the target-tracking person comprises:
acquiring displacement information corresponding to the image information of each frame of the target tracker according to the image information corresponding to the target tracker;
and drawing the corresponding movement track of the target tracking person according to the obtained displacement information.
5. The method for cross-border tracking of people on a campus of claim 4 wherein said displacement information includes: a position coordinate point in the first camera monitoring area;
the position coordinate point acquisition mode comprises the following steps: establishing a two-dimensional rectangular coordinate system by taking the edge coordinate point of the lower left corner of the first camera monitoring area as the origin of coordinates, the lower edge of the first camera monitoring area as an X axis and the left edge of the first camera monitoring area as a Y axis; and determining a position coordinate point in the first camera monitoring area corresponding to the displacement information based on the established two-dimensional rectangular coordinate system.
6. The method for tracking people across borders on a campus of claim 5, wherein said step of drawing the corresponding activity track of the target tracking person according to the obtained displacement information comprises:
calculating the average action speed and the current action direction of the target tracking person according to the obtained displacement information, and prejudging the position and the leaving direction of the target tracking person leaving the first camera monitoring area;
and selecting a corresponding second camera according to the pre-judgment result, and calculating the starting time of the second camera for tracking the target tracking personnel, so that seamless connection can be realized when the camera is switched to track the target tracking personnel.
7. The method for tracking people across borders on a campus of claim 6, wherein said calculating the average action speed and current action direction of said target tracking people according to said obtained displacement information, and prejudging the position and leaving direction of said target tracking people leaving said first camera monitoring area comprises steps of A1-A4:
step A1, calculating the average action speed V of the target tracking person according to the obtained displacement information by using formula (1):

V = [ Σ_{i=2}^{n} √((X_i − X_{i−1})² + (Y_i − Y_{i−1})²) ] / [ (n − 1)·T ]    (1)

in formula (1), (X_i, Y_i) is the position coordinate point of the i-th frame of image information corresponding to the target tracking person acquired by the first camera; (X_{i−1}, Y_{i−1}) is the position coordinate point of the (i−1)-th frame of image information corresponding to the target tracking person acquired by the first camera; n represents the total number of image frames shot by the first camera from the time the target tracking person enters the monitoring area of the first camera to the present; T represents the time taken by the first camera to shoot each frame of image;
step A2, obtaining the current action direction of the target tracking person according to the obtained displacement information by using formula (2):

θ_i = arctan2(Y_i − Y_{i−1}, X_i − X_{i−1})    (2)

in formula (2), θ_i represents the current action direction of the target tracking person when the first camera collects the i-th frame of displacement information corresponding to the target tracking person; θ_i has a value range of [−π, π], that is, when θ_i ≥ 0 it represents a counterclockwise rotation by the angle |θ_i| from the positive half-axis direction of the X axis of the two-dimensional rectangular coordinate system, and when θ_i < 0 it represents a clockwise rotation by the angle |θ_i| from the positive half-axis direction of the X axis;
step A3, prejudging the position at which the target tracking person leaves the first camera according to the calculated average action speed V and current action direction θ_i of the target tracking person, by using formula (3):

(X, Y) = (X_i + s·cos θ_i, Y_i + s·sin θ_i)    (3)

where s is the smallest non-negative distance along direction θ_i at which the ray starting from (X_i, Y_i) reaches the boundary of the rectangle [0, X_max] × [0, Y_max]; in formula (3), (X, Y) represents the position coordinate point, prejudged from the i-th frame displacement information, at which the target tracking person leaves the first camera; (X_max, Y_max) represents the coordinate point of the upper right corner corresponding to the monitoring area of the first camera;
step A4, prejudging the position coordinate point at which the target tracking person leaves the first camera according to the i-th frame displacement information, setting the camera closest to that position coordinate point as the second camera, and adjusting accordingly with each frame of displacement information until the target tracking person leaves the monitoring area corresponding to the first camera.
8. The method according to claim 7, wherein the selecting a corresponding second camera according to the pre-judgment result and calculating the starting time of the second camera for tracking the target tracking person comprises:
calculating the starting time t of the second camera according to the prejudged position coordinate point at which the target tracking person leaves the first camera and the calculated average action speed of the target tracking person, by using formula (4):

t = √((X − X_i)² + (Y − Y_i)²) / V    (4)

wherein t indicates that the second camera is started after a duration t from the current moment; V represents the average action speed of the target tracking person; (X_i, Y_i) is the position coordinate point of the i-th frame of image information corresponding to the target tracking person acquired by the first camera; and (X, Y) represents the position coordinate point, prejudged from the i-th frame displacement information, at which the target tracking person leaves the first camera.
9. The method for people cross-border tracking on a campus of any one of claims 1 to 8, wherein the determining whether the pedestrian is the target tracking person when the corresponding pedestrian is detected based on the second camera further comprises:
if the pedestrian is judged not to be the target tracking person, judging whether the target tracking person leaves the park or not based on the position information of the first camera;
if the target tracking person does not leave the park, calling other cameras except the second camera to obtain camera information corresponding to the camera which detects the pedestrian, and continuously judging whether the detected pedestrian is the target tracking person or not until the target tracking person is found;
and if the target tracking personnel leaves the park, executing preset monitoring operation based on preset configuration information.
10. The method of claim 9, wherein if the target tracking person leaves the campus, performing a predetermined monitoring operation based on predetermined configuration information comprises:
if the target tracking personnel leaves the park, acquiring the importance level corresponding to the target tracking personnel;
and calling the preset configuration information matched with the importance level according to the importance level, and executing the corresponding monitoring operation.
CN202011119432.6A 2020-10-19 2020-10-19 Method for tracing cross-border of personnel in park Active CN112365522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011119432.6A CN112365522B (en) 2020-10-19 2020-10-19 Method for tracing cross-border of personnel in park


Publications (2)

Publication Number Publication Date
CN112365522A true CN112365522A (en) 2021-02-12
CN112365522B CN112365522B (en) 2024-06-14

Family

ID=74507355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011119432.6A Active CN112365522B (en) 2020-10-19 2020-10-19 Method for tracing cross-border of personnel in park

Country Status (1)

Country Link
CN (1) CN112365522B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438450A (en) * 2021-06-11 2021-09-24 深圳市大工创新技术有限公司 Dynamic target tracking monitoring method, monitoring system, electronic device and storage medium
CN113592910A (en) * 2021-07-29 2021-11-02 浙江大华技术股份有限公司 Cross-camera tracking method and device
CN113610890A (en) * 2021-07-02 2021-11-05 河南橡树智能科技有限公司 Real-time personnel track two-dimensional plane display method based on multiple cameras
CN113837023A (en) * 2021-09-02 2021-12-24 北京新橙智慧科技发展有限公司 Cross-camera pedestrian automatic tracking method
CN114463385A (en) * 2022-01-12 2022-05-10 平安科技(深圳)有限公司 Target tracking method, device, equipment and medium based on gun-ball linkage system
CN114500952A (en) * 2022-02-14 2022-05-13 深圳市中壬速客信息技术有限公司 Control method, device and equipment for dynamic monitoring of park and computer storage medium
CN114613090A (en) * 2022-01-06 2022-06-10 湖南省正邦建设工程有限公司 Informationized monitoring method and construction method for asphalt concrete pavement
CN116503814A (en) * 2023-05-24 2023-07-28 北京安录国际技术有限公司 Personnel tracking method and system for analysis

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096577A (en) * 2016-06-24 2016-11-09 安徽工业大学 Target tracking system in a kind of photographic head distribution map and method for tracing
CN106295594A (en) * 2016-08-17 2017-01-04 北京大学 A kind of based on dynamic route tree across photographic head method for tracking target and device
CN106709436A (en) * 2016-12-08 2017-05-24 华中师范大学 Cross-camera suspicious pedestrian target tracking system for rail transit panoramic monitoring
CN110706259A (en) * 2019-10-12 2020-01-17 四川航天神坤科技有限公司 Space constraint-based cross-shot tracking method and device for suspicious people
WO2020035080A1 (en) * 2018-08-13 2020-02-20 深圳市冠旭电子股份有限公司 Tracking and shooting method and apparatus, and terminal device
CN110866480A (en) * 2019-11-07 2020-03-06 浙江大华技术股份有限公司 Object tracking method and device, storage medium and electronic device
CN111008993A (en) * 2019-12-06 2020-04-14 江西洪都航空工业集团有限责任公司 Method and device for tracking pedestrian across mirrors



Also Published As

Publication number Publication date
CN112365522B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN112365522B (en) Method for tracing cross-border of personnel in park
CN109657575B (en) Intelligent video tracking algorithm for outdoor constructors
CN105979210B (en) A pedestrian identification system based on arrays of multiple gun and dome cameras
KR101423916B1 (en) Method and apparatus for recognizing the plural number of faces
CN101699862B (en) Acquisition method of high-resolution region-of-interest image of PTZ camera
CN104217428B (en) A kind of fusion feature matching and the video monitoring multi-object tracking method of data correlation
CN109446942A (en) Method for tracking target, device and system
CN105844659B (en) The tracking and device of moving component
WO2014155979A1 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
CN106331511A (en) Method and device of tracking shoot by intelligent terminal
CN110044486A (en) Method, apparatus, the equipment of repetition of alarms are avoided for human body inspection and quarantine system
CN110610150B (en) Tracking method, device, computing equipment and medium of target moving object
CN103049734A (en) Method and system for finding person in public place
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN102970517B (en) Based on the autonomous control method of platform-lens of abnormal sight identification
CN110097586A (en) A kind of Face datection method for tracing and device
CN111242025A (en) Action real-time monitoring method based on YOLO
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN109118516A (en) A kind of target is from moving to static tracking and device
CN111325133A (en) Image processing system based on artificial intelligence recognition
CN112633157A (en) AGV working area safety real-time detection method and system
CN110026982A (en) Robot servo system
CN110443134B (en) Face recognition tracking system based on video stream and working method
Zhang et al. What makes for good multiple object trackers?
CN112784813A (en) Motion recognition data set generation method and device based on image detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant