CN103024350B - Master-slave tracking method for a binocular PTZ vision system and system applying the method - Google Patents

Master-slave tracking method for a binocular PTZ vision system and system applying the method

Info

Publication number
CN103024350B
CN103024350B · CN201210454674.XA · CN201210454674A
Authority
CN
China
Prior art keywords
camera
video camera
image
pan
tilt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210454674.XA
Other languages
Chinese (zh)
Other versions
CN103024350A (en)
Inventor
周杰
崔智高
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201210454674.XA priority Critical patent/CN103024350B/en
Publication of CN103024350A publication Critical patent/CN103024350A/en
Application granted granted Critical
Publication of CN103024350B publication Critical patent/CN103024350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present invention proposes a master-slave tracking method for a binocular PTZ vision system and a system applying the method. The master-slave tracking method comprises the steps of: calibrating the two PTZ cameras separately to obtain the camera coordinate system models; establishing spherical coordinate systems in which corresponding points on the two PTZ camera coordinate systems share the same longitude while the latitude difference expresses the viewing-angle difference, and computing a transformation matrix for each of the two PTZ camera coordinate systems; selecting a master camera and a slave camera, selecting a tracking target on the observed image of the master camera, and, with the master camera at arbitrary pan-tilt-zoom parameters (P_M, T_M, Z_M), estimating the slave camera's pan-tilt-zoom parameters from the target's motion trajectory on the observed image I_M; and outputting a high-resolution panoramic image. The master-slave tracking method of the present invention can realize surveillance over a large-scale scene while reducing hardware overhead. The two PTZ cameras of the master-slave tracking system of the present invention are designed symmetrically, their roles can be exchanged according to the task, the system is highly flexible, and later information fusion is facilitated.

Description

Master-slave tracking method for a binocular PTZ vision system and system applying the method
Technical field
The present invention relates to the field of intelligent surveillance in computer vision, and in particular to a master-slave tracking system using a binocular PTZ camera vision system and its tracking method.
Background art
With the pressing worldwide demand for public safety and military security, intelligent visual surveillance technology has received growing attention. In early applications the degree of intelligence was very low: the security function relied mainly on people judging events in the surveillance video, so both reliability and automation were poor. With the rapid development of computing performance and the continuous refinement of computer vision theory, intelligent visual surveillance systems have developed greatly; their prospects are broad and they have attracted the close attention of many scholars and engineers.
A core task of an intelligent visual surveillance system is to track targets of interest in real time. Traditional intelligent visual surveillance systems mostly use static cameras; because the field of view is fixed and the resolution is single-valued, high-resolution images of the tracking target cannot be obtained, which complicates later retrieval and evidence collection. With improving hardware, active tracking systems based on a monocular PTZ (pan/tilt/zoom) camera have been widely studied and applied; such systems can keep the target at the image center at a relatively large scale, but the narrow field of view loses the panoramic information, making it difficult to obtain the target's position in the scene intuitively.
To address these defects, multi-camera vision systems containing PTZ cameras have become a research focus in intelligent surveillance. These systems generally operate in master-slave mode: the master camera tracks the target in the panorama and controls the PTZ camera to actively track it. Existing master-slave systems include: a static camera combined with a PTZ camera, whose main problem is that the surveillance field of view is limited to that of the static camera and cannot adapt to large scenes; multiple static cameras combined with a PTZ camera, which expands the surveillance range but greatly increases the hardware overhead; and an omnidirectional camera combined with a PTZ camera, which is generally applied to indoor scenes and, because the omnidirectional camera's resolution is low, makes information fusion between the two cameras difficult. Therefore, a visual tracking method that can realize surveillance over a large-scale scene while reducing hardware overhead is an urgent technical problem.
Summary of the invention
The present invention aims to solve at least the technical problems existing in the prior art, and in particular proposes a master-slave tracking system of a binocular PTZ vision system and its tracking method.
To achieve the above purpose, according to one aspect of the present invention, a master-slave tracking method of a binocular PTZ vision system is provided, comprising the following steps:
S1: calibrate the two PTZ cameras separately and obtain the camera coordinate system models;
S2: establish spherical coordinate systems such that corresponding points on the two PTZ camera coordinate systems have the same longitude on the spheres while the latitude difference expresses the viewing-angle difference, and compute a transformation matrix for each of the two PTZ camera coordinate systems;
S3: select a master camera and a slave camera, and select a tracking target on the observed image of the master camera; with the master camera at arbitrary pan-tilt-zoom parameters (P_M, T_M, Z_M), estimate the slave camera's pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t), t = 1, ..., n, from the tracking target's motion trajectory on the observed image I_M;
S4: output a high-resolution panoramic image.
The master-slave tracking method of the binocular PTZ vision system of the present invention can realize surveillance over a large-scale scene while reducing hardware overhead.
In a preferred embodiment of the present invention, either of the two cameras serves as the master camera and the other as the slave camera.
In another preferred embodiment of the present invention, the two cameras patrol the surveillance scene separately according to the surveillance task; once either camera finds a suspicious target, it tracks the target as the master camera using a static-camera tracking method, the system enters master-slave mode, and the other camera is controlled as the slave camera to actively track the suspicious target.
In large-scale scene surveillance, the present invention realizes master-slave tracking under arbitrary parameters of the two cameras and obtains high-resolution panoramic images of the tracking target. Because different surveillance scenes or surveillance tasks in practice correspond to different pan-tilt-zoom parameters, master-slave tracking under arbitrary parameters has great practical value and better generality: the present invention only requires the two cameras to be installed and fixed, conveniently handles changes of the two cameras' parameters, and is unaffected by such changes. Meanwhile, the present invention outputs a high-resolution panorama as the result; while displaying a high-resolution target image, it also displays the target's motion information in the whole surveillance scene more intuitively, which enhances visibility and practicality and can be used for later behavior analysis, gesture recognition, gait analysis and so on of the target.
To achieve the above purpose, according to a second aspect of the present invention, a master-slave tracking system of a binocular PTZ vision system is provided, comprising a host, a first camera and a second camera, the first camera and the second camera each being connected to the host, either of the two cameras serving as the master camera and the other as the slave camera; an image acquisition module that exchanges information with the first camera and the second camera respectively; and a first camera control module and a second camera control module, the first camera control module controlling the operation and observation mode of the first camera and the second camera control module controlling the operation and observation mode of the second camera. The first camera and the second camera each further comprise a target tracking module, a target prediction module, a spherical coordinate model module and a PTZ parameter module. In the master camera, the target tracking module tracks the suspicious target and transmits the tracking result to the target prediction module; the target prediction module, connected with the slave camera's spherical coordinate model module, predicts the position of the suspicious target and transmits the prediction to the slave camera; the slave camera's spherical coordinate model module converts the coordinates and transmits the result to the PTZ parameter module, which computes the PTZ parameters; and the PTZ parameter module transmits the computed PTZ parameters to the corresponding camera control module.
The two PTZ cameras of the master-slave tracking system of the binocular PTZ vision system of the present invention are designed symmetrically; their roles can be exchanged according to the task, the system is highly flexible, and later information fusion is facilitated.
Additional aspects and advantages of the present invention will be given in part in the following description, will in part become obvious from the description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become obvious and easy to understand from the following description of the embodiments in combination with the accompanying drawings, in which:
Fig. 1 is a structural schematic diagram of the master-slave tracking system of the binocular PTZ vision system of the present invention;
Fig. 2 is a schematic diagram of the geometric relationship between the PTZ parameters of the two cameras of the present invention;
Fig. 3 is a flow chart of estimating the slave camera parameters in the present invention;
Fig. 4 is a schematic diagram of estimating the slave camera latitude range from the scene depth range;
Fig. 5 is a schematic diagram of pan-tilt parameter estimation;
Fig. 6 shows master-slave tracking and high-resolution panorama results in a first preferred embodiment of the invention;
Fig. 7 shows master-slave tracking and high-resolution panorama results in a second preferred embodiment of the invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numbers throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and shall not be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms "installed", "linked" and "connected" should be interpreted broadly; for example, a connection may be mechanical or electrical, may be internal to two elements, and may be direct or indirect through an intermediary. For those of ordinary skill in the art, the specific meanings of the above terms can be understood according to the circumstances.
The present invention provides a master-slave tracking system of a binocular PTZ vision system. As shown in Fig. 1, it comprises a host (not shown in the figure), a first camera and a second camera, the first camera and the second camera each being connected to the host, either of the two cameras serving as the master camera and the other as the slave camera. The system also comprises an image acquisition module, a first camera control module and a second camera control module; the image acquisition module exchanges information with the first camera and the second camera respectively, the first camera control module controls the operation and observation mode of the first camera, and the second camera control module controls the operation and observation mode of the second camera. The first camera and the second camera each further comprise a target tracking module, a target prediction module, a spherical coordinate model module and a PTZ parameter module. In the master camera, the target tracking module tracks the suspicious target and transmits the tracking result to the target prediction module; the target prediction module, connected with the slave camera's spherical coordinate model module, predicts the position of the suspicious target and transmits the prediction to the slave camera; the slave camera's spherical coordinate model module converts the coordinates and transmits the result to the PTZ parameter module, which computes the PTZ parameters; and the PTZ parameter module transmits the computed PTZ parameters to the corresponding camera control module.
The two cameras of the present invention patrol the surveillance scene separately according to the surveillance task; once either camera finds a suspicious target, it tracks the target as the master camera using a static-camera tracking method, the system enters master-slave mode, and the other camera is controlled as the slave camera to actively track the suspicious target.
The present invention adopts binocular PTZ cameras to realize master-slave tracking. This hardware combination is chosen because the PTZ camera is the simplest active camera: it can change the viewing angle by changing the pan and tilt parameters and change the image resolution by changing the zoom parameter; it is highly integrated, mature products exist, and with decreasing hardware cost it is used more and more in practical applications. In addition, with two PTZ cameras, the symmetry of the two cameras allows their roles to be exchanged according to the task, which gives strong flexibility and facilitates later information fusion. Meanwhile, owing to the changeability and controllability of the camera parameters, the surveillance field of view can be switched by switching the pan-tilt-zoom parameters, achieving a larger surveillance field of view with the fewest cameras.
The present invention also provides a master-slave tracking method of a binocular PTZ vision system, which may, but need not, use the tracking system shown in Fig. 1 of the present invention. The master-slave tracking method comprises the following steps:
S1: calibrate the two PTZ cameras separately and obtain the camera coordinate system models;
S2: establish spherical coordinate systems such that corresponding points on the two PTZ camera coordinate systems have the same longitude on the spheres while the latitude difference expresses the viewing-angle difference, and compute a transformation matrix for each of the two PTZ camera coordinate systems;
S3: select a master camera and a slave camera, and select a tracking target on the observed image of the master camera; with the master camera at arbitrary pan-tilt-zoom parameters (P_M, T_M, Z_M), estimate the slave camera's pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t), t = 1, ..., n, from the tracking target's motion trajectory on the observed image I_M;
S4: output a high-resolution panoramic image.
In a preferred embodiment of the present invention, the following steps are carried out in sequence on the host:
First step: calibrate the two PTZ cameras separately and obtain the camera coordinate system models. In the present embodiment, a feature-point-matching method is used to calibrate the extrinsic matrix R and the intrinsic matrix K of each PTZ camera, where the extrinsic matrix R is related only to the pan and tilt parameters, and the rotation axes of the pan and tilt parameters intersect perpendicularly at the camera center. The steps for calibrating the intrinsic matrix K are:
First, set the pixel aspect ratio a_r and the skew S of the camera; in the present embodiment a_r = 1 and S = 0 are taken, and image distortion is not considered.
Then, use the zoom center in place of the principal point (u_0, v_0): keep the pan and tilt parameters of the camera constant, obtain an image sequence by changing the zoom parameter, extract SIFT feature points from every image, match the feature points of adjacent frames, and finally obtain the zoom center by least squares.
Finally, construct the model of the focal length f as a function of the zoom parameter: for each fixed zoom value, obtain several images by small changes of the pan and tilt parameters, estimate the focal length f by image registration, and use the resulting groups of discrete samples {zoom, f} to construct the model f = f_z(zoom); a fitting sketch follows.
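For illustration, the focal-length model can be fitted from the discrete samples {zoom, f} with an ordinary polynomial fit. The following Python sketch is an assumption for illustration only; the patent does not prescribe the fitting function or the polynomial degree, and the helper name fit_focal_model is hypothetical:

    import numpy as np

    # Sketch: build the focal-length-vs-zoom model f = f_z(zoom) from the discrete
    # samples {zoom, f} estimated by image registration. The polynomial degree is an
    # assumed, illustrative choice; any smooth interpolant could be used instead.
    def fit_focal_model(zoom_samples, f_samples, degree=3):
        coeffs = np.polyfit(zoom_samples, f_samples, degree)
        return np.poly1d(coeffs)   # callable model: f = f_z(zoom)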
Through the above steps, the coordinate system model of the PTZ camera can be expressed as:
$$\tilde{x} = \kappa K(zoom) R(pan, tilt) X = \kappa \begin{bmatrix} f_z(zoom) & 0 & u_0 \\ 0 & f_z(zoom) & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(pan) & 0 & \sin(pan) \\ -\sin(pan)\sin(tilt) & \cos(tilt) & \cos(pan)\sin(tilt) \\ -\sin(pan)\cos(tilt) & -\sin(tilt) & \cos(pan)\cos(tilt) \end{bmatrix} X$$
where κ is a scale factor, x̃ denotes the homogeneous image coordinates, and X denotes the coordinates in the corresponding camera coordinate system.
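As a numerical illustration of this model, the following Python sketch projects a point X from the camera coordinate system to pixel coordinates under given pan-tilt-zoom parameters; the function name ptz_camera_model and the callable focal-length model f_z are illustrative assumptions, not part of the patent disclosure:

    import numpy as np

    def ptz_camera_model(pan, tilt, zoom, u0, v0, f_z, X):
        # x~ = kappa * K(zoom) * R(pan, tilt) * X, with angles in radians.
        f = f_z(zoom)
        K = np.array([[f, 0.0, u0],
                      [0.0, f, v0],
                      [0.0, 0.0, 1.0]])
        cp, sp = np.cos(pan), np.sin(pan)
        ct, st = np.cos(tilt), np.sin(tilt)
        R = np.array([[cp,       0.0,  sp],
                      [-sp * st, ct,   cp * st],
                      [-sp * ct, -st,  cp * ct]])
        x = K @ R @ np.asarray(X, dtype=float)
        return x[:2] / x[2]   # pixel coordinates; dividing by x[2] removes the scale factor kappa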
Second step: establish the spherical coordinate systems such that corresponding points on the two PTZ camera coordinate systems have the same longitude on the spheres while the latitude difference expresses the viewing-angle difference, and compute a transformation matrix for each of the two PTZ camera coordinate systems. The concrete steps are as follows:
First, manually collect N image pairs of the two cameras in the same surveillance scene; the fields of view of each image pair should be roughly consistent so as to obtain more matched feature point pairs, and the collected images should cover the whole surveillance scene as far as possible.
Then, extract and match SIFT feature points for each image pair, estimate the fundamental matrix F_j (j = 1, ..., N) of each pair by the RANSAC method, and record the matched feature point pairs, where j indexes the j-th of the N image pairs and k indexes the k-th matched feature point pair of the j-th image pair; a per-pair sketch follows.
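For each image pair, the SIFT extraction, matching and RANSAC fundamental-matrix estimation described above might look as follows in Python with OpenCV; the ratio-test threshold and the helper name pairwise_fundamental are assumptions added for illustration:

    import cv2
    import numpy as np

    def pairwise_fundamental(img1, img2):
        # Extract and match SIFT feature points, then estimate the fundamental matrix
        # F_j of the pair by RANSAC, returning F_j and the matched inlier point pairs.
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # ratio test (assumed value)
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC)
        inliers = mask.ravel().astype(bool)
        return F, p1[inliers], p2[inliers]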
Next, estimate the epipole pair {E_1, E_2} of the spherical coordinate systems. By the epipolar constraint, F_j^(1) E_1 = E_2^T F_j^(2) = 0. Define A_1 = [(F_1^(1))^T, (F_2^(1))^T, ..., (F_N^(1))^T]^T and A_2 = [F_1^(2), F_2^(2), ..., F_N^(2)]; then {E_1, E_2} are obtained by solving A_1 E_1 = 0 and E_2^T A_2 = 0 with singular value decomposition.
Subsequently, estimate the reference prime meridians of the spherical coordinate systems. Using the camera imaging model obtained in the first step, transform the matched feature point pairs of every image pair into the respective camera coordinate systems; select an arbitrary meridian on each camera's sphere as a provisional reference prime meridian, and, under the spherical coordinate systems determined by the estimated epipoles, convert the matched points into longitude-latitude coordinates. Compute the absolute deviation of the longitude components of all feature point pairs and adjust one of the reference meridians until the mean deviation is minimal, which yields the final reference prime meridians n_1_ref and n_2_ref.
Finally, let Cross_E1_n1_ref = E_1 × n_1_ref and Cross_E2_n2_ref = E_2 × n_2_ref; the transformation matrices are then R_1 = [E_1  n_1_ref  Cross_E1_n1_ref] and R_2 = [E_2  n_2_ref  Cross_E2_n2_ref].
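A minimal numerical sketch of the epipole estimation and of building R_1 and R_2 is given below, assuming the N fundamental matrices have already been estimated; the stacking convention follows the definitions of A_1 and A_2 above, and the function names are illustrative assumptions:

    import numpy as np

    def estimate_epipoles(F_list):
        # Stack the fundamental matrices and take the null vectors by SVD:
        # A1 * E1 = 0 (stacked F_j E1 = 0) and F_j^T * E2 = 0, i.e. E2^T F_j = 0.
        A1 = np.vstack(F_list)
        A2 = np.vstack([F.T for F in F_list])
        E1 = np.linalg.svd(A1)[2][-1]
        E2 = np.linalg.svd(A2)[2][-1]
        return E1 / np.linalg.norm(E1), E2 / np.linalg.norm(E2)

    def camera_to_sphere_matrix(E, n_ref):
        # R = [E, n_ref, E x n_ref]: transformation from camera coordinates to the
        # spherical coordinate system defined by the epipole and the reference prime meridian.
        return np.column_stack([E, n_ref, np.cross(E, n_ref)])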
Third step: select a master camera and a slave camera, and select a tracking target on the observed image of the master camera; with the master camera at arbitrary pan-tilt-zoom parameters (P_M, T_M, Z_M), estimate the slave camera's pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t), t = 1, ..., n, from the tracking target's motion trajectory on the observed image I_M.
Since the first and second steps can generally be carried out offline, the estimated transformation matrices R_1 and R_2 no longer change as long as the two cameras remain fixed after installation. In actual on-site application, because of the symmetry of the two cameras, either one can act as the master camera: when one of the two cameras serves as the master camera, the other serves as the slave camera. The two cameras patrol the surveillance scene separately according to the surveillance task; once either camera finds a suspicious target, it tracks the target as the master camera using a static-camera tracking method, the system enters master-slave mode, and the other camera is controlled as the slave camera to actively track the suspicious target. For convenience of description, the camera acting as the master camera is denoted Cam-M and the slave camera Cam-S; let R_M be the transformation matrix from Cam-M's camera coordinate system to its spherical coordinate system and R_S the transformation matrix from Cam-S's camera coordinate system to its spherical coordinate system. The geometric relationship of the two cameras is shown in Fig. 2. Let I_M denote the image observed by the master camera Cam-M at parameters (P_M, T_M, Z_M), consider the observation of the tracking target P on image I_M at time t, and let (P_S^t, T_S^t, Z_S^t) denote the parameters that place the tracking target P at the center of Cam-S's observed image at time t. The concrete operation of the third step is as follows:
First, select the tracking target on the observed image I_M of the master camera Cam-M and track it frame by frame with the Mean-shift tracking algorithm provided by OpenCV. To offset the delay error of image processing and camera mechanical motion, the Kalman filtering method provided by OpenCV is used to predict the target position, and the predicted value of the tracking target center at time t is used in the following steps; a sketch of this tracking loop follows.
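A compact sketch of that tracking loop with the OpenCV Mean-shift and Kalman filter implementations is shown below; the histogram back-projection set-up, the noise covariance and the helper name track_master are assumed, illustrative choices rather than values fixed by the patent:

    import cv2
    import numpy as np

    def track_master(frames, init_box):
        # frames: list of BGR images from the master camera; init_box: (x, y, w, h) of the selected target.
        x, y, w, h = init_box
        roi = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

        kf = cv2.KalmanFilter(4, 2)                      # state (cx, cy, vx, vy), measurement (cx, cy)
        kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                        [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)

        box = tuple(init_box)
        for frame in frames:
            bp = cv2.calcBackProject([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)], [0], hist, [0, 180], 1)
            _, box = cv2.meanShift(bp, box, term)        # Mean-shift tracking, frame by frame
            cx, cy = box[0] + box[2] / 2.0, box[1] + box[3] / 2.0
            kf.correct(np.array([[cx], [cy]], np.float32))
            pred = kf.predict()                          # predicted target centre, compensating processing delay
            yield float(pred[0, 0]), float(pred[1, 0])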
Then, estimate the parameter values (P_S^t, T_S^t, Z_S^t) of the slave camera Cam-S at each moment, as shown in Fig. 3. The steps are as follows:
Step 1: transform the observed-image coordinates of the master camera Cam-M to camera coordinates. Let p̃_{M→P}^t be the image coordinates corresponding to the predicted value of the tracking target at time t. Using the monocular PTZ camera coordinate system model obtained in the first step, a ray in the camera coordinate system can be computed as:
Y_M^t = κ R^{-1}(P_M, T_M) K^{-1}(Z_M) p̃_{M→P}^t
where Y_M^t is the normalized coordinate of the predicted target value at time t in the master camera's coordinate system, p̃_{M→P}^t denotes the homogeneous image coordinates, κ is a scale factor chosen so that Y_M^t is normalized, and (P_M, T_M, Z_M) are the parameters of the master camera Cam-M.
Step 2: transform the master camera's camera coordinates to the master camera's spherical coordinates. Transform Y_M^t into the spherical coordinate system corresponding to the master camera Cam-M, and compute the longitude α_M^t and latitude β_M^t of the predicted target value at time t under the master camera's spherical coordinate system. The computing formulas are:
Y_{M→r}^t = R_M Y_M^t
α_M^t = atan2(Y_{M→r}^t(3), Y_{M→r}^t(2))
β_M^t = acos(Y_{M→r}^t(1))
where R_M is the transformation matrix from the camera coordinate system of the master camera Cam-M to its spherical coordinate system, Y_{M→r}^t is the corresponding Cartesian coordinate on the sphere, and Y_{M→r}^t(n) denotes the n-th element of the vector.
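Steps 1 and 2 can be expressed together as a short routine; K and R_pt below are the intrinsic and pan-tilt rotation matrices at the master camera's current parameters, and the function name image_point_to_sphere is an illustrative assumption:

    import numpy as np

    def image_point_to_sphere(p_img, K, R_pt, R_M):
        # Step 1: back-project the predicted image point into a ray
        #   Y_M = R(P_M, T_M)^-1 K(Z_M)^-1 p~;
        # Step 2: map it to the master camera's spherical coordinate system and
        # read off longitude alpha and latitude beta.
        p_h = np.array([p_img[0], p_img[1], 1.0])      # homogeneous image coordinates
        Y = np.linalg.inv(R_pt) @ np.linalg.inv(K) @ p_h
        Y = Y / np.linalg.norm(Y)                      # normalisation absorbs the scale factor kappa
        Yr = R_M @ Y                                   # Cartesian point on the sphere
        alpha = np.arctan2(Yr[2], Yr[1])               # longitude: atan2(Y(3), Y(2))
        beta = np.arccos(np.clip(Yr[0], -1.0, 1.0))    # latitude:  acos(Y(1))
        return alpha, beta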
Step 3: map the spherical coordinates of the master camera Cam-M to the spherical coordinate system of the slave camera Cam-S. The purpose of this step is to estimate the longitude α_S^t and latitude β_S^t of the corresponding point under the slave camera Cam-S's spherical coordinate system from the longitude and latitude (α_M^t, β_M^t) under the master camera Cam-M's spherical coordinate system. Because the spherical coordinate systems established in the second step make the longitudes of corresponding points under the two cameras' spherical coordinate systems consistent, α_S^t = α_M^t; to estimate β_S^t, first suppose the target depth at time t is known and denote it D_t. The depth is defined as the distance from the target P at time t to the baseline O_M O_S; the perpendicular meets the baseline at a point O_t, and b is the baseline length of the two cameras, as shown in Fig. 3.
Let x_M^t and x_S^t denote the distances from O_M and O_S to O_t along the baseline; then the following geometric relations hold:
tan β_M^t = D_t / x_M^t
tan(π − β_S^t) = D_t / x_S^t
b = x_M^t + x_S^t
from which we obtain:
β_S^t = atan( −D_t · tan β_M^t / (b · tan β_M^t − D_t) )
Because the target depth D_t in the scene is unknown, the latitude of the target under the slave camera Cam-S's spherical coordinate system cannot be estimated exactly in the present embodiment; since computing the target depth D_t and computing the target's latitude under the slave camera's spherical coordinate system are essentially equivalent, this can be regarded as a chicken-and-egg problem. Considering that in large-scene surveillance the baseline length b of the two cameras is very small relative to the scene depth, a scene depth range [D_min, D_max] can be given and the corresponding latitudes β_{S→max}^t and β_{S→min}^t estimated, as shown in Fig. 4.
Because the scene depth is much larger than the distance between the two cameras, the difference between β_{S→max}^t and β_{S→min}^t is very small; considering in addition the real-time requirement of master-slave tracking, a linear weighting method is adopted to estimate the slave camera latitude β_S^t. The computing formulas are:
β_{S→max}^t = atan( −D_min · tan β_M^t / (b · tan β_M^t − D_min) )
β_{S→min}^t = atan( −D_max · tan β_M^t / (b · tan β_M^t − D_max) )
β_S^t = λ · β_{S→max}^t + γ · β_{S→min}^t
where λ and γ are weight coefficients satisfying λ + γ = 1; a sketch follows.
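A minimal sketch of this latitude estimate, assuming equal weights λ = γ = 0.5 (the patent does not fix the weights), is:

    import numpy as np

    def slave_latitude(beta_M, b, D_min, D_max, lam=0.5):
        # Estimate the slave-camera latitude from the master-camera latitude using the
        # assumed scene depth bounds [D_min, D_max] and the baseline length b;
        # lam is the weight lambda, and gamma = 1 - lam.
        def beta_from_depth(D):
            return np.arctan(-D * np.tan(beta_M) / (b * np.tan(beta_M) - D))
        beta_max = beta_from_depth(D_min)
        beta_min = beta_from_depth(D_max)
        return lam * beta_max + (1.0 - lam) * beta_min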
Step 4: transform the slave camera's spherical coordinates to the slave camera Cam-S's camera coordinates and estimate the slave camera parameters (P_S^t, T_S^t, Z_S^t). First, transform the longitude-latitude coordinates (α_S^t, β_S^t) under the slave camera Cam-S's spherical coordinate system into the slave camera Cam-S's camera coordinate system; this process is the inverse of Step 2 and is given by the formulas below, where Y_{S→r}^t is the Cartesian coordinate of the target on the slave camera's sphere, R_S^{-1} is the inverse of the transformation matrix R_S from the slave camera Cam-S's camera coordinate system to its spherical coordinate system, and Y_S^t is the coordinate of the target in the Cam-S camera coordinate system:
Y_{S→r}^t(1) = cos β_S^t
Y_{S→r}^t(2) = sin β_S^t · cos α_S^t
Y_{S→r}^t(3) = sin β_S^t · sin α_S^t
Y_S^t = R_S^{-1} Y_{S→r}^t
where Y_{S→r}^t is the Cartesian coordinate of the predicted tracking target value at time t under the slave camera's spherical coordinate system.
According to the physical meaning of the pan-tilt parameters in the camera coordinate system, given the observed point Y_S^t in the camera coordinate system of the slave camera Cam-S, the corresponding pan-tilt parameters P_S^t and T_S^t can be calculated such that, when the slave camera Cam-S moves to these parameter values, the observation of the point on the image lies at the principal point (that is, the optical axis passes through the point), as shown in Fig. 5. The zoom parameter Z_S^t can take a value according to the practical application; a preferred value is used in the present embodiment. A sketch of this step follows.
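The sketch below converts the estimated longitude/latitude back to the Cam-S camera coordinate system and derives pan-tilt values that put the point on the optical axis; the closed-form pan/tilt extraction follows the rotation model R(pan, tilt) given in the first step and is an assumption for illustration (the patent only states that the parameters can be calculated):

    import numpy as np

    def slave_pan_tilt(alpha_S, beta_S, R_S):
        # Spherical -> Cartesian on the slave camera's sphere, then into camera coordinates.
        Yr = np.array([np.cos(beta_S),
                       np.sin(beta_S) * np.cos(alpha_S),
                       np.sin(beta_S) * np.sin(alpha_S)])
        Y = np.linalg.inv(R_S) @ Yr                    # coordinates in the Cam-S camera coordinate system
        # With R(pan, tilt) as defined above, the observation of Y lies at the principal
        # point when R(pan, tilt) Y is aligned with (0, 0, 1), which gives:
        pan = np.arctan2(-Y[0], Y[2])
        tilt = np.arctan2(-Y[1], np.hypot(Y[0], Y[2]))
        return pan, tilt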
Finally, during tracking, compute the distance between the tracking target center and the image boundary of the master camera; when the distance is less than a threshold T, tracking terminates and the two cameras return to their preset positions.
Fourth step: output the high-resolution panoramic image. Let I_M^t be the image captured by the master camera Cam-M at time t and I_S^t the image captured by the slave camera Cam-S at time t. This step specifically comprises:
First, compute a background model I_MB^t for the image I_M^t of the master camera Cam-M; the update formula at pixel (x, y) is:
I_MB^t(x, y) = (1 − α) · I_MB^{t−1}(x, y) + α · I_M^t(x, y)
where α is the update coefficient, taken as 0.05 in the present embodiment, and the initial background model is I_MB^0(x, y) = I_M^0(x, y). If |I_M^t(x, y) − I_MB^t(x, y)| > th, the pixel at (x, y) belongs to the foreground region; otherwise it belongs to the background region, where th is the comparison threshold, taken as 20 in the present embodiment.
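The running-average background model and the foreground test can be written directly as array operations; α = 0.05 and th = 20 are the values preferred in the present embodiment, and the helper name is an assumption:

    import numpy as np

    def update_background(I_MB_prev, I_M_t, alpha=0.05, th=20):
        # I_MB^t(x, y) = (1 - alpha) * I_MB^{t-1}(x, y) + alpha * I_M^t(x, y);
        # a pixel is foreground when |I_M^t - I_MB^t| > th.
        I_M_t = I_M_t.astype(np.float32)
        I_MB_t = (1.0 - alpha) * I_MB_prev + alpha * I_M_t
        foreground = np.abs(I_M_t - I_MB_t) > th
        return I_MB_t, foreground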
Then, a method combining feature points and direct pixel matching is used to register the image I_M^t captured by the master camera Cam-M at time t with the image I_S^t captured by the slave camera at time t; because the baseline length is negligible relative to the surveillance scene depth, an affine model can be selected as the registration model.
Subsequently, using the synchronized-frame registration model obtained above and the background and foreground regions of the master camera Cam-M image, compute the corresponding background and foreground regions of the slave camera image. Next, set the size of the high-resolution panoramic image I_H to k times the original image size; if A_MS^t denotes the registration model between the master camera's low-resolution image and the slave camera's high-resolution image, then the registration model A_t between I_H and the slave camera image is:
$$A_t = A_{MS}^t \times \begin{bmatrix} 1/k & 0 & 1-1/k \\ 0 & 1/k & 1-1/k \\ 0 & 0 & 1 \end{bmatrix}$$
Map the background regions of the slave camera's high-resolution images into I_H in sequence, updating overlapping regions with a decay factor of 0.5.
Finally, map the foreground regions of the slave camera's high-resolution images into I_H in sequence, generating the high-resolution panoramic image at each moment. Fig. 6 and Fig. 7 show master-slave tracking and high-resolution panorama results in preferred embodiments of the invention. In Fig. 6a the left camera serves as the master camera and the right camera as the slave camera; the master camera's parameters are pan = -63.52, tilt = -11.32, zoom = 11.50, and in Fig. 6b four foreground frames from the slave camera are mapped into one background to obtain the high-resolution panorama. In Fig. 7a the right camera serves as the master camera and the left camera as the slave camera; the master camera's parameters are pan = -90.66, tilt = -13.50, zoom = 10.00, and in Fig. 7b four foreground frames from the slave camera are mapped into one background to obtain the high-resolution panorama.
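A simplified sketch of the panorama composition is given below. The warping direction, the use of cv2.warpAffine and the helper names are illustrative assumptions; A_MS_t is the 3x3 affine registration model in homogeneous form, I_H is assumed to be a float array, and fg_mask_S is the foreground mask of the slave image:

    import cv2
    import numpy as np

    def compose_panorama(I_H, I_S_t, A_MS_t, fg_mask_S, k):
        # Scale the master-to-slave registration model to the k-times panorama I_H,
        # warp the slave high-resolution image into I_H, update the background with a
        # 0.5 decay factor in overlapping regions, then paste the warped foreground.
        S = np.array([[1.0 / k, 0.0, 1.0 - 1.0 / k],
                      [0.0, 1.0 / k, 1.0 - 1.0 / k],
                      [0.0, 0.0, 1.0]])
        A_t = A_MS_t @ S                                        # registration model between I_H and the slave image
        M = np.linalg.inv(A_t)[:2, :]                           # forward map: slave pixels -> panorama pixels
        h, w = I_H.shape[:2]
        warped = cv2.warpAffine(I_S_t, M, (w, h))
        warped_fg = cv2.warpAffine(fg_mask_S.astype(np.uint8), M, (w, h)).astype(bool)
        covered = cv2.warpAffine(np.ones(I_S_t.shape[:2], np.uint8), M, (w, h)).astype(bool)
        bg = covered & ~warped_fg
        I_H[bg] = 0.5 * I_H[bg] + 0.5 * warped[bg]              # background regions, 0.5 decay factor
        I_H[warped_fg] = warped[warped_fg]                      # foreground regions mapped last
        return I_H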
Compared with master-slave tracking systems combining a static camera and an active camera, the present invention enlarges the surveillance range; compared with systems combining multiple static cameras and an active camera, it reduces hardware overhead; and compared with systems combining an omnidirectional camera and an active camera, it is more conducive to information fusion. The present invention designs a master-slave control method based on a spherical coordinate model, which conveniently realizes master-slave tracking under arbitrary pan-tilt-zoom parameters of the two cameras and achieves multi-scale visual attention to the target; using the registration model between videos of different resolutions during master-slave tracking, a layered processing method produces a high-resolution panorama output, and the processed results can be used for applications such as crime forensics, preservation of surveillance records, and behavior analysis of moving targets.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials or characteristics described may be combined in an appropriate manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, amendments, replacements and modifications can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the present invention is defined by the claims and their equivalents.

Claims (7)

1. A master-slave tracking method of a binocular PTZ vision system, characterized by comprising the following steps:
S1: calibrating the two PTZ cameras separately and obtaining the camera coordinate system models;
S2: establishing spherical coordinate systems such that corresponding points on the two PTZ camera coordinate systems have the same longitude on the spheres while the latitude difference expresses the viewing-angle difference, and computing a transformation matrix for each of the two PTZ camera coordinate systems;
S3: selecting a master camera and a slave camera, and selecting a tracking target on the observed image of the master camera; with the master camera at arbitrary pan-tilt-zoom parameters (P_M, T_M, Z_M), estimating the slave camera's pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t), t = 1, ..., n, from the tracking target's motion trajectory on the observed image I_M; wherein, in step S3, either of the two cameras serves as the master camera and the other as the slave camera, the two cameras patrol the surveillance scene separately according to the surveillance task, and once either camera finds a suspicious target it tracks the target as the master camera using a static-camera tracking method, the system enters master-slave mode, and the other camera is controlled as the slave camera to actively track the suspicious target;
S4: outputting a high-resolution panoramic image.
2. The master-slave tracking method of a binocular PTZ vision system according to claim 1, characterized in that step S1 comprises calibrating the extrinsic matrix R and the intrinsic matrix K of each of the two PTZ cameras, wherein the extrinsic matrix R is related only to the pan and tilt parameters, and the rotation axes of the pan and tilt parameters intersect perpendicularly at the camera center.
3. The master-slave tracking method of a binocular PTZ vision system according to claim 2, characterized in that the step of calibrating the intrinsic matrix K comprises:
S31: setting the pixel aspect ratio a_r and the skew S of the camera;
S32: using the zoom center in place of the principal point (u_0, v_0): keeping the pan and tilt parameters of the camera constant, obtaining an image sequence by changing the zoom parameter, extracting SIFT feature points from every image, matching the feature points of adjacent frames, and finally obtaining the zoom center by least squares;
S33: constructing the model of the focal length f as a function of the zoom parameter.
4. The master-slave tracking method of a binocular PTZ vision system according to claim 1, characterized in that step S2 comprises the following steps:
S41: manually collecting N image pairs of the two cameras in the same surveillance scene, the fields of view of each image pair being roughly consistent;
S42: extracting and matching SIFT feature points for each image pair, estimating the fundamental matrix F_j (j = 1, ..., N) of each pair, and recording the matched feature point pairs, where j indexes the j-th of the N image pairs, k indexes the k-th matched feature point pair of the j-th image pair, and N is a positive integer;
S43: estimating the epipole pair {E_1, E_2} of the spherical coordinate systems;
S44: estimating the reference prime meridians of the spherical coordinate systems;
S45: obtaining the transformation matrices R_1 and R_2.
5. The master-slave tracking method of a binocular PTZ vision system according to claim 1, characterized in that step S3 comprises the following steps:
S71: selecting the tracking target on the observed image of the master camera;
S72: estimating the parameter values (P_S^t, T_S^t, Z_S^t) of the slave camera at each moment;
S73: during tracking, computing the distance between the tracking target center and the image boundary, and when the distance is less than a threshold T, terminating tracking, the two cameras returning to their preset positions.
6. The master-slave tracking method of a binocular PTZ vision system according to claim 5, characterized in that step S72 comprises the following steps:
S81: transforming the observed-image coordinates of the master camera to the master camera's camera coordinates;
S82: transforming the master camera's camera coordinates to the master camera's spherical coordinates;
S83: mapping the master camera's spherical coordinates to the slave camera's spherical coordinates;
S84: transforming the slave camera's spherical coordinates to the slave camera's camera coordinates and estimating the slave camera parameters (P_S^t, T_S^t, Z_S^t).
7. The master-slave tracking method of a binocular PTZ vision system according to claim 1, characterized in that step S4 comprises the following steps:
S91: computing a background model I_MB^t for the master camera image I_M^t, the update formula at pixel (x, y) being I_MB^t(x, y) = (1 − α) · I_MB^{t−1}(x, y) + α · I_M^t(x, y), where α is the update coefficient and the initial background model is I_MB^0(x, y) = I_M^0(x, y); if |I_M^t(x, y) − I_MB^t(x, y)| > th, the pixel at (x, y) belongs to the foreground region, otherwise it belongs to the background region, where th is the comparison threshold;
S92: registering the image I_M^t captured by the master camera Cam-M at time t with the image I_S^t captured by the slave camera at time t, using a method combining feature points and direct pixel matching;
S93: computing the background region and foreground region from the synchronized-frame registration model obtained in step S92 and the background and foreground regions of the master camera image obtained in step S91;
S94: setting the size of the high-resolution panoramic image I_H to k times the original image size; if A_MS^t denotes the registration model between the master camera's low-resolution image and the slave camera's high-resolution image, the registration model A_t between I_H and the slave camera image is A_t = A_MS^t × [[1/k, 0, 1−1/k], [0, 1/k, 1−1/k], [0, 0, 1]]; mapping the background regions of the slave camera's high-resolution images into I_H in sequence, updating overlapping regions with a decay factor;
S95: mapping the foreground regions of the slave camera's high-resolution images into I_H in sequence, generating the high-resolution panoramic image at each moment.
CN201210454674.XA 2012-11-13 2012-11-13 Master-slave tracking method for a binocular PTZ vision system and system applying the method Active CN103024350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210454674.XA CN103024350B (en) 2012-11-13 2012-11-13 Master-slave tracking method for a binocular PTZ vision system and system applying the method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210454674.XA CN103024350B (en) 2012-11-13 2012-11-13 Master-slave tracking method for a binocular PTZ vision system and system applying the method

Publications (2)

Publication Number Publication Date
CN103024350A CN103024350A (en) 2013-04-03
CN103024350B true CN103024350B (en) 2015-07-29

Family

ID=47972429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210454674.XA Active CN103024350B (en) 2012-11-13 2012-11-13 Master-slave tracking method for a binocular PTZ vision system and system applying the method

Country Status (1)

Country Link
CN (1) CN103024350B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824278B (en) * 2013-12-10 2016-09-21 清华大学 The scaling method of CCTV camera and system
CN103826071A (en) * 2014-03-11 2014-05-28 深圳市中安视科技有限公司 Three-dimensional camera shooting method for three-dimensional identification and continuous tracking
CN106575439B (en) * 2014-07-24 2019-05-31 国立研究开发法人科学技术振兴机构 Picture position alignment device, picture position alignment methods and recording medium
CN105354813B (en) * 2014-08-18 2018-11-23 杭州海康威视数字技术股份有限公司 Holder is driven to generate the method and device of stitching image
CN105141841B (en) * 2015-08-25 2018-05-08 上海兆芯集成电路有限公司 Picture pick-up device and its method
CN105335977B (en) * 2015-10-28 2018-05-25 苏州科达科技股份有限公司 The localization method of camera system and target object
CN106534693B (en) * 2016-11-25 2019-10-25 努比亚技术有限公司 A kind of photo processing method, device and terminal
CN108010089B (en) * 2017-12-22 2021-09-07 PLA Rocket Force University of Engineering High-resolution image acquisition method based on binocular movable camera
CN108377368A (en) * 2018-05-08 2018-08-07 扬州大学 A kind of one master and multiple slaves formula intelligent video monitoring apparatus and its control method
CN108683849B (en) * 2018-05-15 2021-01-08 维沃移动通信有限公司 Image acquisition method and terminal
CN109460077B (en) * 2018-11-19 2022-05-17 深圳博为教育科技有限公司 Automatic tracking method, automatic tracking equipment and automatic tracking system
CN111599018B (en) * 2019-02-21 2024-05-28 浙江宇视科技有限公司 Target tracking method and system, electronic equipment and storage medium
CN110415278B (en) * 2019-07-30 2020-04-17 PLA Rocket Force University of Engineering Master-slave tracking method of auxiliary binocular PTZ (Pan-Tilt-zoom) visual system of linear moving PTZ (pan-Tilt-zoom) camera
CN110991306B (en) * 2019-11-27 2024-03-08 北京理工大学 Self-adaptive wide-field high-resolution intelligent sensing method and system
CN111131697B (en) * 2019-12-23 2022-01-04 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium
CN111355926B (en) * 2020-01-17 2022-01-11 高新兴科技集团股份有限公司 Linkage method of panoramic camera and PTZ camera, storage medium and equipment
CN111526280A (en) * 2020-03-23 2020-08-11 深圳市大拿科技有限公司 Control method and device of camera device, electronic equipment and storage medium
CN111698467B (en) * 2020-05-08 2022-05-06 北京中广上洋科技股份有限公司 Intelligent tracking method and system based on multiple cameras
CN113487677B (en) * 2021-06-07 2024-04-12 电子科技大学长三角研究院(衢州) Outdoor medium-long distance scene calibration method based on multi-PTZ camera with random distributed configuration
CN113256713B (en) * 2021-06-10 2021-10-15 浙江华睿科技股份有限公司 Pallet position identification method and device, electronic equipment and storage medium
CN113487683B (en) * 2021-07-15 2023-02-10 PLA Rocket Force University of Engineering Target tracking system based on trinocular vision
CN114384568A (en) * 2021-12-29 2022-04-22 达闼机器人有限公司 Position measuring method and device based on mobile camera, processing equipment and medium
CN115713565A (en) * 2022-12-16 2023-02-24 盐城睿算电子科技有限公司 Target positioning method for binocular servo camera

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
US6724421B1 (en) * 1994-11-22 2004-04-20 Sensormatic Electronics Corporation Video surveillance system with pilot and slave cameras
CN101699862A (en) * 2009-11-16 2010-04-28 上海交通大学 High-resolution region-of-interest image acquisition method of PTZ camera
CN101794448A (en) * 2010-04-07 2010-08-04 上海交通大学 Full automatic calibration method of master-slave camera chain
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN102006461A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Joint tracking detection system for cameras
CN102148965A (en) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 Video monitoring system for multi-target tracking close-up shooting


Also Published As

Publication number Publication date
CN103024350A (en) 2013-04-03

Similar Documents

Publication Publication Date Title
CN103024350B (en) Master-slave tracking method for a binocular PTZ vision system and system applying the method
CN111836012B (en) Video fusion and video linkage method based on three-dimensional scene and electronic equipment
Senior et al. Acquiring multi-scale images by pan-tilt-zoom control and automatic multi-camera calibration
CN106204595B (en) A kind of airdrome scene three-dimensional panorama monitoring method based on binocular camera
US7583815B2 (en) Wide-area site-based video surveillance system
CN103325112B (en) Moving target method for quick in dynamic scene
US8368766B2 (en) Video stabilizing method and system using dual-camera system
CN109872483B (en) Intrusion alert photoelectric monitoring system and method
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN110415278B (en) Master-slave tracking method of auxiliary binocular PTZ (Pan-Tilt-zoom) visual system of linear moving PTZ (pan-Tilt-zoom) camera
CN113052876B (en) Video relay tracking method and system based on deep learning
CN103686131A (en) Monitoring apparatus and system using 3d information of images and monitoring method using the same
WO2008054489A2 (en) Wide-area site-based video surveillance system
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
CN114905512B (en) Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN104038737A (en) Double-camera system and method for actively acquiring high-resolution image of interested target
CN105488777A (en) System and method for generating panoramic picture in real time based on moving foreground
CN109636763A (en) A kind of intelligence compound eye monitoring system
CN100496122C (en) Method for tracking principal and subordinate videos by using single video camera
CN113436130B (en) Intelligent sensing system and device for unstructured light field
CN104809720A (en) Small cross view field-based double-camera target associating method
CN107911697A (en) Unmanned plane image motion object detection method based on area-of-interest layering
CN103903269B (en) The description method and system of ball machine monitor video
Sankaranarayanan et al. A fast linear registration framework for multi-camera GIS coordination
Yue et al. An intelligent identification and acquisition system for UAVs based on edge computing using in the transmission line inspection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200113

Address after: Room 318, Building 3, No. 1 Zhongguancun Avenue, Haidian District, Beijing 100080

Patentee after: BEIJING HORIZON ROBOTICS TECHNOLOGY RESEARCH AND DEVELOPMENT CO., LTD.

Address before: P.O. Box 100084-82, Haidian District, Beijing 100084

Patentee before: Tsinghua University

TR01 Transfer of patent right