CN108111818A - Moving target active perception method and apparatus based on multiple-camera collaboration - Google Patents
- Publication number
- CN108111818A CN108111818A CN201711425735.9A CN201711425735A CN108111818A CN 108111818 A CN108111818 A CN 108111818A CN 201711425735 A CN201711425735 A CN 201711425735A CN 108111818 A CN108111818 A CN 108111818A
- Authority
- CN
- China
- Prior art keywords
- target
- camera
- picture
- video camera
- main camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 230000008447 perception Effects 0.000 title claims abstract description 41
- 238000013507 mapping Methods 0.000 claims abstract description 39
- 238000012795 verification Methods 0.000 claims abstract description 9
- 230000006870 function Effects 0.000 claims description 27
- 230000009466 transformation Effects 0.000 claims description 21
- 238000001514 detection method Methods 0.000 claims description 14
- 238000012790 confirmation Methods 0.000 claims description 9
- 238000011156 evaluation Methods 0.000 claims description 9
- 238000000605 extraction Methods 0.000 claims description 7
- 230000001133 acceleration Effects 0.000 claims description 6
- 239000011159 matrix material Substances 0.000 claims description 6
- 230000003044 adaptive effect Effects 0.000 claims description 5
- 230000000007 visual effect Effects 0.000 claims description 5
- 238000012552 review Methods 0.000 claims description 3
- 238000010845 search algorithm Methods 0.000 claims description 2
- 238000012512 characterization method Methods 0.000 claims 1
- 238000012544 monitoring process Methods 0.000 abstract description 17
- 239000000284 extract Substances 0.000 abstract description 6
- 238000003384 imaging method Methods 0.000 description 10
- 238000012546 transfer Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000009931 harmful effect Effects 0.000 description 2
- 238000012806 monitoring device Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000012911 target assessment Methods 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000007493 shaping process Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The present invention provides a moving target active perception method and apparatus based on multiple-camera collaboration. The method includes: calibrating the main camera picture against each slave camera picture and establishing position mapping relations; monitoring moving targets in the main camera picture in real time to obtain a candidate target set; selecting a candidate target according to target importance and, according to the position mapping relations, selecting a slave camera to track and photograph it; calculating the lens azimuth and zoom magnification, adjusting the slave camera to aim at the candidate target region, and obtaining a high-quality image of the candidate target; and analyzing the candidate target's class from the high-quality image, confirming targets that belong to the predetermined set of types as targets of interest, and adding them to the target-of-interest set. Using the position correspondence between the main and slave cameras, the invention directs slave cameras to obtain high-quality images of candidate targets found in the main camera picture, extracts target image features, confirms the target class, and thereby realizes active confirmation of targets.
Description
Technical field
The present invention relates to an imaging method and apparatus for a multiple-camera surveillance system, and more particularly to a target active perception method and apparatus based on multiple-camera collaboration, belonging to the field of video surveillance.
Background technology
Video surveillance equipment is now widely used in production and living environments. One vital task of video surveillance is to discover and record the targets in a scene and, further, to discover and record key information about target identity, providing help for subsequent identification of those targets. Faces, license plates, vehicle inspection marks and the like captured by a camera can all be used to determine target identity; they are the parts of the image that carry uniquely identifying information about the target. The more such uniquely identifying information a camera captures, the more it helps with identifying the target.
Existing video surveillance equipment collects video images of the monitored scene and analyzes them to recognize and discover the targets in the scene. However, because application scenes differ and target positions and postures vary widely, some targets are far from the camera or seen from the side, so their image quality is poor, and image-feature-based recognition performs poorly on them. Moreover, existing monitoring devices use a passive imaging strategy: the equipment shoots the monitored scene from a fixed position and cannot actively adjust camera position and imaging parameters to improve imaging quality. Passive imaging equipment cannot actively obtain high-quality target images or the uniquely identifying information of targets, so targets cannot be effectively identified, leading to false alarms and missed detections in applications. An active perception device with active imaging capability is needed to solve the problem of active target confirmation.
Bayonet (checkpoint) camera systems avoid the harmful effect of target posture on imaging by restricting the region and posture of targets in a limited scene, thereby solving the acquisition of uniquely identifying target information. Such systems set up cameras at entrances and similar regions, aided by equipment such as flash lamps and infrared lamps, and can obtain higher-quality images. The target images they shoot are high-resolution and of good quality, key target information is easy to extract, and target recognition accuracy is high. Constrained by scene requirements, however, such systems can only be deployed in a few locations, such as toll checkpoints and building entrances, so their applicability is limited.
Through camera collaboration, the present invention directs a slave camera to track and snapshot each target to be confirmed that is found in the main camera, and confirms the target class from the captured high-quality target image, meeting the demand for active target confirmation while solving the limited applicability of bayonet cameras.
The present invention devises a moving target active perception method based on multiple-camera collaboration. In this method, moving regions are detected in the main camera picture to obtain a candidate target set; using the inter-camera correspondence, slave cameras are directed to obtain a high-quality image of each candidate target; a classifier analyzes the high-quality target image, extracts target image features, and determines the target class; and according to the classification result, targets of the predetermined types are confirmed as targets of interest.
Summary of the invention
The problem solved by the present invention is: for a candidate target detected by the main camera, direct a nearby slave camera to obtain a high-quality image of the candidate target, extract target image features, analyze the target class, and confirm whether the candidate target is a target of interest (interested object).
The cameras used in the present invention fall into two classes: panoramic cameras fixed relative to the monitored scene, and PTZ cameras with pan, tilt, and zoom functions.
The invention discloses a moving target active perception apparatus comprising one fixed panoramic camera, several PTZ cameras, and the moving target active perception device. The panoramic camera is the main camera, used to obtain panoramic surveillance video; the PTZ cameras are slave cameras, used to track and photograph targets and obtain high-quality target images. The moving target active perception device extracts targets to be confirmed from the main camera picture, directs slave cameras to shoot high-quality target images, analyzes the target class, and confirms targets to be confirmed that belong to the predetermined types as targets of interest.
The invention discloses a moving target active perception imaging method based on multiple-camera collaboration, characterized by the following steps:
(1) Using feature extraction and feature matching between the main camera picture and each slave camera picture, perform automatic calibration between the main camera and each slave camera and establish position mapping relations.
(2) According to the detection threshold setting, detect the moving regions in the main camera's field of view in real time to obtain the candidate target set.
(3) Select the candidate target with the highest importance in the candidate target set according to the importance evaluation function, and according to the position mapping relations choose a slave camera to track and shoot it.
(4) According to the candidate target position and the position mapping relation between the main camera and the slave camera, calculate the slave camera's lens azimuth and zoom magnification, adjust the slave camera to aim at the candidate target region, and obtain a high-quality image of the candidate target.
(5) Extract features from the high-quality image of the target and analyze them to identify the target class; according to the classification result, confirm targets belonging to the predetermined types as targets of interest and add them to the target-of-interest set; targets not belonging to the predetermined types are confirmed as non-interest targets and are not added to the target-of-interest set.
(6) Check whether all candidate targets have been confirmed; if so, exit, otherwise return to step (3).
In the moving target active perception method based on multiple-camera collaboration as described above, step (1) has the following flow:
1.1 Arbitrarily choose a slave camera that has not yet been calibrated.
1.2 Manually adjust the slave camera's focal length to its minimum value and adjust the slave camera's lens direction until the slave camera and the main camera have the maximum overlapping view.
1.3 Extract Speeded-Up Robust Features (SURF) from the main camera picture and the slave camera picture respectively.
1.4 Match the SURF feature points using the k-Nearest Neighbor (KNN) algorithm with brute-force search to obtain the matching result GoodMatches.
1.5 From the matching result GoodMatches, calculate the affine matrix between the main camera picture and the slave camera picture by least squares, establish the position mapping relation, and complete main-slave camera calibration.
1.6 Check whether all slave cameras have been calibrated; if not, return to step 1.1, otherwise exit.
In the moving target active perception method based on multiple-camera collaboration as described above, the feature matching step of step (1) uses the KNN algorithm with brute-force search to match the SURF feature points of the main camera picture and the slave camera picture. For each SURF feature point in the slave camera picture, search the SURF feature point set of the main camera picture for the 3 feature points with the smallest Euclidean distance using the k-nearest-neighbor (KNN) algorithm, and record the results in a set Matches. Calculate the Euclidean distances of all SURF point pairs in Matches and let d denote the minimum distance; take all pairs in Matches whose distance is less than min(2d, minDist) to form the set GoodMatches, which is the set of matched feature-point pairs. Here minDist is a preset threshold that can be adjusted to the actual conditions, but the number of point pairs in GoodMatches should be no fewer than 15.
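As an illustration of the matching-and-filtering rule above, the following Python sketch (an assumption of this presentation, not code from the patent) applies brute-force KNN search and the min(2d, minDist) filter to arbitrary descriptor vectors standing in for SURF descriptors:

```python
import numpy as np

def good_matches(slave_desc, main_desc, min_dist=1000.0, k=3):
    """For each slave-picture descriptor, find its k nearest main-picture
    descriptors by brute-force Euclidean search (the set Matches), then
    keep only the pairs whose distance is below min(2*d, min_dist),
    where d is the smallest distance seen (the set GoodMatches)."""
    matches = []  # list of (slave_idx, main_idx, distance)
    for i, s in enumerate(slave_desc):
        dists = np.linalg.norm(main_desc - s, axis=1)  # brute-force search
        for j in np.argsort(dists)[:k]:                # KNN with k = 3
            matches.append((i, int(j), float(dists[j])))
    d = min(m[2] for m in matches)                     # minimum distance
    threshold = min(2 * d, min_dist)
    return [m for m in matches if m[2] < threshold]
```

With real SURF descriptors (64- or 128-dimensional) the same filter applies unchanged; only the descriptor source differs.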
In the moving target active perception method based on multiple-camera collaboration as described above, in step (1) the position mapping relation between the main camera and a slave camera comprises two parts: the correspondence between main camera picture coordinates and the slave camera, and the coordinate transformation between the main camera picture and the slave camera picture.
The correspondence between main camera picture coordinates and a slave camera is represented by the convex hull enclosing the matched feature points in the main camera picture: from the matched point pairs in GoodMatches, compute the convex hull enclosing all feature points in the main camera picture; in subsequent steps, candidate targets falling inside this convex hull are assigned to this slave camera.
The coordinate transformation between the main camera picture and a slave camera picture is represented by an affine transformation: according to the image-coordinate correspondence of the point pairs in the set GoodMatches, the affine transformation from the main camera picture to the slave camera picture is calculated by least squares.
In the moving target active perception method based on multiple-camera collaboration as described above, in step (2) the candidate targets in the main camera picture are detected by the frame-difference method and tracked by the continuously adaptive mean shift (CamShift) algorithm, and each real-time detection result of a candidate target has the following form:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
where:
ObjectID is the number of the candidate target,
Time is the appearance time of the candidate target,
and PosX_Left, PosY_Left, PosX_Right, PosY_Right are the time series of the coordinates of the upper-left and lower-right corners of the bounding box.
In the moving target active perception method based on multiple-camera collaboration as described above, in step (3) the importance of a target is described by the formula
E = E_leave + α × E_wait
where E_leave is an evaluation function describing the time until the target leaves the picture: the shorter the remaining time, the larger its value. E_wait is an evaluation function describing the target's waiting time in the target queue: the longer the target has gone uncaptured, the larger its value. α is a user-defined parameter; the larger α is, the more weight is given to the order in which targets entered.
In the moving target active perception method based on multiple-camera collaboration as described above, in step (3) the time for a target to leave the picture is estimated by the function
t_leave = min( (w − x)/vx if vx > 0 else x/|vx|, (h − y)/vy if vy > 0 else y/|vy| )
where w, h are the width and height of the main camera picture, (x, y) is the target's current position, (x0, y0) is the position where the target entered the picture, and (vx, vy) is the target's velocity estimate, computed from its displacement since entry. The time computed by this function represents the time for the target to move, in uniform linear motion along its current direction, from its current position in the main camera picture to the picture boundary.
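The exit-time estimate and the importance score E = E_leave + α × E_wait described above can be sketched as follows; the function names and the exact boundary handling are illustrative assumptions of this sketch, not prescriptions from the patent:

```python
def exit_time(w, h, x, y, x0, y0, t_elapsed):
    """Estimated time for a target at (x, y), moving with the constant
    velocity inferred from its entry point (x0, y0) over t_elapsed,
    to reach the boundary of the picture [0, w] x [0, h]."""
    vx = (x - x0) / t_elapsed
    vy = (y - y0) / t_elapsed
    times = []
    if vx > 0:
        times.append((w - x) / vx)   # leaves through the right edge
    elif vx < 0:
        times.append(x / -vx)        # leaves through the left edge
    if vy > 0:
        times.append((h - y) / vy)   # leaves through the bottom edge
    elif vy < 0:
        times.append(y / -vy)        # leaves through the top edge
    return min(times) if times else float("inf")  # stationary: never leaves

def importance(e_leave, e_wait, alpha):
    """Importance score E = E_leave + alpha * E_wait from step (3):
    E_leave grows as the target nears leaving the picture, E_wait as it
    waits uncaptured in the queue; alpha weights the entry order."""
    return e_leave + alpha * e_wait
```

The candidate closest to leaving the picture, or longest left waiting, thus receives the highest score and is photographed first.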
In the moving target active perception method based on multiple-camera collaboration as described above, in step (4) the coordinate transformation between the main camera picture and the slave camera picture is used to convert the candidate target's coordinates in the main camera picture into relative coordinates on the slave camera's initial-position picture, and according to the fisheye spherical projection rule the relative coordinates on the slave camera's initial-position picture are converted into the angular coordinates of the slave camera's lens direction. The slave camera's focal length is estimated as follows: let the desired maximum of the target's length and width be l* pixels, let the slave camera's focal length when the position mapping matrix was established be f, and let the candidate target's width and height in the slave camera picture be w, h; then the adjusted focal length is
f′ = f × l* / max(w, h)
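Assuming the pinhole relation that image size scales linearly with focal length, the zoom adjustment f′ = f × l*/max(w, h) can be sketched as follows (the function name is illustrative):

```python
def adjusted_focal_length(f, l_star, w, h):
    """Zoom so the target's larger image dimension reaches about l_star
    pixels: under a pinhole model the image size of an object scales
    linearly with focal length, so scale f by l_star / max(w, h)."""
    return f * l_star / max(w, h)
```

For example, a target imaged 100 pixels wide at focal length 12 mm needs roughly a 4x zoom to fill 400 pixels.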
Description of the drawings
Fig. 1 is the flow chart of the moving target active perception method based on multiple-camera collaboration according to one embodiment of the present invention.
Fig. 2 is the system layout of the moving target active perception apparatus based on multiple-camera collaboration according to one embodiment of the present invention.
Fig. 3 is the flow chart of main-slave camera calibration in the moving target active perception method based on multiple-camera collaboration according to one embodiment of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The system configuration of a moving target active perception apparatus based on multiple-camera collaboration according to an embodiment of the invention is shown in Fig. 2. The apparatus realizing the moving target active perception method based on multiple-camera collaboration in the present invention includes: at least two cameras implementing the main-slave camera operation mode, and a moving target active perception device. The main-slave camera operation mode in the present invention specifically means that, for a target found in the main camera, a slave camera is directed to perform active perception and obtain high-quality video images.
The cameras used in the present invention fall into two classes: one is the panoramic camera fixed relative to the monitored scene, and the other is the PTZ camera with pan, tilt, and zoom functions. A specific embodiment of the present invention includes one fixed-view panoramic camera as the main camera and several PTZ cameras as slave cameras; the panoramic camera serving as the main camera has a large field of view whose picture covers at least most of the monitored scene. In one specific embodiment of the present invention, the main camera is a fixed box camera. In another specific embodiment of the present invention, the main camera is a PTZ camera with a fixed lens direction.
According to one embodiment of the present invention, the monitored scene covered by one slave camera's picture overlaps the monitored scene covered by the main camera but does not overlap the scenes covered by the other slave cameras. In another embodiment according to the present invention, the scene covered by one slave camera's picture overlaps the scenes covered by other slave cameras.
The moving target active perception device according to the present invention collects the video images of the main camera and of the several slave cameras and processes the collected video images.
The moving target active perception device according to an embodiment of the invention is arranged on a personal computer (PC), an embedded processing box, or a board.
According to one embodiment of the present invention, the hardware carrying the moving target active perception device is integrated into the main camera hardware. According to still another embodiment of the invention, the moving target active perception device is arranged on a computer connected by network to the main camera and the slave cameras.
As shown in Fig. 2, the moving target active perception device according to an embodiment of the invention includes an image acquisition unit, a candidate target detection unit, a target selection unit, a position mapping unit, and a target tracking confirmation unit.
The image acquisition unit collects the images of the main camera and of the slave cameras. It sends the images from the main camera to the candidate target detection unit for detecting candidate targets, and sends the images from the slave cameras to the target tracking confirmation unit for extracting target features and analyzing and confirming the target class.
The candidate target detection unit receives the main camera's images from the image acquisition unit, monitors the moving regions in the images in real time according to the detection threshold setting, obtains the candidate target set, and sends that set to the target selection unit. According to one embodiment of the present invention, the candidate target detection unit extracts the moving regions in the main camera picture with the frame-difference method and tracks candidate targets with the continuously adaptive mean shift (CamShift) algorithm.
The target selection unit receives the candidate target set from the candidate target detection unit, selects from it the candidate target with the highest importance at the current time according to the target importance evaluation function, and sends the selected candidate target to the position mapping unit.
The position mapping unit receives the selected candidate target, selects a slave camera according to the coordinate mapping relations between the main camera and the slave cameras, and sends the candidate target's coordinates in the main camera picture to the selected slave camera. The selected slave camera receives the target coordinates, calculates the lens angle and focus adjustment, aims at the candidate target, shoots it, and transmits the high-quality target image to the target tracking confirmation unit. In addition, during the start-up of the moving target active perception device according to the present invention, the position mapping unit controls the process of automatic calibration between the main camera and the slave cameras and records the position mapping relations.
The target tracking confirmation unit receives the high-quality target image from the selected slave camera, extracts the image features in the high-quality target image, classifies the target by its image features with a classifier, and obtains the target class; targets belonging to the predetermined types are confirmed as targets of interest and added to the target-of-interest set, and targets not belonging to the predetermined types are confirmed as non-interest targets and are not added to the target-of-interest set.
Fig. 1 shows the moving target active perception method based on multiple-camera collaboration according to an embodiment of the invention, which includes 5 steps:
establishing the position mapping relations based on image feature matching;
obtaining the candidate target set based on moving target detection;
selecting the candidate target with the highest importance in the candidate target set according to the importance evaluation function and, according to the position mapping relations between the main camera and the slave cameras, choosing a slave camera to track and confirm it;
according to the main-slave camera position mapping relations, calculating the slave camera's lens orientation and focus adjustment and adjusting the slave camera lens to aim at the candidate target's direction to obtain a high-quality image;
extracting target features from the high-quality target image and analyzing them to identify the target class.
The above 5 steps according to one embodiment of the present invention are explained in turn below.
(1) Establishing the position mapping relations based on image feature matching
As shown in Fig. 3, the position mapping relations based on image feature matching in the method according to the invention are established by feature extraction and feature matching, calibrating the main camera with each slave camera. Calibration in the present invention refers to the process of establishing the coordinate mapping of the same object between the main camera picture and a slave camera picture.
The position mapping relation between the main camera and a slave camera in the present invention comprises two parts: the correspondence between main camera picture coordinates and the slave camera, and the coordinate transformation between the main camera picture and the slave camera picture.
In the present invention the coordinate mapping between the main camera picture and a slave camera picture is described by an affine transformation. Speeded-Up Robust Features (SURF) points are extracted from the main and slave camera pictures, and main-slave camera calibration is performed using the position correspondence of similar feature points in the main camera picture and the slave camera picture, establishing the position mapping relations. According to one embodiment of the present invention, the coordinate mapping between main camera picture coordinates and slave camera target position coordinates is characterized as an affine transformation matrix.
An affine transformation is the composition of a translation and a linear mapping. In image processing, affine transformations describe image translation, rotation, scaling, and reflection (mirroring). An affine transformation M mapping (x, y) to (x', y') can be represented by the following formula:
x' = a1·x + a2·y + tx
y' = a3·x + a4·y + ty        (1)
The coordinate correspondence between the main camera picture and a slave camera picture of the same scene can be described by an affine transformation. Given several known matching point pairs between the main camera picture and the slave camera picture, substitute them into equation (1) and solve for the parameters a1~a4, tx and ty by least squares to obtain the affine transformation between the two images, i.e. the position mapping matrix in the present invention.
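The least-squares solution of equation (1) from matched point pairs can be sketched with NumPy as follows; this is an illustrative reconstruction, not an implementation from the patent:

```python
import numpy as np

def fit_affine(src, dst):
    """Solve x' = a1*x + a2*y + tx, y' = a3*x + a4*y + ty by least
    squares, given N >= 3 matched points src -> dst (each an N x 2
    array). Returns the 2x2 linear part A and the translation t."""
    n = len(src)
    # Stack the 2N linear equations in the 6 unknowns a1..a4, tx, ty.
    M = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    M[0::2, 0:2] = src       # a1*x + a2*y + tx = x'
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src       # a3*x + a4*y + ty = y'
    M[1::2, 5] = 1.0
    b[0::2] = dst[:, 0]
    b[1::2] = dst[:, 1]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t
```

With more than three point pairs the system is overdetermined, and the least-squares fit averages out localization noise in the matched SURF points.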
According to one embodiment of the present invention, the matching point pairs are obtained by extracting speeded-up robust features (SURF) keypoints from the initial-position pictures of the main camera and of the slave camera and matching them. Matching uses brute-force search for similar SURF keypoints between the main-camera picture and the slave-camera picture. First, SURF keypoints are extracted from both pictures. For each keypoint of the slave-camera picture, the K-nearest-neighbour (KNN) algorithm searches the main-camera keypoint set for the 3 keypoints closest in Euclidean distance, and the results are recorded in the set Matches. The Euclidean distances of all keypoint pairs in Matches are then computed; denoting the smallest distance by d, all pairs in Matches whose distance is below min(2d, minDist) form the set GoodMatches, which is the output set of matched feature point pairs. Here minDist is a preset threshold that can be adjusted to the actual conditions. According to one embodiment of the present invention, the threshold minDist can be set to 1000.
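The distance-based filtering of Matches into GoodMatches can be sketched as below. This minimal NumPy version operates on match distances only; in practice the distances would come from brute-force KNN matching of SURF descriptors (e.g. via OpenCV), and the helper name and default threshold are illustrative.

```python
import numpy as np

def filter_matches(distances, min_dist_thresh=1000.0):
    """Keep matches whose distance is below min(2*d, minDist),
    where d is the smallest distance among all candidate matches.

    distances: Euclidean distances of candidate point pairs
               (e.g. the best KNN match per slave-camera keypoint).
    Returns a boolean mask selecting the GoodMatches.
    """
    distances = np.asarray(distances, dtype=float)
    d = distances.min()
    cutoff = min(2.0 * d, min_dist_thresh)
    return distances < cutoff
```

The min(2d, minDist) cutoff adapts to scene content: 2d scales with the best available match, while minDist caps the cutoff when even the best match is poor.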
In a specific embodiment of the present invention, the position mapping unit selects the slave camera for tracking and shooting a target to be confirmed according to the position correspondence. The position correspondence is computed from the matched feature point pairs in GoodMatches. The position mapping unit computes the convex hull that encloses the main-camera-picture feature points of the set GoodMatches; in subsequent steps, candidate targets falling inside this convex hull are assigned to the corresponding slave camera for tracking and shooting.
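The convex-hull bookkeeping might look like the following plain-Python sketch: hull construction via Andrew's monotone chain and a point-in-convex-polygon test. Both helpers are our own illustration, not the patent's code.

```python
def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def in_hull(p, hull):
    """True if p lies inside or on the CCW convex hull polygon."""
    n = len(hull)
    return all(_cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))
```

A candidate target whose centre passes `in_hull` for a given slave camera's hull would be dispatched to that camera; targets outside every hull cannot be mapped and are skipped.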
(2) Candidate target extraction and the candidate target set
Moving targets are the objects of most concern in video surveillance; candidate target extraction therefore focuses on the moving targets in the scene. Moving regions in the scene are called potential target regions and may contain passing candidate targets. In the present invention, a target obtained by moving-region detection is called a candidate target. All candidate targets form the candidate target set, which records, for each candidate target, its time of entry into the picture and the time series of its bounding-box coordinates.
In the present invention, the candidate target detection unit obtains candidate targets on the main-camera picture by frame differencing, tracks them with the continuously adaptive mean shift (CamShift) algorithm, and records each candidate target's position sequence in the candidate target set.
According to one embodiment of the present invention, a candidate target set record includes, but is not limited to:
[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];
where ObjectID is the candidate target's number, Time is the candidate target's time of appearance, and PosX_Left, PosY_Left, PosX_Right, PosY_Right are the coordinates of the top-left and bottom-right corners of the bounding box.
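As a toy illustration of the moving-region detection step: a real system would run OpenCV frame differencing plus CamShift tracking on video, whereas this sketch only thresholds the difference of two grayscale frames and returns one bounding box in the (PosX_Left, PosY_Left, PosX_Right, PosY_Right) convention used above.

```python
import numpy as np

def frame_diff_boxes(prev, curr, thresh=25):
    """Toy frame-difference detector: threshold |curr - prev| and
    return the bounding box of the changed region, or None if no
    pixel changed by more than `thresh`."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    # (PosX_Left, PosY_Left, PosX_Right, PosY_Right)
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

Each detection would then be appended, together with a timestamp, to that candidate's record in the candidate target set.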
(3) Candidate target selection based on the target importance evaluation function
Target importance is evaluated comprehensively from the candidate target's position, direction of motion, speed, and the time it has gone unperceived since entering the monitored scene. According to one embodiment of the present invention, the evaluation principle is: the faster a target moves, the closer it is to the edge of the monitored picture while moving towards that edge, and the longer it has gone unperceived since entering the scene, the higher its importance. The target selection unit ranks targets by importance and selects the target of highest importance for perception.
According to one embodiment of the present invention, the importance evaluation function has the form:
E = Eleave + α × Ewait
where Eleave is an evaluation function describing the time until the target leaves the picture: the shorter that time, the larger its value; Ewait is an evaluation function describing the target's waiting time in the target queue: the longer the target has gone uncaptured, the larger its value; and α is a user-defined parameter, with larger α placing more emphasis on the targets' order of entry.
According to one embodiment of the present invention, the time until the target leaves the picture is estimated by a function of the following quantities: w and h, the width and height of the main-camera image; (x, y), the target's current position; [x0, y0], the target's position when it entered the picture; and the estimate of the target's velocity. The time computed by this function is the time the target, extrapolated as uniform straight-line motion along its current direction, takes to reach the border of the monitored picture.
For the selected candidate target of highest importance, the position mapping unit selects the slave camera according to the position mapping relations and sends the target coordinate sequence recorded in the candidate target set to that slave camera.
(4) Computing slave-camera control parameters from the position mapping relations between master and slave cameras
According to one embodiment of the present invention, coordinates are converted between cameras using the position mapping matrix M generated in the initialisation step. The centre point (x, y) of a candidate target in the main-camera picture is transformed by the master-slave position mapping matrix M into a coordinate relative to the centre of the slave camera's initial picture.
This relative coordinate (x', y') is a two-dimensional pixel coordinate in the slave-camera picture and cannot be used directly to adjust the camera; it must be converted into an azimuth coordinate of the slave camera. Following the fisheye spherical projection rule, the slave camera converts the relative coordinate on its initial-position picture into an angular coordinate of its lens direction.
According to one embodiment of the present invention, given the target's relative coordinate (x, y) from the centre of the slave camera's initial picture, the slave-camera picture's width and height (w, h), and the camera's field angle, the camera's pan and tilt adjustment amounts are computed from these quantities.
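Under the equidistant ("fisheye spherical") projection assumption, angle is approximately linear in pixel offset, so the pan/tilt adjustment might be sketched as below. The exact formula is not reproduced in this excerpt; this linear mapping is our assumption.

```python
def pixel_offset_to_angles(x, y, w, h, fov_h, fov_v):
    """Convert an offset (x, y) in pixels from the slave camera's
    initial picture centre into (pan, tilt) adjustments in degrees,
    assuming an equidistant (fisheye spherical) projection in which
    angle is linear in pixel distance from the centre.

    w, h: picture width and height; fov_h, fov_v: field angles."""
    pan = (x / w) * fov_h
    tilt = (y / h) * fov_v
    return pan, tilt
```

For a perspective (rectilinear) lens the mapping would instead involve an arctangent; the linear form is specific to the equidistant projection model named in the text.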
According to one embodiment of the present invention, the conversion of bounding-box (Bounding Box) size between the main camera and a slave camera is estimated by applying the coordinate transformation to the target's top-left and bottom-right vertices. The size of the bounding box enclosed by the transformed top-left and bottom-right vertices is the target's expected size.
According to one embodiment of the present invention, the change of target size caused by adjusting the camera's focal length (field of view) is computed via the inverse-proportion relation with focal length. Methods and/or devices realised according to embodiments of the present invention can thereby capture target images of fixed size during operation: given a required capture-picture size, the slave camera's focal-length value can be estimated. If the larger of the target's length and width is required to be l* pixels, the slave camera's focal length when the position mapping relations were established is f, and the candidate target's expected size in the slave-camera picture is (w, h), then the adjusted focal length is f × l* / max(w, h).
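The zoom rule can be written directly: image size scales linearly with focal length under a pinhole model, so to make the larger bounding-box side fill l* pixels the calibration focal length f is scaled by l*/max(w, h). The function name is ours.

```python
def zoom_for_capture(f, w, h, l_star):
    """Focal length needed so that a target whose bounding box is
    (w, h) pixels at focal length f fills l_star pixels along its
    larger side (pinhole assumption: size is linear in focal length)."""
    return f * l_star / max(w, h)
```

For example, a target spanning 100 pixels at the calibration focal length needs twice that focal length to span 200 pixels.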
The slave camera adjusts its direction and focal length according to the camera direction and focus adjustment amounts computed above, and after aligning with the target performs continuous tracking and shooting for a period of time, obtaining high-quality target images.
(5) Extracting target features from the high-quality target images and analysing them to identify the target class
According to one embodiment of the present invention, the target tracking and confirmation unit receives the high-quality target images shot by the slave camera, extracts target features, analyses the target class with a classifier, and updates the candidate target's class according to the classification result. According to one embodiment of the present invention, targets whose type belongs to the predetermined set of types are confirmed as targets of interest (interested objects) and put into the set of targets of interest; targets not belonging to the predetermined set of types are confirmed as non-interest targets and are not put into the set of targets of interest.
According to one embodiment of the present invention, the extracted features are those capable of identifying the target type, chiefly features such as face, trunk and limbs, or a motor vehicle's body shape, wheels and licence-plate region. Such distinctive features are used to confirm the class of the candidate target. The classifier yields the target's classification result by analysing the target features in the picture.
According to one embodiment of the present invention, each high-quality image frame shot by the slave camera is classified, the per-frame classification results are aggregated, and the most probable class is chosen as the target's classification result. Since the camera's rotation speed is limited, the first few video frames may be severely blurred or may fail to capture the target, which harms the classification result. According to one embodiment of the present invention, low-quality images can be discarded during target classification to avoid harming the classification result.
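The per-frame aggregation with low-quality frames discarded can be sketched as follows; the quality scores and threshold are illustrative assumptions, and majority voting stands in for "choosing the most probable class".

```python
from collections import Counter

def vote_class(frame_labels, qualities, q_min=0.5):
    """Aggregate per-frame classifier outputs: drop frames whose
    quality score is below q_min, then return the most frequent
    label among the remaining frames (None if none survive)."""
    kept = [c for c, q in zip(frame_labels, qualities) if q >= q_min]
    if not kept:
        return None
    return Counter(kept).most_common(1)[0][0]
```

This makes the final class robust to a few blurred frames captured while the camera was still rotating.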
What is disclosed above is only specific embodiments of the present invention. Without departing from the scope claimed by the present invention, those skilled in the art can make various corresponding changes and modifications according to the basic technical concept provided by the present invention.
Claims (13)
- 1. A moving-target active perception method based on multi-camera collaboration, characterized by comprising:
A) according to the main-camera picture and the slave-camera pictures, performing automatic calibration between the main camera and the slave cameras by means of feature extraction and feature matching, and establishing position mapping relations;
B) according to a set detection threshold, detecting in real time the multiple moving regions in the main camera's field of view to obtain the set of candidate targets;
C) according to an importance evaluation function, selecting the candidate target of highest importance in the candidate target set, and selecting the corresponding slave camera for tracking and shooting according to the position mapping relations;
D) according to the candidate target's position and the position mapping relations between the main camera and the slave camera, computing the slave camera's lens azimuth angle and zoom magnification, adjusting the slave camera to aim at the candidate target region, and obtaining a high-quality image of the candidate target;
E) extracting features of the target's high-quality image and analysing them to identify the target class; according to the classification result, confirming targets belonging to a predetermined set of types as targets of interest and putting them into the set of targets of interest, and confirming targets not belonging to the predetermined set of types as non-interest targets, which are not put into the set of targets of interest.
- 2. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that said step A) comprises:
A1) arbitrarily selecting a slave camera that has not been calibrated;
A2) adjusting the slave camera's focal length to its minimum and adjusting the slave camera's lens direction until the overlap between the slave camera's and the main camera's fields of view is maximised;
A3) extracting the speeded-up robust features of the main-camera picture and of the slave-camera picture respectively;
A4) matching the speeded-up robust feature points using the K-nearest-neighbour (k-Nearest Neighbor) algorithm and brute-force search to obtain the matching result GoodMatches;
A5) computing the affine matrix between the master and slave camera pictures from the matching result GoodMatches by least squares, completing the master-slave camera calibration;
A6) judging whether all slave cameras have been registered; if not, returning to step A1), otherwise exiting;
and in that said moving-target active perception method based on multi-camera collaboration further comprises:
F) confirming whether the confirmation of all candidate targets is complete; if so, exiting, otherwise returning to step C).
- 3. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that:
the feature matching operation in said step A) uses the K-nearest-neighbour algorithm and brute-force search to match the speeded-up robust feature points in the main-camera picture and in the slave-camera picture;
for each speeded-up robust feature point of the slave-camera picture, the K-nearest-neighbour algorithm searches the main camera's speeded-up robust feature point set for the 3 feature points closest in Euclidean distance, and the results are recorded in the set Matches;
the Euclidean distances of all speeded-up robust feature point pairs in the set Matches are computed; denoting the smallest distance by d, all point pairs in the set Matches whose distance is below min(2d, minDist) form the set GoodMatches, which is the set of matched feature point pairs; minDist is a preset threshold that can be adjusted to the actual conditions, while ensuring that the number of point pairs in the set GoodMatches is no fewer than 15.
- 4. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that:
in said step A), the position mapping relations between the main camera and a slave camera comprise two parts: the correspondence between main-camera picture coordinates and the slave camera, and the coordinate transformation relation between the main-camera picture and the slave-camera picture;
the correspondence between main-camera picture coordinates and the slave camera is represented by the convex hull enclosing the matched feature points in the main-camera picture: from the matched feature point pairs in the set GoodMatches, the convex hull enclosing all the feature points in the main-camera picture is computed, and in step C) candidate targets falling inside this convex hull are assigned to that slave camera;
the coordinate transformation relation between the main-camera picture and the slave-camera picture is represented by an affine transformation: from the image coordinate correspondence of the point pairs in the set GoodMatches, the affine transformation from the main-camera picture to the slave-camera picture is computed by least squares.
- 5. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that:
in said step B), candidate targets in the main-camera picture are detected by frame differencing;
candidate targets in the main-camera picture are tracked by the continuously adaptive mean shift algorithm;
and the result of real-time candidate target detection has the following form:
[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];
where ObjectID is the candidate target's number, Time is the candidate target's time of appearance, and PosX_Left, PosY_Left, PosX_Right, PosY_Right are the time series of the coordinates of the top-left and bottom-right corners of the bounding box.
- 6. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that in said step C):
the importance of a target is characterized by the following formula:
E = Eleave + α × Ewait
where Eleave is an evaluation function describing the time until the target leaves the picture: the shorter the time until the target leaves the picture, the larger this function's value; Ewait is an evaluation function describing the target's waiting time in the target queue: the longer the target has gone uncaptured, the larger this function's value; α is a user-defined parameter, with larger α placing more emphasis on the targets' order of entry;
the time until the target leaves the picture is characterized by a function of: w, h, the width and height of the main-camera image; (x, y), the target's current position; [x0, y0], the target's position when it entered the picture; and the estimate of the target's velocity; the time characterized by this function is the time the target, moving in uniform straight-line motion along its current direction in the main-camera picture, takes to reach the border of the picture.
- 7. The moving-target active perception method based on multi-camera collaboration according to claim 1, characterized in that in said step D), the slave camera computes the angular coordinate of its lens direction and its focal length using the position mapping relations between the main camera and the slave camera generated in step A);
the slave-camera lens direction is computed as follows: the candidate target's coordinates in the main-camera picture are converted by the coordinate mapping relation into a relative coordinate on the slave camera's initial-position picture, and this relative coordinate is then converted, following the fisheye spherical projection rule, into the angular coordinate of the slave-camera lens direction;
the slave-camera focal length is computed as follows: if the larger of the target's length and width is required to be l* pixels, the slave camera's focal length when the position mapping relations were established is f, and the candidate target's width and height in the slave-camera picture are w and h, then the adjusted focal length is derived accordingly.
- 8. A target active perception device, characterized by comprising:
an image acquisition unit for obtaining the video images of the main camera and of the slave cameras;
a candidate target detection unit for extracting candidate targets from the main camera's video image to form the candidate target set;
a target selection unit for selecting the candidate target of highest importance in the candidate target set at the current time;
a position mapping unit for establishing the position mapping relations between the main camera and the slave cameras, selecting the slave camera to shoot the selected candidate target, and sending the candidate target's position information to the slave camera;
and a target tracking and confirmation unit for analysing the target class from the high-quality target images shot by the slave camera, confirming targets belonging to a predetermined set of types as targets of interest, and putting them into the set of targets of interest.
- 9. The target active perception device according to claim 8, characterized in that the position mapping unit matches the speeded-up robust feature points in the main-camera picture and in the slave-camera picture using the K-nearest-neighbour algorithm and brute-force search;
for each speeded-up robust feature point of the slave-camera picture, the K-nearest-neighbour algorithm searches the main camera's speeded-up robust feature point set for the 3 feature points closest in Euclidean distance, and the results are recorded in the set Matches;
the Euclidean distances of all speeded-up robust feature point pairs in the set Matches are computed; denoting the smallest distance by d, all point pairs in the set Matches whose distance is below min(2d, minDist) form the set GoodMatches, which is the set of matched feature point pairs; minDist is a preset threshold that can be adjusted to the actual conditions, while ensuring that the number of point pairs in the set GoodMatches is no fewer than 15.
- 10. The target active perception device according to claim 8, characterized in that the position mapping relations between the main camera and a slave camera in the position mapping unit comprise two parts: the correspondence between main-camera picture coordinates and the slave camera, and the coordinate transformation relation between the main-camera picture and the slave-camera picture;
the correspondence between main-camera picture coordinates and the slave camera is represented by the convex hull enclosing the matched feature points in the main-camera picture: from the matched feature point pairs in GoodMatches, the convex hull enclosing all the feature points in the main-camera picture is computed, and candidate targets falling inside this convex hull are assigned to that slave camera;
the coordinate transformation relation between the main-camera picture and the slave-camera picture is represented by an affine transformation: from the image coordinate correspondence of the point pairs in the set GoodMatches, the affine transformation from the main-camera picture to the slave-camera picture is computed by least squares.
- 11. The target active perception device according to claim 8, characterized in that the candidate target detection unit detects candidate targets in the main-camera picture by frame differencing;
tracks candidate targets in the main-camera picture by the continuously adaptive mean shift algorithm;
and the result of real-time candidate target detection has the following form:
[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];
where ObjectID is the candidate target's number, Time is the candidate target's time of appearance, and PosX_Left, PosY_Left, PosX_Right, PosY_Right are the time series of the coordinates of the top-left and bottom-right corners of the bounding box.
- 12. The target active perception device according to claim 8, characterized in that the target selection unit characterizes the importance of a target by the following formula:
E = Eleave + α × Ewait
where Eleave is an evaluation function describing the time until the target leaves the picture: the shorter the time until the target leaves the picture, the larger this function's value; Ewait is an evaluation function describing the target's waiting time in the target queue: the longer the target has gone uncaptured, the larger this function's value; α is a user-defined parameter, with larger α placing more emphasis on the targets' order of entry;
the time until the target leaves the picture is characterized by a function of: w, h, the width and height of the main-camera image; (x, y), the target's current position; [x0, y0], the target's position when it entered the picture; and the estimate of the target's velocity; the time characterized by this function is the time the target, moving in uniform straight-line motion along its current direction in the main-camera picture, takes to reach the border of the picture.
- 13. The target active perception device according to claim 8 or 11, characterized in that, in the target tracking unit, the slave camera computes the angular coordinate of its lens direction and its focal length using the position mapping relations between the main camera and the slave camera in the position mapping unit;
the slave-camera lens direction is computed as follows: the candidate target's coordinates in the main-camera picture are converted by the coordinate mapping relation into a relative coordinate on the slave camera's initial-position picture, and this relative coordinate is then converted, following the fisheye spherical projection rule, into the angular coordinate of the slave-camera lens direction;
the slave-camera focal length is computed as follows: if the larger of the target's length and width is required to be l* pixels, the slave camera's focal length when the position mapping relations were established is f, and the candidate target's width and height in the slave-camera picture are w and h, then the adjusted focal length is derived accordingly.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711425735.9A CN108111818B (en) | 2017-12-25 | 2017-12-25 | Moving target actively perceive method and apparatus based on multiple-camera collaboration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108111818A true CN108111818A (en) | 2018-06-01 |
CN108111818B CN108111818B (en) | 2019-05-03 |
Family
ID=62213191
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108111818B (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109377518A (en) * | 2018-09-29 | 2019-02-22 | 佳都新太科技股份有限公司 | Target tracking method, device, target tracking equipment and storage medium |
CN109522846A (en) * | 2018-11-19 | 2019-03-26 | 深圳博为教育科技有限公司 | One kind is stood up monitoring method, device, server and monitoring system of standing up |
CN110059641A (en) * | 2019-04-23 | 2019-07-26 | 重庆工商大学 | Depth birds recognizer based on more preset points |
CN110176039A (en) * | 2019-04-23 | 2019-08-27 | 苏宁易购集团股份有限公司 | A kind of video camera adjusting process and system for recognition of face |
CN110177256A (en) * | 2019-06-17 | 2019-08-27 | 北京影谱科技股份有限公司 | A kind of tracking video data acquisition methods and device |
CN110191324A (en) * | 2019-06-28 | 2019-08-30 | Oppo广东移动通信有限公司 | Image processing method, device, server and storage medium |
CN110430395A (en) * | 2019-07-19 | 2019-11-08 | 苏州维众数据技术有限公司 | Video data AI processing system and processing method |
CN110493569A (en) * | 2019-08-12 | 2019-11-22 | 苏州佳世达光电有限公司 | Monitoring objective shoots method for tracing and system |
WO2020029921A1 (en) * | 2018-08-07 | 2020-02-13 | 华为技术有限公司 | Monitoring method and device |
CN110881117A (en) * | 2018-09-06 | 2020-03-13 | 杭州海康威视数字技术股份有限公司 | Inter-picture area mapping method and device and multi-camera observation system |
CN111131697A (en) * | 2019-12-23 | 2020-05-08 | 北京中广上洋科技股份有限公司 | Multi-camera intelligent tracking shooting method, system, equipment and storage medium |
CN111179305A (en) * | 2018-11-13 | 2020-05-19 | 晶睿通讯股份有限公司 | Object position estimation method and object position estimation device |
CN111354011A (en) * | 2020-05-25 | 2020-06-30 | 江苏华丽智能科技股份有限公司 | Multi-moving-target information capturing and tracking system and method |
CN111541851A (en) * | 2020-05-12 | 2020-08-14 | 南京甄视智能科技有限公司 | Face recognition equipment accurate installation method based on unmanned aerial vehicle hovering survey |
CN111612812A (en) * | 2019-02-22 | 2020-09-01 | 富士通株式会社 | Target detection method, target detection device and electronic equipment |
CN111684458A (en) * | 2019-05-31 | 2020-09-18 | 深圳市大疆创新科技有限公司 | Target detection method, target detection device and unmanned aerial vehicle |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
CN111815722A (en) * | 2020-06-10 | 2020-10-23 | 广州市保伦电子有限公司 | Double-scene matting method and system |
CN111866392A (en) * | 2020-07-31 | 2020-10-30 | Oppo广东移动通信有限公司 | Shooting prompting method and device, storage medium and electronic equipment |
CN111918023A (en) * | 2020-06-29 | 2020-11-10 | 北京大学 | Monitoring target tracking method and device |
CN112215048A (en) * | 2019-07-12 | 2021-01-12 | ***通信有限公司研究院 | 3D target detection method and device and computer readable storage medium |
CN112308924A (en) * | 2019-07-29 | 2021-02-02 | 浙江宇视科技有限公司 | Method, device and equipment for calibrating camera in augmented reality and storage medium |
CN112492261A (en) * | 2019-09-12 | 2021-03-12 | 华为技术有限公司 | Tracking shooting method and device and monitoring system |
CN112767452A (en) * | 2021-01-07 | 2021-05-07 | 北京航空航天大学 | Active sensing method and system for camera |
CN112954188A (en) * | 2019-12-10 | 2021-06-11 | 李思成 | Human eye perception imitating active target snapshot method and device |
CN113179371A (en) * | 2021-04-21 | 2021-07-27 | 新疆爱华盈通信息技术有限公司 | Shooting method, device and snapshot system |
CN113190013A (en) * | 2018-08-31 | 2021-07-30 | 创新先进技术有限公司 | Method and device for controlling terminal movement |
CN113518174A (en) * | 2020-04-10 | 2021-10-19 | 华为技术有限公司 | Shooting method, device and system |
CN113792715A (en) * | 2021-11-16 | 2021-12-14 | 山东金钟科技集团股份有限公司 | Granary pest monitoring and early warning method, device, equipment and storage medium |
CN114155433A (en) * | 2021-11-30 | 2022-03-08 | 北京新兴华安智慧科技有限公司 | Illegal land detection method and device, electronic equipment and storage medium |
CN114938426A (en) * | 2022-04-28 | 2022-08-23 | 湖南工商大学 | Method and apparatus for creating a multi-device media presentation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110285810A1 (en) * | 2010-05-21 | 2011-11-24 | Qualcomm Incorporated | Visual Tracking Using Panoramas on Mobile Devices |
CN102291569A (en) * | 2011-07-27 | 2011-12-21 | 上海交通大学 | Double-camera automatic coordination multi-target eagle eye observation system and observation method thereof |
CN103198487A (en) * | 2013-04-15 | 2013-07-10 | 厦门博聪信息技术有限公司 | Automatic calibration method for video monitoring system |
CN103607576A (en) * | 2013-11-28 | 2014-02-26 | 北京航空航天大学深圳研究院 | Traffic video monitoring system oriented to cross camera tracking relay |
CN104125433A (en) * | 2014-07-30 | 2014-10-29 | 西安冉科信息技术有限公司 | Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure |
CN105208327A (en) * | 2015-08-31 | 2015-12-30 | 深圳市佳信捷技术股份有限公司 | Master/slave camera intelligent monitoring method and device |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020029921A1 (en) * | 2018-08-07 | 2020-02-13 | 华为技术有限公司 | Monitoring method and device |
US11790504B2 (en) | 2018-08-07 | 2023-10-17 | Huawei Technologies Co., Ltd. | Monitoring method and apparatus |
CN113190013B (en) * | 2018-08-31 | 2023-06-27 | 创新先进技术有限公司 | Method and device for controlling movement of terminal |
CN113190013A (en) * | 2018-08-31 | 2021-07-30 | 创新先进技术有限公司 | Method and device for controlling terminal movement |
CN110881117A (en) * | 2018-09-06 | 2020-03-13 | 杭州海康威视数字技术股份有限公司 | Inter-picture area mapping method and device and multi-camera observation system |
CN109377518A (en) * | 2018-09-29 | 2019-02-22 | 佳都新太科技股份有限公司 | Target tracking method, device, target tracking equipment and storage medium |
CN111179305A (en) * | 2018-11-13 | 2020-05-19 | 晶睿通讯股份有限公司 | Object position estimation method and object position estimation device |
CN111179305B (en) * | 2018-11-13 | 2023-11-14 | 晶睿通讯股份有限公司 | Object position estimation method and object position estimation device thereof |
CN109522846A (en) * | 2018-11-19 | 2019-03-26 | 深圳博为教育科技有限公司 | One kind is stood up monitoring method, device, server and monitoring system of standing up |
CN109522846B (en) * | 2018-11-19 | 2020-08-14 | 深圳博为教育科技有限公司 | Standing monitoring method, device, server and standing monitoring system |
CN111612812A (en) * | 2019-02-22 | 2020-09-01 | 富士通株式会社 | Target detection method, target detection device and electronic equipment |
CN111612812B (en) * | 2019-02-22 | 2023-11-03 | 富士通株式会社 | Target object detection method, detection device and electronic equipment |
CN110059641B (en) * | 2019-04-23 | 2023-02-03 | 重庆工商大学 | Depth bird recognition algorithm based on multiple preset points |
CN110176039A (en) * | 2019-04-23 | 2019-08-27 | 苏宁易购集团股份有限公司 | A kind of video camera adjusting process and system for recognition of face |
CN110059641A (en) * | 2019-04-23 | 2019-07-26 | 重庆工商大学 | Depth birds recognizer based on more preset points |
CN111684458B (en) * | 2019-05-31 | 2024-03-12 | 深圳市大疆创新科技有限公司 | Target detection method, target detection device and unmanned aerial vehicle |
CN111684458A (en) * | 2019-05-31 | 2020-09-18 | 深圳市大疆创新科技有限公司 | Target detection method, target detection device and unmanned aerial vehicle |
CN110177256B (en) * | 2019-06-17 | 2021-12-14 | 北京影谱科技股份有限公司 | Tracking video data acquisition method and device |
CN110177256A (en) * | 2019-06-17 | 2019-08-27 | 北京影谱科技股份有限公司 | Tracking video data acquisition method and device |
CN110191324A (en) * | 2019-06-28 | 2019-08-30 | Oppo广东移动通信有限公司 | Image processing method, device, server and storage medium |
CN112215048B (en) * | 2019-07-12 | 2024-03-22 | ***通信有限公司研究院 | 3D target detection method, device and computer readable storage medium |
CN112215048A (en) * | 2019-07-12 | 2021-01-12 | ***通信有限公司研究院 | 3D target detection method and device and computer readable storage medium |
CN110430395A (en) * | 2019-07-19 | 2019-11-08 | 苏州维众数据技术有限公司 | Video data AI processing system and processing method |
CN112308924A (en) * | 2019-07-29 | 2021-02-02 | 浙江宇视科技有限公司 | Method, device and equipment for calibrating camera in augmented reality and storage medium |
CN112308924B (en) * | 2019-07-29 | 2024-02-13 | 浙江宇视科技有限公司 | Method, device, equipment and storage medium for calibrating camera in augmented reality |
CN110493569A (en) * | 2019-08-12 | 2019-11-22 | 苏州佳世达光电有限公司 | Monitoring target shooting and tracking method and system |
CN112492261A (en) * | 2019-09-12 | 2021-03-12 | 华为技术有限公司 | Tracking shooting method and device and monitoring system |
CN112954188A (en) * | 2019-12-10 | 2021-06-11 | 李思成 | Active target snapshot method and device imitating human-eye perception |
CN111131697A (en) * | 2019-12-23 | 2020-05-08 | 北京中广上洋科技股份有限公司 | Multi-camera intelligent tracking shooting method, system, equipment and storage medium |
CN111131697B (en) * | 2019-12-23 | 2022-01-04 | 北京中广上洋科技股份有限公司 | Multi-camera intelligent tracking shooting method, system, equipment and storage medium |
CN113518174A (en) * | 2020-04-10 | 2021-10-19 | 华为技术有限公司 | Shooting method, device and system |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
CN111541851A (en) * | 2020-05-12 | 2020-08-14 | 南京甄视智能科技有限公司 | Face recognition equipment accurate installation method based on unmanned aerial vehicle hovering survey |
CN111354011A (en) * | 2020-05-25 | 2020-06-30 | 江苏华丽智能科技股份有限公司 | Multi-moving-target information capturing and tracking system and method |
CN111815722A (en) * | 2020-06-10 | 2020-10-23 | 广州市保伦电子有限公司 | Double-scene matting method and system |
CN111918023A (en) * | 2020-06-29 | 2020-11-10 | 北京大学 | Monitoring target tracking method and device |
CN111918023B (en) * | 2020-06-29 | 2021-10-22 | 北京大学 | Monitoring target tracking method and device |
CN111866392A (en) * | 2020-07-31 | 2020-10-30 | Oppo广东移动通信有限公司 | Shooting prompting method and device, storage medium and electronic equipment |
CN111866392B (en) * | 2020-07-31 | 2021-10-08 | Oppo广东移动通信有限公司 | Shooting prompting method and device, storage medium and electronic equipment |
CN112767452B (en) * | 2021-01-07 | 2022-08-05 | 北京航空航天大学 | Active sensing method and system for camera |
CN112767452A (en) * | 2021-01-07 | 2021-05-07 | 北京航空航天大学 | Active sensing method and system for camera |
CN113179371A (en) * | 2021-04-21 | 2021-07-27 | 新疆爱华盈通信息技术有限公司 | Shooting method, device and snapshot system |
CN113792715A (en) * | 2021-11-16 | 2021-12-14 | 山东金钟科技集团股份有限公司 | Granary pest monitoring and early warning method, device, equipment and storage medium |
CN114155433A (en) * | 2021-11-30 | 2022-03-08 | 北京新兴华安智慧科技有限公司 | Illegal land use detection method and device, electronic equipment and storage medium |
CN114938426A (en) * | 2022-04-28 | 2022-08-23 | 湖南工商大学 | Method and apparatus for creating a multi-device media presentation |
CN114938426B (en) * | 2022-04-28 | 2023-04-07 | 湖南工商大学 | Method and apparatus for creating a multi-device media presentation |
Also Published As
Publication number | Publication date |
---|---|
CN108111818B (en) | 2019-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108111818B (en) | Moving target active perception method and apparatus based on multiple-camera collaboration | |
CN110738142B (en) | Method, system and storage medium for adaptively improving face image acquisition | |
CN109887040B (en) | Moving target active sensing method and system for video monitoring | |
CN103761514B (en) | System and method for realizing face recognition based on a wide-angle box camera and multiple dome cameras | |
KR101172747B1 (en) | Camera tracking monitoring system and method using thermal image coordinates | |
CN108419014A (en) | Method for capturing faces using a panoramic camera linked with multiple snapshot cameras | |
KR101788225B1 (en) | Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing | |
WO2014080613A1 (en) | Color correction device, method, and program | |
Huang et al. | Efficient image stitching of continuous image sequence with image and seam selections | |
WO2014043353A2 (en) | Methods, devices and systems for detecting objects in a video | |
KR20160062880A (en) | Road traffic information management system using camera and radar | |
CN103686131A (en) | Monitoring apparatus and system using 3d information of images and monitoring method using the same | |
KR20140013407A (en) | Apparatus and method for tracking object | |
CN110633648B (en) | Face recognition method and system in natural walking state | |
CN112307912A (en) | Method and system for determining personnel track based on camera | |
JP2018120283A (en) | Information processing device, information processing method and program | |
CN107547865A (en) | Cross-region human body video object tracking intelligent control method | |
Premachandra et al. | A hybrid camera system for high-resolutionization of target objects in omnidirectional Images | |
Abd Manap et al. | Smart surveillance system based on stereo matching algorithms with IP and PTZ cameras | |
Rybok et al. | Multi-view based estimation of human upper-body orientation | |
CN111465937B (en) | Face detection and recognition method employing light field camera system | |
CN109543496B (en) | Image acquisition method and device, electronic equipment and system | |
KR20150019230A (en) | Method and apparatus for tracking object using multiple camera | |
Wang et al. | Real-time distributed tracking with non-overlapping cameras | |
CN113033350B (en) | Pedestrian re-identification method based on overlook image, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20210427
Address after: No. 18 Chuanghui Street, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province
Patentee after: BUAA HANGZHOU INNOVATION INSTITUTE
Address before: No. 37 Xueyuan Road, Haidian District, 100191
Patentee before: BEIHANG University