CN102163335A - Multi-camera network structure parameter self-calibration method without inter-camera feature point matching - Google Patents

Multi-camera network structure parameter self-calibration method without inter-camera feature point matching

Info

Publication number
CN102163335A
Authority
CN
China
Prior art keywords
target
camera
matrix
respect
image
Prior art date
Legal status
Granted
Application number
CN2011101300508A
Other languages
Chinese (zh)
Other versions
CN102163335B (en)
Inventor
许东
孙茜
吴祖亮
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110130050 priority Critical patent/CN102163335B/en
Publication of CN102163335A publication Critical patent/CN102163335A/en
Application granted granted Critical
Publication of CN102163335B publication Critical patent/CN102163335B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-camera network structure parameter self-calibration method that requires no feature point matching between cameras. The method comprises two basic steps: (1) for each single camera, detecting and tracking feature points on a target in its image sequence, and obtaining the motion parameters of that camera with respect to the target by estimating the essential matrix corresponding to the camera's motion relative to the target; and (2) establishing a relation equation among the motion parameters of different cameras with respect to the same target, and obtaining the multi-camera structure parameters by observing several motions of the target. During self-calibration of the multi-camera structure parameters, only the target feature points within each single camera's image sequence are tracked; no feature point matching between different cameras is required. This avoids the difficulty of matching feature points across cameras whose images differ greatly, and allows the multi-camera network structure parameters to be estimated quickly and accurately.

Description

Multi-camera network structure parameter self-calibration method without inter-camera feature point matching
Technical field
The present invention relates to the fields of computer vision and wireless multi-camera sensor networks, and in particular to self-calibration methods for the structure parameters of wireless multi-camera sensor networks.
Background technology
Wireless multi-camera sensor networks are currently a multidisciplinary research hotspot worldwide. Such a network deploys a large number of low-cost, low-power, long-running camera nodes in an observation area to cooperatively sense, collect and process information about the objects observed within the network's coverage. Owing to this powerful information acquisition and processing capability, wireless multi-camera sensor networks have wide applications in military, medical monitoring, intelligent building and other fields. Unlike traditional sensors, a camera does not perceive the surrounding scene in all directions; its perception is directional and regional, so the structure parameters of the multi-camera network determine how the network covers and perceives the observation area. Calibrating these parameters is the key to efficient, low-power operation of a wireless multi-camera sensor network.
Calibration of multi-camera structure parameters is a basic problem in computer vision. Traditional multi-camera calibration methods detect the projections of spatial points on the image planes of different cameras and use perspective geometry to calibrate the multi-camera structure parameters. These methods first process the images acquired by the different cameras to detect sets of salient feature points, then use some matching procedure to obtain pairs of points that match across images, and finally use these matched point pairs to calibrate the multi-camera structure parameters.
In a wireless multi-camera sensor network, however, the node cameras are usually inexpensive and their parameters differ considerably; the network may contain cameras of different types, such as infrared, visible-light and low-light cameras; and the positions and orientations of the nodes are deployed at random. These factors make feature point matching between cameras very difficult, so that traditional multi-camera network calibration methods based on feature point matching are hard to apply to the calibration of wireless multi-camera sensor networks.
Summary of the invention
The objective of the present invention is to propose a multi-camera network structure parameter self-calibration method that requires no feature point matching between cameras, overcoming problems such as the difficulty of matching image features between the nodes of a wireless multi-camera sensor network and the limited transmission bandwidth.
To achieve the above objective, the present invention proposes a multi-camera network structure parameter self-calibration method without inter-camera feature point matching, comprising two basic steps: estimation of the motion parameters of a single camera with respect to the target, and estimation of the structure parameters between the cameras.
Step 1: in one embodiment of the invention, the estimation of the motion parameters of a single camera with respect to the target further comprises: subtracting the background image from the current frame and taking the absolute value to obtain an absolute difference image; given a threshold, detecting the region of the absolute difference image that exceeds the threshold, this region being the region where the target is located; detecting feature points in each frame of the image sequence and keeping only those inside the target region of that frame, these being the feature points on the target; selecting an arbitrary image pair from the sequence and, for each feature point on the target in one image of the pair, taking a small neighborhood around it as a matching template; searching the other image of the pair for the matching point on the target and keeping the mutually matching feature point pairs, each pair being the same target feature point before and after the motion; constructing a coefficient matrix from the feature point pairs before and after the target motion, performing a singular value decomposition of this coefficient matrix to obtain two orthogonal matrices, the last column of one of which is the stacked vector of the fundamental matrix corresponding to the camera's motion with respect to the target, rearranging the elements of this stacked vector and constraining its rank to 2 to obtain the corresponding fundamental matrix; computing, from the camera's intrinsic parameter matrix and the fundamental matrix, the essential matrix corresponding to the camera's motion with respect to the target; and performing a singular value decomposition of the essential matrix to obtain two orthogonal matrices, constructing two skew matrices, and computing the motion parameters of the camera with respect to the target from these orthogonal and skew matrices.
Step 2: in one embodiment of the invention, the estimation of the structure parameters between the cameras further comprises: establishing the relation equation R^(-1) t_C2 + (I - R_T) t - t_C1 = 0 between the motion parameters of different cameras with respect to the same target, where R is the rotation matrix between camera C1 and camera C2, t is the translation vector between camera C1 and camera C2, R_T is the rotation matrix of the target, t_C1 is the translation vector of camera C1 with respect to the target, t_C2 is the translation vector of camera C2 with respect to the target, and I is the identity matrix; choosing n image pairs from the image sequence and, using the single-camera motion parameter estimation method above, computing for each camera its n motion parameters with respect to the same target; using the relation equation between the cameras' motion parameters to set up the system of simultaneous equations corresponding to the n motions of the target; and solving for the structure parameters between the different cameras with the Levenberg-Marquardt method.
The multi-camera network structure parameter self-calibration method proposed by the present invention requires no feature point matching between cameras: it only uses the image sequence acquired by each node camera to estimate that camera's motion with respect to the target, and then solves for the structure parameters of the multi-camera network using the relation equation between the motion parameters of the different cameras with respect to the same target. The invention overcomes the limited transmission bandwidth of wireless multi-camera sensor networks and the difficulty of matching image features between nodes, and can be used not only for calibrating the structure parameters of networks of cameras of the same type but also for multi-camera networks containing cameras of different types such as visible-light, infrared and low-light cameras.
Description of drawings
Fig. 1 is a flowchart of the multi-camera structure parameter self-calibration method without inter-camera feature point matching according to an embodiment of the invention;
Fig. 2a is a schematic diagram of a stationary camera and a moving target according to an embodiment of the invention;
Fig. 2b is a schematic diagram of the equivalent case in which the target is stationary and the camera rotates with respect to the target, according to an embodiment of the invention;
Fig. 2c is a schematic diagram of the equivalent case in which the target is stationary and the camera translates with respect to the target, according to an embodiment of the invention;
Fig. 3 shows the effect of target rotation angle error on the camera attitude estimate according to an embodiment of the invention;
Fig. 4 shows the effect of target rotation angle error on the camera position estimate according to an embodiment of the invention;
Fig. 5 shows the effect of target position error on the camera attitude estimate according to an embodiment of the invention;
Fig. 6 shows the effect of target position error on the camera position estimate according to an embodiment of the invention;
Fig. 7 shows the effect of image sequence length on the camera attitude estimate according to an embodiment of the invention;
Fig. 8 shows the effect of image sequence length on the camera position estimate according to an embodiment of the invention.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, where identical or similar reference numbers denote identical or similar meanings throughout. The embodiments described below are exemplary, serve only to explain the invention, and are not to be construed as limiting the invention.
The present invention addresses problems such as the limited transmission bandwidth of wireless multi-camera sensor networks and the difficulty of matching image features between nodes by proposing a multi-camera network structure parameter self-calibration method that requires no feature point matching between cameras.
For a clearer understanding, the invention is briefly described here. The invention comprises two basic steps: step 1, single-camera motion parameter estimation with respect to the target, which estimates the motion parameters of a camera with respect to the target from the image sequence acquired by that single camera; and step 2, estimation of the structure parameters between the cameras, which uses the relation equation between the motion parameters of different cameras with respect to the same target to estimate the multi-camera structure parameters.
Specifically, Fig. 1 is a flowchart of a multi-camera network structure parameter self-calibration method without inter-camera feature point matching according to an embodiment of the invention, comprising the following steps:
Step S101: detect the moving target in the scene.
In one embodiment of the invention, for a single camera the images in the image sequence are numbered I_1, I_2, I_3, ..., and the background image is denoted I_0. After numbering, images are taken from the sequence in order and differenced pixel by pixel against the background image to obtain a difference image; taking image I_1 as an example, its difference image is I_d1(x, y) = I_1(x, y) - I_0(x, y). Taking the absolute value of each pixel of the difference image gives the absolute difference image I_ad1(x, y) = |I_d1(x, y)|. Given a threshold t on the absolute difference image I_ad1, the region whose pixel values exceed the threshold, D_1 = {(x, y) | I_ad1(x, y) > t}, is detected; this region is the target region. The target regions of the other images in the sequence are detected in the same way.
In one embodiment of the invention, the above threshold t is obtained by the following steps. Taking the absolute difference image I_ad1 as an example, let its minimum and maximum pixel values be g_min and g_max. The candidate threshold t is increased one gray level at a time from g_min to g_max. For a given gray level t, compute the probability ω_1 that a pixel has gray value not greater than t and the probability ω_2 that a pixel has gray value greater than t; compute the mean gray value μ_1 and gray-value variance σ_1² of all pixels with gray value not greater than t, and the mean gray value μ_2 and gray-value variance σ_2² of all pixels with gray value greater than t. For each gray level from g_min to g_max, evaluate a separation criterion formed from ω_1, ω_2, μ_1, μ_2, σ_1² and σ_2² (the exact expression is given only as a formula image in the original document). The gray level t at which this criterion is maximal is chosen as the threshold described above.
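A minimal sketch of step S101 follows, assuming the separation criterion is the standard Otsu between-class variance ω_1·ω_2·(μ_1 - μ_2)²; the patent gives its evaluation function only as a formula image, and the function and variable names below are illustrative, not from the patent.

```python
import numpy as np

def detect_target_region(frame, background):
    """Sketch of step S101: background differencing plus automatic threshold selection.

    The separation criterion is assumed to be the Otsu between-class variance;
    the patent gives its evaluation function only as a formula image.
    """
    # Absolute difference image I_ad(x, y) = |I(x, y) - I_0(x, y)|
    abs_diff = np.abs(frame.astype(np.int32) - background.astype(np.int32)).astype(np.uint8)

    # Histogram-based sweep of every gray level between g_min and g_max.
    hist = np.bincount(abs_diff.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)

    best_t, best_score = int(abs_diff.min()), -1.0
    for t in range(int(abs_diff.min()), int(abs_diff.max()) + 1):
        w1, w2 = prob[:t + 1].sum(), prob[t + 1:].sum()
        if w1 == 0.0 or w2 == 0.0:
            continue
        mu1 = (levels[:t + 1] * prob[:t + 1]).sum() / w1
        mu2 = (levels[t + 1:] * prob[t + 1:]).sum() / w2
        score = w1 * w2 * (mu1 - mu2) ** 2   # assumed Otsu-style criterion
        if score > best_score:
            best_t, best_score = t, score

    # Target region D = {(x, y) | I_ad(x, y) > t}
    return abs_diff > best_t, best_t
```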
Step S102: detect and track the feature points on the target.
Step 2.1: detection of the feature points on the target in an image. In one embodiment of the invention, taking image I_1 and its target region D_1 as an example, for each pixel (x, y) compute the gradients in the x and y directions, g_x(x, y) = I(x, y) - I(x-1, y) and g_y(x, y) = I(x, y) - I(x, y-1). For the small n × n neighborhood Γ of each pixel (x, y) (n is usually 3 or 5), compute its covariance matrix
M = [ Σ_Γ g_x²   Σ_Γ g_x g_y ;  Σ_Γ g_x g_y   Σ_Γ g_y² ].
Compute the corner response of each pixel, R = det[M] - k (trace[M])², where det[M] is the determinant of the matrix M, trace[M] is the sum of the elements on the diagonal of M, and k is usually taken between 0.04 and 0.15. Within the target region D_1, find the N largest corner responses (N is usually 100 to 200); the pixels corresponding to these corner responses are the feature points on the target, and the N feature points together form the feature point set on the target, denoted {p_1i}. The feature points on the target in the other images of the sequence are detected in the same way, and the corresponding feature point sets are denoted {p_2j}, {p_3k}, ...
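A minimal sketch of step 2.1 follows the gradient, neighborhood and corner-response definitions above; the box-filter summation and the helper names are implementation choices, not from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def corner_points_in_region(img, region_mask, n=5, k=0.04, N=150):
    """Sketch of step 2.1: corner responses R = det(M) - k*trace(M)^2 inside the target region."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]   # g_x(x, y) = I(x, y) - I(x-1, y)
    gy[1:, :] = img[1:, :] - img[:-1, :]   # g_y(x, y) = I(x, y) - I(x, y-1)

    # Sum the gradient products over each n x n neighborhood (box filter times n^2).
    sxx = uniform_filter(gx * gx, size=n) * n * n
    syy = uniform_filter(gy * gy, size=n) * n * n
    sxy = uniform_filter(gx * gy, size=n) * n * n

    # Corner response for every pixel.
    resp = (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
    resp = np.where(region_mask, resp, -np.inf)   # keep only pixels inside the target region

    idx = np.argsort(resp.ravel())[-N:]           # N largest responses
    ys, xs = np.unravel_index(idx, resp.shape)
    return np.stack([xs, ys], axis=1)             # feature point set as (x, y) rows
```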
Step 2.2: tracking of the feature points on the target. In one embodiment of the invention, two images are selected from the sequence to form an image pair; take images I_1 and I_2 as an example. For any feature point in the feature point set {p_1i} on the target in image I_1 (denote it p_1), take the n × n small neighborhood around p_1 (n is usually 3 or 5) as the matching template, denoted A. In the feature point set {p_2j} on the target in image I_2, find the feature point (denote it p_2) whose n × n small neighborhood, denoted B, has the largest correlation with the matching template of p_1 among all feature points, and whose correlation exceeds a given threshold (the threshold is usually 0.95). The feature point p_1 in image I_1 and the feature point p_2 in image I_2 then form a matching feature point pair, written m = (p_1, p_2). Proceeding in the same way, all feature point pairs {m_i} of the image pair I_1 and I_2 are obtained; this can further be extended to all image pairs in the sequence.
In one embodiment of the invention, the above template correlation is obtained by the following steps. For the n × n small neighborhood A formed around the feature point p_1, compute its mean gray value Ā = (1/n²) Σ A(i, j); for the n × n small neighborhood B formed around the feature point p_2, compute its mean gray value B̄ = (1/n²) Σ B(i, j). The correlation of the neighborhoods A and B formed by the feature point pair (p_1, p_2) is then
c(A, B) = Σ (A(i, j) - Ā)(B(i, j) - B̄) / sqrt( Σ (A(i, j) - Ā)² · Σ (B(i, j) - B̄)² ).
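A minimal sketch of step 2.2, assuming the correlation is the zero-mean normalized cross-correlation reconstructed above; the function names and the border handling are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_feature_points(img1, pts1, img2, pts2, n=5, thresh=0.95):
    """Sketch of step 2.2: match target feature points between an image pair by template correlation."""
    r = n // 2
    pairs = []
    for x1, y1 in pts1:
        template = img1[y1 - r:y1 + r + 1, x1 - r:x1 + r + 1]   # matching template A around p_1
        best_c, best_pt = -1.0, None
        for x2, y2 in pts2:
            cand = img2[y2 - r:y2 + r + 1, x2 - r:x2 + r + 1]   # neighborhood B around a candidate p_2
            if cand.shape != template.shape:
                continue                                        # skip candidates too close to the border
            c = ncc(template, cand)
            if c > best_c:
                best_c, best_pt = c, (x2, y2)
        if best_pt is not None and best_c > thresh:
            pairs.append(((x1, y1), best_pt))                   # matched pair m = (p_1, p_2)
    return pairs
```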
Step S103: estimate the motion parameters of the camera with respect to the target.
Step 3.1: compute the fundamental matrix from the set of feature point pairs between the two images. In one embodiment of the invention, taking images I_1 and I_2 of the sequence as an example, the feature points p_i1 and p_i2 of feature point pair m_i have coordinates (x_i1, y_i1) and (x_i2, y_i2) respectively. From n feature point pairs, construct the coefficient matrix
A = [ x_12·x_11  x_12·y_11  x_12  y_12·x_11  y_12·y_11  y_12  x_11  y_11  1
      ...
      x_n2·x_n1  x_n2·y_n1  x_n2  y_n2·x_n1  y_n2·y_n1  y_n2  x_n1  y_n1  1 ].
Perform a singular value decomposition A = U_A D_A V_A^T, obtaining two orthogonal matrices U_A, V_A and a diagonal matrix D_A. The last column of the orthogonal matrix V_A is a 9-dimensional vector; arranging the elements of this vector three per row gives a matrix F with 3 rows and 3 columns. Perform a singular value decomposition F = U_F D_F V_F^T, obtaining two orthogonal matrices U_F, V_F and a diagonal matrix D_F; set the last diagonal element of D_F to 0 and recompute F = U_F D_F V_F^T with this modified D_F. The resulting rank-2 matrix F is the fundamental matrix described above.
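A minimal numpy sketch of step 3.1 as just described; no coordinate normalization is added beyond what the text states, and the function name is illustrative.

```python
import numpy as np

def fundamental_matrix(pts1, pts2):
    """Sketch of step 3.1: linear estimate of the fundamental matrix from matched points.

    pts1, pts2 are (n, 2) arrays of corresponding coordinates (x_i1, y_i1) and (x_i2, y_i2).
    """
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    # One row per pair: [x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1]
    A = np.stack([x2 * x1, x2 * y1, x2,
                  y2 * x1, y2 * y1, y2,
                  x1, y1, np.ones_like(x1)], axis=1)

    # The stacked F vector is the singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)

    # Constrain the rank to 2 by zeroing the smallest singular value of F.
    u, d, vt = np.linalg.svd(F)
    d[2] = 0.0
    return u @ np.diag(d) @ vt
```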
Step 3.2: compute the essential matrix from the fundamental matrix and the camera intrinsic parameter matrix. In one embodiment of the invention, the intrinsic parameter matrix K of the camera is given as a 3 × 3 upper triangular matrix, and the matrix E = K^T F K is computed from the above fundamental matrix F, where K^T is the transpose of K; the matrix E is the essential matrix described above.
Step 3.3: compute the motion parameters of the camera with respect to the target from the essential matrix. In one embodiment of the invention, a singular value decomposition E = U_E D_E V_E^T of the above essential matrix E is performed, yielding two orthogonal matrices U_E and V_E. Construct the matrix
W = [ 0 -1 0 ; 1 0 0 ; 0 0 1 ]
and compute the matrices R_a = U_E W V_E^T and R_b = U_E W^T V_E^T. The motion parameters of the camera with respect to the target then have four possible solutions,
(R, t) = (R_a, u_3), (R_a, -u_3), (R_b, u_3), (R_b, -u_3),
where u_3 is the 3-dimensional vector formed by the third column of the matrix U_E, R is the rotation matrix of the camera with respect to the target, and t is the translation vector of the camera with respect to the target.
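A sketch of steps 3.2 and 3.3 follows; the patent's two skew matrices are given only as formula images, so the matrix W below is the usual choice from the multiple-view-geometry literature and should be read as an assumption.

```python
import numpy as np

def essential_and_motion_candidates(F, K):
    """Sketch of steps 3.2-3.3: essential matrix E = K^T F K and the four (R, t) candidates."""
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    # Keep proper rotations (determinant +1); flipping the sign when needed is a common fix.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])   # assumed form of the decomposition matrix
    Ra = U @ W @ Vt                    # first rotation candidate
    Rb = U @ W.T @ Vt                  # second rotation candidate
    u3 = U[:, 2]                       # third column of U_E
    return [(Ra, u3), (Ra, -u3), (Rb, u3), (Rb, -u3)]
```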
In one embodiment of the invention, the solution for the motion parameters of the camera with respect to the target is determined as follows: taking images I_1 and I_2 of the sequence as an example, construct the projection matrix P_1 = [I | 0] corresponding to image I_1 and the projection matrix P_2 = [R | t] corresponding to image I_2, where I is the 3 × 3 identity matrix, R is the above rotation matrix of the camera with respect to the target, and t is the above translation vector of the camera with respect to the target. For each feature point pair m, compute the space coordinates (X, Y, Z) of the corresponding feature point and the 3-dimensional vector X′ = R((X, Y, Z)^T - t). The rotation matrix R and translation vector t for which, over all feature point pairs of images I_1 and I_2, the third elements of the vectors X′ are all greater than 1 is selected as the solution for the motion parameters of the camera with respect to the target.
In one embodiment of the invention, the space coordinates of a feature point are computed as follows: taking images I_1 and I_2 of the sequence as an example, for a feature point pair m, the coordinates (x_1, y_1) and (x_2, y_2) of the feature points p_1 and p_2 in the two images satisfy the equation Z((x_2, y_2, 1)^T - R(x_1, y_1, 1)^T) = t, where Z is the Z component of the feature point's space coordinates, R is the above rotation matrix of the camera with respect to the target, and t is the above translation vector of the camera with respect to the target. Computing X = x_1·Z and Y = y_1·Z gives the X and Y components of the feature point's space coordinates.
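A sketch of how the four candidate solutions might be disambiguated using the simplified triangulation equation above; the text asks for the third component of X′ to exceed 1, which the sketch exposes as a threshold while defaulting to the common positive-depth test. Names are illustrative.

```python
import numpy as np

def triangulate_point(p1, p2, R, t):
    """Solve Z * ((x2, y2, 1)^T - R (x1, y1, 1)^T) = t for Z in the least-squares sense."""
    a = np.array([p2[0], p2[1], 1.0]) - R @ np.array([p1[0], p1[1], 1.0])
    Z = float(a @ t) / float(a @ a)
    return np.array([p1[0] * Z, p1[1] * Z, Z])         # (X, Y, Z) = (x1*Z, y1*Z, Z)

def select_motion(candidates, pairs, depth_min=0.0):
    """Pick the (R, t) candidate whose points all pass the depth test in the second view."""
    for R, t in candidates:
        depths = []
        for p1, p2 in pairs:
            Xw = triangulate_point(p1, p2, R, t)
            Xc = R @ (Xw - t)                           # X' = R((X, Y, Z)^T - t), as in the text
            depths.append(Xc[2])
        if all(d > depth_min for d in depths):
            return R, t
    return None                                         # no candidate satisfied the test
```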
Step S104: establish the relation equation between the motion parameters of different cameras with respect to the target.
When a target in the scene rotates by R_T and translates by t_T (as shown in Fig. 2a), a camera's observation of the target can equivalently be regarded as the target remaining stationary while the camera first rotates about the center of the target by R_C (as shown in Fig. 2b) and then translates by t_C (as shown in Fig. 2c). In one embodiment of the invention, two cameras C_1 and C_2 observe the target simultaneously; the position of camera C_2 is obtained from camera C_1 by the rotation R and translation t, and (R, t) are the structure parameters between the cameras described above. Using the method of steps S101 to S103, and taking images I_1 and I_2 of the sequence as an example, the rotation R_C1 and translation t_C1 of camera C_1 with respect to the target, and the rotation R_C2 and translation t_C2 of camera C_2 with respect to the target, can be computed. The motion parameters of cameras C_1 and C_2 with respect to the target satisfy the relation
R^(-1) t_C2 + (I - R_T) t - t_C1 = 0     (1)
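As a consistency check, relation (1) can be verified numerically on synthetic data. The sketch below assumes one particular set of conventions (camera C_2 coordinates obtained as X_C2 = R(X_C1 - t), the target motion expressed in camera C_1's frame, and t_C1, t_C2 derived from those definitions); the patent's own conventions are fixed by Fig. 2 and may differ in sign.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by an angle (Rodrigues formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Structure parameters between the cameras (assumed convention: X_C2 = R (X_C1 - t)).
R = rot([0.0, 1.0, 0.0], 0.3)
t = np.array([1.0, 0.2, -0.5])

# One target motion, expressed in camera C_1's frame.
R_T = rot([1.0, 2.0, 3.0], 0.2)
t_T = np.array([0.1, -0.3, 0.4])

# Equivalent camera-versus-target translations under these conventions.
t_C1 = t_T
t_C2 = R @ ((R_T - np.eye(3)) @ t + t_T)

# Relation (1): R^(-1) t_C2 + (I - R_T) t - t_C1 should vanish.
residual = R.T @ t_C2 + (np.eye(3) - R_T) @ t - t_C1
print(np.allclose(residual, 0.0))   # True under the assumed conventions
```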
Step S105: solve for the structure parameters between the cameras.
Step 5.1: set up the solving equations for the structure parameters between the cameras. In one embodiment of the invention, two cameras C_1 and C_2 observe the target simultaneously. For n motions of the target, the method of steps S101 to S103 is used to compute n groups of motion parameters of cameras C_1 and C_2 with respect to the target, (R_T^(i), t_C1^(i), t_C2^(i)), i = 1, ..., n. Substituting the n groups of motion parameters into equation (1) yields a system of simultaneous equations, which can be written in matrix form as
R^(-1) T_C2 + R_IT t - T_C1 = 0     (2)
In formula (2), (R, t) are the structure parameters between the cameras described above; T_C2 is a matrix of 3 rows and n columns formed by arranging the n translation motion vectors of camera C_2, T_C2 = [t_C2^(1), t_C2^(2), ..., t_C2^(n)]; T_C1 is a matrix of 3 rows and n columns formed by arranging the n translation motion vectors of camera C_1, T_C1 = [t_C1^(1), t_C1^(2), ..., t_C1^(n)]; and R_IT is a matrix of 3n rows and 3 columns generated from the rotation matrices of the target by stacking the blocks I - R_T^(i), i = 1, ..., n, so that, read block by block, equation (2) collects the n instances of equation (1).
Step 5.2: solve for the structure parameters between the cameras. In one embodiment of the invention, two cameras C_1 and C_2 observe the target simultaneously, and for the n motions of the target the Levenberg-Marquardt method is used to solve equation (2) and obtain the structure parameters between the two cameras.
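A minimal sketch of step 5.2, solving the n stacked instances of equation (1) for (R, t) with a Levenberg-Marquardt routine (scipy's least_squares with method='lm'); the Rodrigues-vector parameterization of the rotation is a choice not specified in the patent.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_structure_parameters(R_T_list, t_C1_list, t_C2_list):
    """Sketch of step 5.2: recover the structure parameters (R, t) from n >= 2 target motions.

    R_T_list: n target rotation matrices; t_C1_list, t_C2_list: the n translation
    vectors of cameras C_1 and C_2 with respect to the target.
    """
    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        res = []
        for R_T, t_c1, t_c2 in zip(R_T_list, t_C1_list, t_C2_list):
            # One block of equation (2): R^(-1) t_C2 + (I - R_T) t - t_C1
            res.append(R.T @ t_c2 + (np.eye(3) - R_T) @ t - t_c1)
        return np.concatenate(res)

    x0 = np.zeros(6)                                   # identity rotation, zero translation
    sol = least_squares(residuals, x0, method='lm')    # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

With n ≥ 2 observed motions the residual vector has at least six entries, which is the minimum the 'lm' method requires for the six unknowns.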
Figs. 3 to 8 give error analyses of the estimated multi-camera structure parameters. As can be seen from Figs. 3 to 6, even when the single-camera motion parameters with respect to the target estimated in an embodiment contain errors, the estimation of the multi-camera structure parameters by the present invention remains stable. As can be seen from Figs. 7 and 8, estimating the multi-camera structure parameters from multiple images yields more accurate and stable results, and the parameter estimation error approaches 0 as the image sequence length increases.
The multi-camera structure parameter self-calibration method without inter-camera feature point matching proposed by the present invention overcomes problems such as the limited transmission bandwidth of wireless multi-camera sensor networks and the difficulty of matching image features between nodes. It can be used not only for calibrating the structure parameters of networks of cameras of the same type but also for calibrating the structure parameters of multi-camera networks containing cameras of different types such as visible-light, infrared and low-light cameras.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and are not limiting. Those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention; the scope of the present invention is defined by the claims and their equivalents.

Claims (8)

1. A multi-camera structure parameter self-calibration method without feature matching between cameras, characterized in that it comprises two basic steps: estimation of the motion parameters of a single camera with respect to a target, and estimation of the structure parameters between the cameras.
2. The method of claim 1, characterized in that the estimation of the motion parameters of the single camera with respect to the target further comprises:
detection of the moving target in the scene;
detection and tracking of the feature points on the target;
estimation of the motion parameters of the camera with respect to the target.
3. The method of claim 2, characterized in that the detection of the moving target in the scene further comprises:
subtracting the background image from the current frame image and taking the absolute value to obtain an absolute difference image;
given a threshold, detecting the region of the absolute difference image that exceeds the threshold, said region being the region where the target is located.
4. The method of claim 2, characterized in that the detection and tracking of the feature points on the target further comprises:
detecting the feature points in each frame of the image sequence in turn, and keeping the feature points inside the target region of each frame obtained by the method of claim 3, said feature points being the feature points on the target;
selecting an arbitrary image pair from the image sequence and, for each feature point on the target in one image of the pair, taking a small neighborhood around it as a matching template;
searching the other image of the pair for the matching point on the target and keeping the mutually matching feature point pairs, each said pair being the same feature point on the target before and after the motion.
5. The method of claim 2, characterized in that the estimation of the motion parameters of the camera with respect to the target further comprises:
constructing a coefficient matrix from the feature point pairs before and after the target motion, performing a singular value decomposition of said coefficient matrix to obtain two orthogonal matrices, the last column of one of which is the stacked vector of the fundamental matrix corresponding to the camera's motion with respect to the target, rearranging the elements of said stacked vector and constraining its rank to 2 to obtain the corresponding fundamental matrix;
computing, from the intrinsic parameter matrix of the camera and said fundamental matrix, the essential matrix corresponding to the camera's motion with respect to the target;
performing a singular value decomposition of said essential matrix to obtain two orthogonal matrices, constructing two skew matrices, and computing the motion parameters of the camera with respect to the target from said orthogonal matrices and skew matrices.
6. The method of claim 1, characterized in that the estimation of the structure parameters between the cameras further comprises:
establishing the relation equation between the motion parameters of different cameras with respect to the same target;
solving for the structure parameters between the cameras.
7. The method of claim 6, characterized in that the relation equation between the motion parameters of the different cameras with respect to the target can be written as R^(-1) t_C2 + (I - R_T) t - t_C1 = 0, where R is the rotation matrix between camera C1 and camera C2, t is the translation vector between camera C1 and camera C2, R_T is the rotation matrix of the target, t_C1 is the translation vector of camera C1 with respect to the target, t_C2 is the translation vector of camera C2 with respect to the target, and I is the identity matrix.
8. The method of claim 6, characterized in that solving for the structure parameters between the cameras further comprises:
choosing n image pairs from said image sequence and, using the single-camera motion parameter estimation method of claims 2 to 5, computing for each camera its n motion parameters with respect to the same target;
using the relation equation of claim 7 between the cameras' motion parameters with respect to the target to set up the system of simultaneous equations corresponding to the n motions of the target;
solving the system of simultaneous equations corresponding to the n motions of the target to obtain the structure parameters between the different cameras.
CN 201110130050 2011-05-19 2011-05-19 Multi-camera network structure parameter self-calibration method without inter-camera feature point matching Expired - Fee Related CN102163335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110130050 CN102163335B (en) 2011-05-19 2011-05-19 Multi-camera network structure parameter self-calibration method without inter-camera feature point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110130050 CN102163335B (en) 2011-05-19 2011-05-19 Multi-camera network structure parameter self-calibration method without inter-camera feature point matching

Publications (2)

Publication Number Publication Date
CN102163335A true CN102163335A (en) 2011-08-24
CN102163335B CN102163335B (en) 2013-02-13

Family

ID=44464546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110130050 Expired - Fee Related CN102163335B (en) 2011-05-19 2011-05-19 Multi-camera network structure parameter self-calibration method without inter-camera feature point matching

Country Status (1)

Country Link
CN (1) CN102163335B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366158A (en) * 2013-06-27 2013-10-23 东南大学 Three dimensional structure and color model-based monocular visual road face detection method
CN104966281A (en) * 2015-04-14 2015-10-07 中测新图(北京)遥感技术有限责任公司 IMU/GNSS guiding matching method of multi-view images
CN106204574A (en) * 2016-07-07 2016-12-07 兰州理工大学 Camera pose self-calibrating method based on objective plane motion feature
CN106530341A (en) * 2016-11-01 2017-03-22 成都理工大学 Point registration algorithm capable of keeping local topology invariance
WO2023103377A1 (en) * 2021-12-09 2023-06-15 上海商汤智能科技有限公司 Calibration method and apparatus, electronic device, storage medium, and computer program product

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038671A (en) * 2007-04-25 2007-09-19 上海大学 Tracking method of three-dimensional finger motion locus based on stereo vision

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038671A (en) * 2007-04-25 2007-09-19 上海大学 Tracking method of three-dimensional finger motion locus based on stereo vision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CEM TAYLAN ASLAN ET AL: "Automatic Calibration of Camera Networks based on Local Motion Features", 《WORKSHOP ON MULTI-CAMERA AND MULTI-MODAL SENSOR FUSION ALGORITHMS AND APPLICATIONS》, 31 October 2008 (2008-10-31) *
GILLES SIMON ET AL: "Markerless Tracking using Planar Structures in the Scene", 《IEEE AND ACM INTERNATIONAL SYMPOSIUM ON AUGMENTED REALITY 2000》, 31 December 2000 (2000-12-31) *
TSUHAN CHEN ET AL: "Accurate Self-calibration of Two Cameras by Observations of a Moving Person on a Ground Plane", 《IEEE CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE 2007》, 7 September 2007 (2007-09-07) *
LUO Gang et al.: "Target tracking using corner point matching", 《中国光学与应用光学》 (Chinese Optics and Applied Optics), vol. 2, no. 6, 31 December 2009 (2009-12-31) *
DENG Xiaolian et al.: "A control point matching algorithm for remote sensing images based on feature corners and dynamic templates", 《计算机工程》 (Computer Engineering), vol. 32, no. 8, 30 April 2006 (2006-04-30) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366158A (en) * 2013-06-27 2013-10-23 东南大学 Three dimensional structure and color model-based monocular visual road face detection method
CN104966281A (en) * 2015-04-14 2015-10-07 中测新图(北京)遥感技术有限责任公司 IMU/GNSS guiding matching method of multi-view images
CN104966281B (en) * 2015-04-14 2018-02-02 中测新图(北京)遥感技术有限责任公司 The IMU/GNSS guiding matching process of multi-view images
CN106204574A (en) * 2016-07-07 2016-12-07 兰州理工大学 Camera pose self-calibrating method based on objective plane motion feature
CN106204574B (en) * 2016-07-07 2018-12-21 兰州理工大学 Camera pose self-calibrating method based on objective plane motion feature
CN106530341A (en) * 2016-11-01 2017-03-22 成都理工大学 Point registration algorithm capable of keeping local topology invariance
CN106530341B (en) * 2016-11-01 2019-12-31 成都理工大学 Point registration algorithm for keeping local topology invariance
WO2023103377A1 (en) * 2021-12-09 2023-06-15 上海商汤智能科技有限公司 Calibration method and apparatus, electronic device, storage medium, and computer program product

Also Published As

Publication number Publication date
CN102163335B (en) 2013-02-13

Similar Documents

Publication Publication Date Title
CN107808407B (en) Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
US8798387B2 (en) Image processing device, image processing method, and program for image processing
US20130163853A1 (en) Apparatus for estimating robot position and method thereof
CN102163335B (en) Multi-camera network structure parameter self-calibration method without inter-camera feature point matching
WO2018159168A1 (en) System and method for virtually-augmented visual simultaneous localization and mapping
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
US20130258067A1 (en) System and method for trinocular depth acquisition with triangular sensor
Fernandez-Moral et al. Extrinsic calibration of a set of range cameras in 5 seconds without pattern
CN101930603B (en) Method for fusing image data of medium-high speed sensor network
CN105245841A (en) CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system
US20150350609A1 (en) Method and apparatus for sensing moving ball
WO2020228453A1 (en) Pose tracking method, pose tracking device and electronic device
JP2009288885A (en) Lane detection device, lane detection method and lane detection program
Fiala et al. Visual odometry using 3-dimensional video input
CN112258409A (en) Monocular camera absolute scale recovery method and device for unmanned driving
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
KR101203816B1 (en) Robot fish localization system using artificial markers and method of the same
CN112184767A (en) Method, device, equipment and storage medium for tracking moving object track
CN111583316A (en) Method for realizing vision autonomous positioning system
CN115100744A (en) Badminton game human body posture estimation and ball path tracking method
Ding et al. Opportunistic image acquisition of individual and group activities in a distributed camera network
Li et al. Multiple feature points representation in target localization of wireless visual sensor networks
WO2005004060A1 (en) Contour extracting device, contour extracting method, and contour extracting program
CN113838101B (en) Target tracking method suitable for camera network with overlapped view field
CN109961092A (en) A kind of binocular vision solid matching method and system based on parallax anchor point

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130213

Termination date: 20140519