CN100557634C - Camera calibration method based on dual one-dimensional targets - Google Patents

Camera calibration method based on dual one-dimensional targets Download PDF

Info

Publication number
CN100557634C
CN100557634C · CNB2008101029783A · CN200810102978A
Authority
CN
China
Prior art keywords
camera
dimension
coordinate system
target image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2008101029783A
Other languages
Chinese (zh)
Other versions
CN101261738A (en)
Inventor
孙军华
张广军
吴子彦
杨珍
魏振忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNB2008101029783A priority Critical patent/CN100557634C/en
Publication of CN101261738A publication Critical patent/CN101261738A/en
Application granted granted Critical
Publication of CN100557634C publication Critical patent/CN100557634C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a camera calibration method based on dual one-dimensional (1D) targets. The method comprises the following steps: arranging two arbitrarily placed 1D targets; capturing target images with the camera from different viewing angles, establishing the camera coordinate system and the image-plane coordinate system for each camera position, and establishing a world coordinate system; and, after distortion-correcting each captured target image, solving for the camera's intrinsic and extrinsic parameters. With the calibration method of the present invention, the two 1D targets serve as the calibration object: after the targets are placed arbitrarily in the camera's field of view, target images are captured from different angles, and extracting three or more feature points from each target image suffices to calibrate the camera's intrinsic and extrinsic parameters. No auxiliary equipment is required, operation is simple, calibration accuracy is improved, and the calibration range is enlarged.

Description

Camera calibration method based on dual one-dimensional targets
Technical field
The present invention relates to computer vision, and in particular to a camera calibration method based on dual one-dimensional (1D) targets.
Background technology
Camera calibration is an important and fundamental problem in computer vision and photogrammetry. It refers to determining the positional relationship between camera image pixels and the corresponding scene points; specifically, given a camera model, the model's intrinsic and extrinsic parameters are solved from the image coordinates and world coordinates of known feature points.
Early on, R. Y. Tsai proposed a camera calibration method based on a 3D calibration object of known structure in "An efficient and accurate camera calibration technique for 3D machine vision", Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 364-374, 1986. Although this method achieves high calibration accuracy, precise 3D calibration objects are difficult to manufacture, which limits the method's use in engineering applications.
Z. Zhang proposed a camera calibration method based on a planar target in "A flexible new technique for camera calibration", IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(11): pp. 1330-1334, 2000. By imaging the planar target at two or more different positions, the camera can be calibrated without knowing the relative motion between the target and the camera. However, the calibration accuracy of this method depends heavily on the accuracy of the planar target, and calibration generally requires the target to occupy as large an area of the field of view as possible. When the measurement range is large, manufacturing a large, high-precision planar target is difficult and its accuracy is hard to guarantee, which degrades the calibration accuracy.
A one-dimensional (1D) calibration object has a simple structure, is easy to manufacture, and is free of self-occlusion. Moreover, in large field-of-view applications, fabricating a large 1D target is much easier than fabricating a planar or 3D target of comparable scale. Camera calibration methods based on 1D targets have therefore gradually become a research focus.
Z. Zhang proposed a camera calibration method based on a 1D target in "Camera calibration with one-dimensional objects", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, Vol. 26, No. 7, 892-899. This method requires one end of the 1D target to be held fixed while several images of the rotating target are captured, which is difficult to realize precisely in engineering practice; the operation is complex and the calibration accuracy suffers. Wu Fuchao et al. proposed a 1D-target calibration method based on planar motion in "Camera calibration with moving one-dimensional objects", Pattern Recognition, 2005, 38(5), 755-765, but realizing the planar motion depends on a motion platform, i.e. this method requires auxiliary equipment. Wang Liang et al. proposed a calibration method using a 1D target undergoing arbitrary rigid motion in "Multi-camera calibration based on a 1D calibration object", Acta Automatica Sinica, Vol. 33, No. 3, 2007, 225-231, but that method applies only to multi-camera calibration and cannot calibrate a single camera.
In summary, although prior-art camera calibration methods based on 1D targets have many advantages for large field-of-view applications, they still suffer from limited calibration accuracy, complex operation, the need for auxiliary equipment, and a restricted calibration range.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a camera calibration method based on dual 1D targets that improves calibration accuracy, enlarges the calibration range, requires no auxiliary equipment, and is simple to operate.
To achieve the above purpose, the technical scheme of the present invention is realized as follows:
A camera calibration method based on dual one-dimensional targets, the method comprising:
a. arranging two arbitrarily placed 1D targets;
b. capturing target images with the camera from different viewing angles, establishing the camera coordinate system and the image-plane coordinate system for each camera position, and establishing a world coordinate system;
c. after distortion-correcting each captured target image, solving for the camera's intrinsic and extrinsic parameters.
After step c, the method may further comprise: nonlinearly optimizing the intrinsic and extrinsic parameters to obtain their optimal solution.
The target image captured in step b contains at least the two 1D targets, and each 1D target contains at least three collinear feature points.
Establishing the world coordinate system in step b means: taking the camera coordinate system at the camera's first position as the world coordinate system.
Solving for the intrinsic parameters in step c comprises: for each distortion-corrected target image, computing the coordinates of the vanishing points of the lines through the feature points of the two 1D targets; then, using the fact that the spatial angle between the two 1D targets is constant, solving for the intrinsic parameters from the obtained vanishing-point coordinates.
Solving for the extrinsic parameters in step c comprises:
c1. selecting any one target image and, using the vanishing-point coordinates of the line through each 1D target's feature points, the distortion-corrected image coordinates of the two endpoint feature points, and the known spatial distance between the two endpoints, computing the coordinates of each target's endpoint feature points in the camera coordinate system;
c2. obtaining the coordinates of the endpoint feature points of both 1D targets in the world coordinate system;
c3. computing, by unifying the corresponding (homonymous) point coordinates, the transformation matrix from the world coordinate system to the camera coordinate system at the position where the image of step c1 was captured, which serves as the camera's extrinsic parameters for that position.
The nonlinear optimization of the intrinsic and extrinsic parameters comprises: establishing an objective function whose parameters are the intrinsic parameters, the extrinsic parameters, and the relative positions of the two 1D targets; and, taking the intrinsic and extrinsic parameters obtained in step c as initial values, optimizing to obtain their optimal solution.
With the camera calibration method based on dual 1D targets provided by the present invention, the two 1D targets serve as the calibration object: the targets are placed arbitrarily in the camera's field of view, target images are captured from different angles, and extracting three or more feature points from each image suffices to calibrate the camera's intrinsic and extrinsic parameters. No auxiliary equipment is needed, operation is simple, calibration accuracy is improved, and the calibration range is enlarged.
Description of drawings
Fig. 1 is the flow chart of the camera calibration method based on dual 1D targets of the present invention;
Fig. 2 is a schematic diagram of the 1D target structure;
Fig. 3 is a schematic diagram of the calibration principle of the present invention;
Fig. 4, Fig. 5 and Fig. 6 are three target images captured by the camera.
Embodiment
The basic idea of the present invention is: using two 1D targets as the calibration object, place the two targets arbitrarily in the camera's field of view, capture target images from different viewing angles, and calibrate the camera's intrinsic and extrinsic parameters by extracting only three or more feature points from each target image.
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is the flow chart of the camera calibration method based on dual 1D targets of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 11: arrange two arbitrarily placed 1D targets.
Here, a 1D target is a linear target whose marker points are collinear. Fig. 2 is a schematic diagram of the 1D target structure: the target carries 3 to 10 circular dot markers with diameters of 3-20 mm and a positional accuracy of 0.001-0.01 mm; the dot centers are the feature points, and all feature points are collinear.
Step 12: capture target images from different viewing angles, establish the camera coordinate system and the image-plane coordinate system for each camera position, and establish the world coordinate system.
Here, each target image captured by the camera must contain both 1D targets, and each 1D target must contain at least three collinear feature points.
Fig. 3 is a schematic diagram of the calibration principle of the present invention. As shown in Fig. 3, the camera coordinate system $O_c\text{-}x_c y_c z_c$ and the image-plane coordinate system $O\text{-}UV$ are established according to the camera's pose.
When establishing the world coordinate system, the camera coordinate system at any camera position may be used. Without loss of generality, the present invention takes the camera coordinate system at the first camera position as the world coordinate system.
Step 13: distortion-correct each captured target image, and compute, in every corrected image, the coordinates of the vanishing points of the lines through the feature points of the two 1D targets.
Distortion correction requires extracting the image coordinates of the 1D-target feature points in every target image. In the present embodiment the feature points are circular dots, and under perspective projection a circle images approximately as an ellipse on the camera's image plane. The feature points on the target image are therefore extracted by fitting an ellipse to the edge points of each dot and taking the ellipse center. The concrete extraction method is described in detail in Wei Zhenzhong's doctoral dissertation, "Research on an online flexible 3D coordinate measurement system based on machine vision", Beijing University of Aeronautics and Astronautics, and is not repeated here.
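The ellipse-center step above can be sketched as follows (an editorial illustration, not part of the original disclosure; the function name and the algebraic conic fit are the editor's choice — the cited dissertation may use a different fitting scheme). A general conic is fit to the dot's edge points by least squares, and the center is the point where the conic's gradient vanishes:

```python
import numpy as np

def ellipse_center(pts):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to edge
    points by least squares (SVD null vector) and return its center,
    i.e. the point where both partial derivatives of the conic vanish."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The singular vector of the smallest singular value gives the conic.
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, f = Vt[-1]
    # Center: gradient of the conic is zero -> 2x2 linear system.
    M = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(M, -np.array([d, e]))

# Synthetic edge points of an ellipse centered at (3.0, -1.5)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([3.0 + 4 * np.cos(t) + 1.2 * np.sin(t),
                       -1.5 + 2 * np.sin(t)])
cx, cy = ellipse_center(pts)
```

Note that the center of the fitted image ellipse is only an approximation of the projected circle center; for the small dots used here the bias is negligible.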
Distortion correction of the target image is prior art. The camera's distortion model is:

$$u = (u_u - u_0)\,(1 + k_1 r_u^2 + k_2 r_u^4),\qquad v = (v_u - v_0)\,(1 + k_1 r_u^2 + k_2 r_u^4),\qquad r_u^2 = u_u^2 + v_u^2 \tag{1}$$

where $(u, v)$ is the real (distorted) image coordinate corresponding to the ideal coordinate $(u_u, v_u)$, $(u_0, v_0)$ is the camera's principal point, and $k_1, k_2$ are the radial distortion coefficients. For the correction, an objective function based on the linearity of collinear feature points in the image is established first, and the optimal distortion coefficients are then solved by nonlinear optimization. A concrete correction method is described in detail in Moumen Ahmed's article "Nonmetric Calibration of Camera Lens Distortion: Differential Methods and Robust Estimation", IEEE Transactions on Image Processing, 2005, Vol. 14, No. 8, 1215-1230.
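The forward direction of the radial model in Eq. (1) can be sketched as follows (an editorial illustration, not from the patent; the function name is the editor's, and the code assumes the radius is taken about the principal point, a detail Eq. (1) leaves implicit). Correction then amounts to inverting this map numerically:

```python
import numpy as np

def distort(u_u, v_u, u0, v0, k1, k2):
    """Radial distortion of Eq. (1): map the ideal image coordinate
    (u_u, v_u) to the real (distorted) coordinate, given the principal
    point (u0, v0) and radial coefficients k1, k2."""
    du, dv = u_u - u0, v_u - v0
    r2 = du * du + dv * dv            # squared radius about the principal point
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return u0 + du * factor, v0 + dv * factor
```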
Using three or more feature points on the 1D target whose mutual distances are known, the vanishing point of the spatial line can be computed in the image; a concrete method is given in R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge: Cambridge University Press, 2000.
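One standard way to compute this vanishing point, sketched here as an editorial aid (the function name is the editor's), uses the projective invariance of the cross ratio: the world cross ratio of (A, B; C, point at infinity) equals the image cross ratio of (a, b; c, v), which is solved for v:

```python
import numpy as np

def vanishing_point(a, b, c, AB, BC):
    """Vanishing point of a spatial line from the images a, b, c of
    three collinear points A, B, C with known spacings AB = |AB| and
    BC = |BC|, via invariance of the cross ratio under projection."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    d = (c - a) / np.linalg.norm(c - a)       # unit direction of image line
    ta, tb, tc = 0.0, float((b - a) @ d), float((c - a) @ d)
    k = (AB + BC) / BC                        # world cross ratio (A, B; C, inf)
    p, q = tc - ta, tc - tb
    tv = (p * tb - k * q * ta) / (p - k * q)  # solve (a, b; c, v) = k for v
    return a + tv * d
```

With more than three points, a least-squares variant over all point triples improves robustness; the three-point case shows the principle.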
Step 14: using the fact that the spatial angle between the two 1D targets is constant, solve for the camera's intrinsic parameters from the vanishing-point coordinates obtained in step 13.
The camera's intrinsic parameter matrix can be expressed as:

$$K = \begin{bmatrix} f_x & \alpha & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2}$$

where $f_x, f_y$ are the scale factors of the image plane's U and V axes respectively, and $\alpha$ is the skew factor between the U and V axes.
Since $f_x \approx f_y$, $\alpha \approx 0$, $u_0 \approx 0.5N_u$ and $v_0 \approx 0.5N_v$, where $N_u, N_v$ are the image's pixel counts along the U and V axes, we may set $f_x \approx f_y = f$ and move the coordinate origin to the principal point, so that the intrinsic matrix simplifies to:

$$\tilde{K} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3}$$
Furthermore, if the homogeneous image coordinates of the vanishing points of the two 1D targets in an image are $v_1 = (x_1, y_1, 1)^T$ and $v_2 = (x_2, y_2, 1)^T$, and the spatial angle between the two targets' feature-point lines is $\theta$, then:

$$\cos\theta = \frac{v_1^T \omega\, v_2}{\sqrt{(v_1^T \omega\, v_1)(v_2^T \omega\, v_2)}} \tag{4}$$

where $\omega = (K K^T)^{-1}$. After the coordinate origin is moved to the principal point, formula (4) becomes:

$$\cos\theta = \frac{\tilde{v}_1^T \tilde{\omega}\, \tilde{v}_2}{\sqrt{(\tilde{v}_1^T \tilde{\omega}\, \tilde{v}_1)(\tilde{v}_2^T \tilde{\omega}\, \tilde{v}_2)}} \tag{5}$$

where $\tilde{\omega} = (\tilde{K}\tilde{K}^T)^{-1}$, and $\tilde{v}_1 = (x_1 - u_0, y_1 - v_0, 1)^T$, $\tilde{v}_2 = (x_2 - u_0, y_2 - v_0, 1)^T$ are the homogeneous coordinates of the two targets' vanishing points in the translated coordinate system.
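Eq. (4) can be checked numerically with the following sketch (an editorial aid, not from the patent; the function name is the editor's). A vanishing point is the image of a 3D direction, so the angle recovered through $\omega = (KK^T)^{-1}$ must equal the angle between the directions:

```python
import numpy as np

def angle_between_targets(v1, v2, K):
    """Eq. (4): spatial angle between two 1D targets from their
    vanishing points v1, v2 (homogeneous image coordinates) and the
    intrinsic matrix K; omega = (K K^T)^-1 is the image of the
    absolute conic."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    omega = np.linalg.inv(K @ K.T)
    c = (v1 @ omega @ v2) / np.sqrt((v1 @ omega @ v1) * (v2 @ omega @ v2))
    return np.arccos(c)
```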
If the vanishing points of the two 1D targets in the m-th and n-th images are $\tilde{v}_{m,1}, \tilde{v}_{m,2}$ and $\tilde{v}_{n,1}, \tilde{v}_{n,2}$ respectively, then, because the spatial angle between the two targets is constant:

$$\frac{(\tilde{v}_{m,1}^T \tilde{\omega}\, \tilde{v}_{m,2})^2}{(\tilde{v}_{m,1}^T \tilde{\omega}\, \tilde{v}_{m,1})(\tilde{v}_{m,2}^T \tilde{\omega}\, \tilde{v}_{m,2})} = \frac{(\tilde{v}_{n,1}^T \tilde{\omega}\, \tilde{v}_{n,2})^2}{(\tilde{v}_{n,1}^T \tilde{\omega}\, \tilde{v}_{n,1})(\tilde{v}_{n,2}^T \tilde{\omega}\, \tilde{v}_{n,2})} \tag{6}$$

Substituting formula (3) and $\tilde{\omega} = (\tilde{K}\tilde{K}^T)^{-1}$, formula (6) can be written as a cubic equation in one unknown; solving this equation and taking the positive root yields $f$.
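The polynomial solve in Eq. (6) can be sketched as follows (an editorial illustration, not the patent's derivation; the function name, the substitution $w = 1/f^2$, and the use of `numpy.roots` rather than a closed-form cubic are the editor's choices). Each quadratic form $\tilde{v}_i^T\tilde{\omega}\tilde{v}_j = w(x_i x_j + y_i y_j) + 1$ is linear in $w$, so cross-multiplying Eq. (6) gives a low-degree polynomial whose positive real roots are candidate focal lengths:

```python
import numpy as np

def solve_f(vm1, vm2, vn1, vn2):
    """Candidate focal lengths from Eq. (6).  Inputs are the
    principal-point-centred homogeneous vanishing points (x, y, 1) of
    the two targets in images m and n.  With w = 1/f^2 the equation
    cross-multiplies into a polynomial in w; f = scale/sqrt(w)."""
    vs = [np.asarray(v, float) for v in (vm1, vm2, vn1, vn2)]
    s = max(abs(v[i]) for v in vs for i in (0, 1)) or 1.0  # conditioning scale
    m1, m2, n1, n2 = [np.array([v[0] / s, v[1] / s]) for v in vs]
    lin = lambda u, v: np.array([u @ v, 1.0])   # poly  w*(x x' + y y') + 1
    sq = lambda p: np.polymul(p, p)
    lhs = np.polymul(sq(lin(m1, m2)), np.polymul(lin(n1, n1), lin(n2, n2)))
    rhs = np.polymul(sq(lin(n1, n2)), np.polymul(lin(m1, m1), lin(m2, m2)))
    roots = np.roots(np.polysub(lhs, rhs))
    ws = [r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 1e-12]
    return sorted(s / np.sqrt(w) for w in ws)
```

With exact vanishing points the true focal length is among the returned candidates; with more than two images, the common root across image pairs disambiguates.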
Then an objective function over all parameters $f_x, f_y, \alpha, u_0, v_0$ of the intrinsic matrix $K$ is established:

$$\sum_{m=1}^{M} \left\| \cos\theta_m - \mu(\cos\theta) \right\|^2 = \min \tag{7}$$

where $\cos\theta_m = \dfrac{v_{m,1}^T \omega\, v_{m,2}}{\sqrt{(v_{m,1}^T \omega\, v_{m,1})(v_{m,2}^T \omega\, v_{m,2})}}$, $\mu(\cos\theta) = \dfrac{1}{M}\sum_{i=1}^{M}\cos\theta_i$, $m = 1, 2, \ldots, M$, and $M$ is the number of captured target images.
Then, taking $f_x = f$, $f_y = f$, $\alpha = 0$, $u_0 = 0.5N_u$, $v_0 = 0.5N_v$ as initial values, the five variables are optimized with the Levenberg-Marquardt nonlinear optimization algorithm, yielding the camera's intrinsic parameter matrix $K$.
Step 15: solve for the camera's extrinsic parameters from the vanishing-point coordinates of step 13, the distortion-corrected image coordinates of the endpoint feature points, and the known spatial distance between the endpoint feature points.
For either 1D target in any target image, the following system of equations holds:

$$s_1 p_1 = K P_1,\qquad s_2 p_2 = K P_2,\qquad \|P_1 - P_2\| = L,\qquad (s_1 p_1 - s_2 p_2) \times v = 0 \tag{8}$$

where $P_1 = (x_{c1}, y_{c1}, z_{c1})^T$ and $P_2 = (x_{c2}, y_{c2}, z_{c2})^T$ are the 3D coordinates of the target's two endpoint feature points in the camera coordinate system $O_c\text{-}x_c y_c z_c$; $p_1 = (u_1, v_1, 1)^T$ and $p_2 = (u_2, v_2, 1)^T$ are the homogeneous image coordinates of the two endpoint feature points; $v = (x, y, 1)^T$ is the vanishing point of the line through the target's feature points; $L$ is the spatial distance between the two endpoint feature points; and $s_1, s_2$ are non-zero constants. The last equation expresses that the back-projected segment is parallel to the direction defined by the vanishing point.

Selecting any one target image and using the vanishing-point coordinates of the line through each target's feature points, the distortion-corrected endpoint image coordinates, and the endpoint distance, the system (8) is solved to obtain the endpoint coordinates $P_1$ and $P_2$ of each 1D target in the camera coordinate system $O_c\text{-}x_c y_c z_c$.
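The solution of the system (8) can be sketched as follows (an editorial illustration; the function name and the SVD null-space formulation are the editor's — the patent does not prescribe a particular solver). Since $p_1$, $p_2$ and $v$ are collinear image points, the linear relation $s_1 p_1 - s_2 p_2 - \mu v = 0$ has a one-dimensional null space, which fixes the depths up to the scale set by $L$:

```python
import numpy as np

def endpoints_in_camera(p1, p2, v, K, L):
    """Recover the camera-frame 3D coordinates of a 1D target's two
    endpoint features from their homogeneous image points p1, p2,
    the vanishing point v of the target line, the intrinsics K and
    the known endpoint distance L (cf. system (8))."""
    p1, p2, v = (np.asarray(x, float) for x in (p1, p2, v))
    A = np.column_stack([p1, -p2, -v])        # s1*p1 - s2*p2 - mu*v = 0
    _, _, Vt = np.linalg.svd(A)
    s1, s2, _ = Vt[-1]                        # null vector: (s1, s2, mu)
    Kinv = np.linalg.inv(K)
    P1, P2 = s1 * (Kinv @ p1), s2 * (Kinv @ p2)
    scale = L / np.linalg.norm(P1 - P2)       # enforce |P1 - P2| = L
    if scale * P1[2] < 0:                     # endpoints must be in front
        scale = -scale
    return scale * P1, scale * P2
```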
From the system (8), the coordinates of all feature points of both targets in the local measurement (camera) coordinate system at each camera position are obtained. Then the transformation matrix from the world coordinate system to the camera coordinate system at the position where the target image was captured, i.e. the camera's extrinsic parameters for that position, is computed by unifying the corresponding (homonymous) point coordinates. A concrete method is described in detail on pages 186-187 of Machine Vision by Zhang Guangjun (ISBN 7-03-014717-0).
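The coordinate-unification step is an absolute-orientation problem: find the rigid transform mapping the world-frame points onto the camera-frame points. A minimal sketch using the standard SVD (Kabsch) solution follows (an editorial aid; the cited book may present a different algorithm, and the function name is the editor's):

```python
import numpy as np

def rigid_transform(Pw, Pc):
    """Least-squares rotation R and translation T with Pc ~= R @ Pw + T,
    from corresponding 3D point sets (rows of Pw, Pc) -- the
    'unification of corresponding coordinates' step."""
    cw, cc = Pw.mean(axis=0), Pc.mean(axis=0)
    H = (Pw - cw).T @ (Pc - cc)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cc - R @ cw
    return R, T
```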
Step 16: nonlinearly optimize the camera's intrinsic and extrinsic parameters to obtain their optimal solution.
Here, taking the intrinsic parameters of step 14 and the extrinsic parameters of step 15 as initial values, all parameters are optimized nonlinearly with the sum of squared projection errors, on the image plane, of all feature points of both targets as the objective function.
First, an objective function with the intrinsic parameters, the extrinsic parameters and the relative positions of the two 1D targets as its parameters is established:

$$\sum_{m=1}^{M}\sum_{r=1}^{2}\sum_{j=1}^{J} \left\| \hat{x}_{m,r,j}\!\left(K, k_1, k_2, R_m, T_m, P_{1,r,1}, \theta_r, \varphi_r\right) - x_{m,r,j} \right\|^2 = \min \tag{9}$$

where $m$ indexes the image, $r$ the target and $j$ the feature point on the target; $\hat{x}_{m,r,j}$ is the image coordinate predicted by the camera model and $x_{m,r,j}$ the real image coordinate; $P_{1,r,1}$ is the position of the first endpoint of the r-th target in the world coordinate system; $\theta_r$ is the angle between the x axis and the projection of the r-th target onto the xy plane of the world coordinate system; and $\varphi_r$ is the angle between the r-th target and the xy plane of the world coordinate system.
Then, taking the intrinsic and extrinsic parameters obtained in steps 14 and 15 as initial values, this nonlinear optimization problem is solved with the Levenberg-Marquardt method, yielding the optimal intrinsic and extrinsic parameters.
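The Levenberg-Marquardt refinement can be sketched in miniature as follows (an editorial illustration only: the patent optimizes all intrinsics, distortion, extrinsics and target poses jointly, whereas this toy refines just $(f, u_0, v_0)$ against known camera-frame points; function names and the numeric Jacobian are the editor's):

```python
import numpy as np

def project(params, Pc):
    """Pinhole projection with K = [[f,0,u0],[0,f,v0],[0,0,1]]."""
    f, u0, v0 = params
    return np.column_stack([f * Pc[:, 0] / Pc[:, 2] + u0,
                            f * Pc[:, 1] / Pc[:, 2] + v0])

def refine_lm(params0, Pc, obs, iters=50, lam=1e-3):
    """Toy Levenberg-Marquardt with a forward-difference Jacobian:
    minimise the sum of squared reprojection errors over (f, u0, v0)."""
    p = np.asarray(params0, float)
    residuals = lambda q: (project(q, Pc) - obs).ravel()
    for _ in range(iters):
        r = residuals(p)
        J = np.empty((r.size, p.size))
        for k in range(p.size):              # numeric Jacobian, column k
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            J[:, k] = (residuals(p + dp) - r) / dp[k]
        A = J.T @ J + lam * np.eye(p.size)   # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5     # accept step, reduce damping
        else:
            lam *= 10.0                      # reject step, increase damping
    return p
```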
In an experiment, the calibrated camera is a Canon EOS-5D digital camera with a 50 mm F/4 lens and a CMOS sensor resolution of 4368 x 2912. The working distance is 200 mm and the measurement range is 200 mm x 160 mm. Six feature points are taken on each of the two 1D targets, with a feature-point spacing of 12.728 mm. The camera captured three target images from different angles, shown in Fig. 4, Fig. 5 and Fig. 6. The feature-point image coordinates obtained by the method of step 13 are listed in Table 1.

[Table 1: extracted feature-point image coordinates - data not reproduced in this text version]
Using the data of the target image shown in Fig. 4 and the method of step 13, the distortion coefficients are found to be $k_1 = 0.55$ and $k_2 = -3.93$.
The intrinsic parameter matrix obtained by the methods of steps 13-14 is:

$$K = \begin{bmatrix} 6131.02 & 0 & 2187.86 \\ 0 & 6111.01 & 1469.64 \\ 0 & 0 & 1 \end{bmatrix}$$
The extrinsic parameters obtained by the method of step 15 for each image are as follows.

For the target image of Fig. 4:

$$R_1 = \begin{bmatrix} 0.638 & 0.684 & 0.351 \\ 0.684 & -0.714 & 0.146 \\ 0.351 & 0.147 & -0.924 \end{bmatrix},\qquad T_1 = \begin{bmatrix} 135.35 \\ -81.36 \\ 625.81 \end{bmatrix}$$

For the target image of Fig. 5:

$$R_2 = \begin{bmatrix} 0.723 & 0.669 & -0.173 \\ 0.688 & -0.721 & 0.082 \\ -0.070 & -0.179 & -0.981 \end{bmatrix},\qquad T_2 = \begin{bmatrix} 121.67 \\ -107.93 \\ 654.70 \end{bmatrix}$$

For the target image of Fig. 6:

$$R_3 = \begin{bmatrix} 0.590 & 0.791 & -0.161 \\ 0.686 & -0.596 & -0.417 \\ -0.426 & 0.136 & -0.895 \end{bmatrix},\qquad T_3 = \begin{bmatrix} 131.32 \\ -84.15 \\ 672.25 \end{bmatrix}$$
Finally, the intrinsic and extrinsic parameters are optimized by the method of step 16; the optimal solutions are listed in Table 2.

[Table 2: optimized intrinsic and extrinsic parameters - data not reproduced in this text version]
The root-mean-square error between the feature points' image coordinates projected through the camera model and their real image coordinates is 0.263 pixels.
The above are only preferred embodiments of the present invention and are not intended to limit its protection scope.

Claims (4)

1. A camera calibration method based on dual one-dimensional (1D) targets, characterized in that the method comprises:
a. arranging two arbitrarily placed 1D targets;
b. capturing target images with the camera from different viewing angles, establishing the camera coordinate system and the image-plane coordinate system for each camera position, and establishing a world coordinate system;
c. after distortion-correcting each captured target image, solving for the camera's intrinsic and extrinsic parameters;
wherein each captured target image contains the two 1D targets, and each 1D target contains at least three collinear feature points;
wherein solving for the intrinsic parameters comprises: for each distortion-corrected target image, computing the coordinates of the vanishing points of the lines through the feature points of the two 1D targets; then, using the fact that the spatial angle between the two targets is constant, solving for the intrinsic parameters from the obtained vanishing-point coordinates;
wherein solving for the extrinsic parameters comprises: selecting any one target image; using the vanishing-point coordinates of the line through each target's feature points, the distortion-corrected image coordinates of the two endpoint feature points, and the spatial distance between the two endpoints, computing the coordinates of each target's endpoint feature points in the camera coordinate system; obtaining the coordinates of the endpoint feature points of both targets in the world coordinate system; and computing, by unifying the corresponding (homonymous) point coordinates, the transformation matrix from the world coordinate system to the camera coordinate system at the position where the selected image was captured, which serves as the camera's extrinsic parameters for that position;
wherein a 1D target is a linear target with three or more collinear feature points.
2. The method according to claim 1, characterized in that after step c the method further comprises: nonlinearly optimizing the camera's intrinsic and extrinsic parameters to obtain their optimal solution.
3. The method according to claim 1, characterized in that establishing the world coordinate system in step b comprises: taking the camera coordinate system at the camera's first position as the world coordinate system.
4. The method according to claim 2, characterized in that the nonlinear optimization comprises: establishing an objective function whose parameters are the intrinsic parameters, the extrinsic parameters and the relative positions of the two 1D targets; and, taking the intrinsic and extrinsic parameters obtained in step c as initial values, optimizing to obtain their optimal solution;
wherein the objective function is:

$$\sum_{m=1}^{M}\sum_{r=1}^{2}\sum_{j=1}^{J} \left\| \hat{x}_{m,r,j}\!\left(K, k_1, k_2, R_m, T_m, P_{1,r,1}, \theta_r, \varphi_r\right) - x_{m,r,j} \right\|^2 = \min$$

where $m$ indexes the image, $m = 1, 2, \ldots, M$, with $M \ge 2$ the number of captured target images; $r$ indexes the target, $r = 1, 2$; $j$ indexes the feature point on the target, $j = 1, 2, \ldots, J$, with $J \ge 3$ the number of feature points per target; $\hat{x}_{m,r,j}$ is the image coordinate predicted by the camera model and $x_{m,r,j}$ the real image coordinate; $P_{1,r,1}$ is the position of the first endpoint of the r-th target in the world coordinate system; $\theta_r$ is the angle between the x axis and the projection of the r-th target onto the xy plane of the world coordinate system; $\varphi_r$ is the angle between the r-th target and the xy plane of the world coordinate system; $k_1, k_2$ are the distortion coefficients; $K$ is the computed intrinsic parameter matrix; and $R_m, T_m$ are the camera's extrinsic parameters for the m-th target image.
CNB2008101029783A 2008-03-28 2008-03-28 Camera calibration method based on dual one-dimensional targets Expired - Fee Related CN100557634C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2008101029783A CN100557634C (en) 2008-03-28 2008-03-28 Camera calibration method based on dual one-dimensional targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2008101029783A CN100557634C (en) 2008-03-28 2008-03-28 Camera calibration method based on dual one-dimensional targets

Publications (2)

Publication Number Publication Date
CN101261738A CN101261738A (en) 2008-09-10
CN100557634C true CN100557634C (en) 2009-11-04

Family

ID=39962178

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2008101029783A Expired - Fee Related CN100557634C (en) 2008-03-28 2008-03-28 Camera calibration method based on dual one-dimensional targets

Country Status (1)

Country Link
CN (1) CN100557634C (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101586943B (en) * 2009-07-15 2011-03-09 北京航空航天大学 Method for calibrating structure light vision transducer based on one-dimensional target drone
CN101975588B (en) 2010-08-20 2012-07-11 北京航空航天大学 Global calibration method and device of rigid rod of multisensor vision measurement system
CN102034236B (en) * 2010-12-01 2012-12-26 北京航空航天大学 Multi-camera layered calibration method based on one-dimensional object
CN101996407B (en) * 2010-12-01 2013-02-06 北京航空航天大学 Colour calibration method for multiple cameras
CN102692183B (en) * 2011-03-23 2014-10-22 比比威株式会社 Measurement method of initial positions and poses of multiple cameras
CN102957895A (en) * 2011-08-25 2013-03-06 上海安维尔信息科技有限公司 Satellite map based global mosaic video monitoring display method
CN102622747B (en) * 2012-02-16 2013-10-16 北京航空航天大学 Camera parameter optimization method for vision measurement
CN102788552B (en) * 2012-02-28 2016-04-06 王锦峰 A kind of linear coordinate calibration method
CN103512558B (en) * 2013-10-08 2016-08-17 北京理工大学 A kind of conical target binocular video pose measuring method and target pattern
CN103578109B (en) * 2013-11-08 2016-04-20 中安消技术有限公司 A kind of CCTV camera distance-finding method and device
CN106197381A (en) * 2016-09-07 2016-12-07 吉林大学 Motor racing pose full filed code detection system
CN107202551A (en) * 2017-06-19 2017-09-26 合肥斯科尔智能科技有限公司 A kind of 3D printer printer model precision detection system
WO2019056360A1 (en) * 2017-09-25 2019-03-28 深圳大学 Method and device for positioning center of high-precision circular mark point resulting from large-distortion lens
CN108592789A (en) * 2018-03-29 2018-09-28 浙江精工钢结构集团有限公司 A kind of steel construction factory pre-assembly method based on BIM and machine vision technique
CN108571971B (en) * 2018-05-17 2021-03-09 北京航空航天大学 AGV visual positioning system and method
WO2019232793A1 (en) * 2018-06-08 2019-12-12 Oppo广东移动通信有限公司 Two-camera calibration method, electronic device and computer-readable storage medium
CN110264508B (en) * 2019-06-25 2021-01-01 北京理工大学 Vanishing point estimation method based on convex quadrilateral principle
CN110487249A (en) * 2019-07-17 2019-11-22 广东工业大学 A kind of unmanned plane scaling method for structure three-dimensional vibration measurement
CN112435300B (en) * 2019-08-26 2024-06-04 华为云计算技术有限公司 Positioning method and device
CN115824038B (en) * 2022-08-17 2023-09-29 宁德时代新能源科技股份有限公司 Calibration ruler, calibration method and device, and detection method and device
CN115200555B (en) * 2022-09-19 2022-11-25 中国科学院长春光学精密机械与物理研究所 Dynamic photogrammetry internal and external orientation element calibration method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-camera calibration based on a one-dimensional calibration object. 王亮, 吴福朝. Acta Automatica Sinica (自动化学报), Vol. 33, No. 3. 2007 *
Self-calibration of a binocular rig based on a one-dimensional moving object. 王年, 唐俊, 韦穗, 范益政, 梁栋. Robot (机器人), Vol. 28, No. 2. 2006 *

Also Published As

Publication number Publication date
CN101261738A (en) 2008-09-10

Similar Documents

Publication Publication Date Title
CN100557634C (en) A camera calibration method based on dual one-dimensional targets
CN102376089B (en) Target correction method and system
CN102072725B (en) Spatial three-dimension (3D) measurement method based on laser point cloud and digital measurable images
CN101586943B (en) Method for calibrating a structured-light vision sensor based on a one-dimensional target
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN104457710B (en) Aviation digital photogrammetry method based on non-metric digital camera
CN112132908B (en) Camera external parameter calibration method and device based on intelligent detection technology
Hui et al. Line-scan camera calibration in close-range photogrammetry
CN110672020A (en) Stand tree height measuring method based on monocular vision
CN105046715B (en) A kind of line-scan digital camera scaling method based on interspace analytic geometry
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN105300362A (en) Photogrammetry method used for RTK receivers
CN104279960A (en) Method for measuring size of object by mobile equipment
Hui et al. A novel line scan camera calibration technique with an auxiliary frame camera
CN102314674B (en) Registering method for data texture image of ground laser radar
CN104268876A (en) Camera calibration method based on partitioning
CN104240262A (en) Camera external parameter calibration device and calibration method for photogrammetry
CN107689065A (en) A kind of GPS binocular cameras demarcation and spatial point method for reconstructing
CN109974618A (en) The overall calibration method of multisensor vision measurement system
CN102944191A (en) Method and device for three-dimensional vision measurement data registration based on planar circle target
CN105607760A (en) Trace restoration method and system based on micro inertial sensor
CN108537849A (en) The scaling method of the line-scan digital camera of three-dimensional right angle target based on donut
CN107917700A (en) The 3 d pose angle measuring method of target by a small margin based on deep learning
CN104167001A (en) Large-visual-field camera calibration method based on orthogonal compensation
CN108154535A (en) Camera Calibration Method Based on Collimator

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091104

Termination date: 20200328