WO2020063059A1 - Stereo calibration method for a movable vision system - Google Patents

Stereo calibration method for a movable vision system

Info

Publication number
WO2020063059A1
WO2020063059A1 (PCT/CN2019/096546)
Authority
WO
WIPO (PCT)
Prior art keywords
freedom
calibration
camera
vision system
degree
Prior art date
Application number
PCT/CN2019/096546
Other languages
English (en)
French (fr)
Inventor
王开放
杨冬冬
张晓林
李嘉茂
王文浩
Original Assignee
上海爱观视觉科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海爱观视觉科技有限公司 filed Critical 上海爱观视觉科技有限公司
Priority to US17/280,747 priority Critical patent/US11663741B2/en
Priority to JP2021516446A priority patent/JP7185821B2/ja
Priority to DE112019004844.9T priority patent/DE112019004844T5/de
Publication of WO2020063059A1 publication Critical patent/WO2020063059A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix

Definitions

  • The present invention relates to a stereo calibration method for a vision system, and more particularly to a stereo calibration method for a multi-camera (two or more cameras) vision system in which at least one camera is independently movable.
  • Existing vision systems are mainly divided, by the number of cameras, into monocular and multi-camera vision systems, and further, according to whether the camera position is movable, into fixed vision systems and movable vision systems.
  • The monocular vision system is widely used because of its simple structure and low cost, but a single camera cannot obtain depth information of objects in the field of view.
  • Among multi-camera systems, binocular cameras are the most widely used. Multi-camera systems can obtain stereo images, but in order to compute the depth information of objects in the field of view, the intrinsic parameters of each camera and the extrinsic parameters between the cameras must be calibrated in advance.
  • The relative position between fixed binocular cameras does not change, so a single calibration suffices to compute the relative positional relationship between the two cameras; the related calibration algorithms are very mature and widely applied, for example camera calibration algorithms based on 3D targets and camera calibration algorithms based on 2D planar targets. These extend to fixed multi-camera vision systems: by calibrating the relative positional relationships pairwise, the relative positional relationship among all the cameras can be obtained.
  • In a movable multi-camera vision system the cameras move during use. Although the intrinsic parameters of each camera do not change with camera motion, the extrinsic parameters do; to compute the depth information of objects, the precise extrinsic parameters after the cameras have moved must be known.
  • The mechanically moving parts of a movable multi-camera vision system are limited by the current level of machining and by the assembly process, so it cannot be guaranteed that each mechanical axis moves strictly and precisely as designed, and the relative position between the camera element and the mechanical axis is likewise difficult to guarantee. Stereo vision, however, places high demands on the accuracy of the positional relationship between the cameras, so a good stereo vision result requires that errors of this kind be eliminated as far as possible.
  • The present invention provides a stereo calibration method for a movable vision system: even if the relative positions of the camera components have changed, and even if the relative positions of the axes and camera components contain errors caused by machining, assembly and similar factors, the relative positional relationship between the camera components can still be computed in real time from the position-recording components on the axes.
  • The stereo calibration method for a movable vision system proposed by the present invention computes the calibration parameters of the movable multi-camera system from a series of acquired images and the position information of each degree of freedom of motion; the extrinsic parameters of the camera components can then be computed in real time from these calibration parameters and the motion information of each degree of freedom.
  • The stereo calibration method is provided for a movable multi-camera vision system that includes:
  • at least two sets of camera components, each set of which includes at least one camera element capable of acquiring continuous images; each set of camera components has an arbitrary number of degrees of freedom of motion; and each degree of freedom of motion is provided with a position acquisition device capable of acquiring rotation or translation information;
  • at least one computing component that can process image information and the motion information of each degree of freedom of motion;
  • at least one control component that can control the motion of each degree of freedom of motion.
  • The stereo calibration method may include: recording the position information of each degree of freedom of each camera component at a reference position; performing stereo calibration at the reference position to obtain accurate relative position parameters between the camera elements; placing the calibration template in front of the camera elements and driving each camera element to move in each degree of freedom of motion; using the corresponding camera element to record one or more groups of images containing the complete calibration template while recording the position information of each degree of freedom of motion at the time each image is acquired; and obtaining, through computation by the computing component, the calibration results relating the camera element to each degree of freedom of motion, where the calibration results include the rotation matrix and translation matrix of the rotation axis of each degree of freedom of motion with respect to the camera element.
  • the position acquiring device may be installed on each degree of freedom of movement to acquire motion information of the corresponding degree of freedom of movement.
  • The movable vision system mentioned in the present invention includes: several camera components, each having an arbitrary number of degrees of freedom of motion whose position information can be obtained through a position-recording component (the camera elements cannot all have zero degrees of freedom of motion), where the camera element of each camera component can capture continuous images; a computing component for processing the relevant data and computing the required results; and a control component for controlling the degrees of freedom of motion in the vision system and obtaining the position information of each degree of freedom from the position-recording components.
  • The present invention takes the stereo calibration of a binocular movable vision system as its basis. For a movable multi-camera vision system with more than two cameras, the problem reduces to pairwise calibration: if the stereo calibration result between two particular camera elements is required, it suffices to perform the binocular movable stereo calibration for that pair. In general, a single reference position is recorded, fixed binocular calibration is first performed at that reference position for each pair that requires movable stereo calibration, giving the rotation-translation relationship of the pair from fixed binocular stereo calibration, and the subsequent steps of the present invention then give the real-time stereo calibration result after the pair of camera elements has moved.
  • The calibration template is a template from which fixed features can be extracted and whose features have known relative positions.
  • Artificial 2D or 3D targets of various kinds, or certain fixed natural scenes, may be used, provided that fixed feature information can be extracted by image processing algorithms and the relative positional relationship between the features is known.
  • In practice, precisely machined 3D targets or 2D planar targets are usually used as calibration templates to improve accuracy and reduce the difficulty of the calibration process.
  • the stereo calibration method of the movable vision system of the present invention may include the following steps:
  • For a system with more than two cameras, stereo calibration is performed pairwise, including: placing a calibration template in the common field of view of the pair of camera elements that requires stereo calibration, recording the acquired image information, and using a fixed binocular camera calibration algorithm to obtain a set of binocular stereo calibration results (R I(x,y), T I(x,y)), where R I(x,y) and T I(x,y) are respectively the rotation matrix and translation matrix between the two camera elements.
  • The calibration template is fixed in the field of view of the camera element whose motion axis is to be calibrated; the a-th motion axis is rotated several times, and the image acquired at each rotation and the position information of each motion axis are recorded.
  • The image sequence obtained from the rotations is processed with a calibration algorithm, and the results are substituted into the relationship model between the rotation-axis coordinate system and the camera element coordinate system, where R BCa and T BCa are the rotation matrix and translation matrix of the rotation axis of the a-th degree of freedom relative to the camera element coordinate system, N k is the number of degrees of freedom of the k-th camera component, R Bai is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position, R aCi and T aCi are the rotation and translation transformation matrices of the image coordinate system of the i-th picture taken while the a-th motion axis moves relative to the calibration template position, and P a is the total number of valid pictures obtained while the a-th motion axis was rotated during calibration. Several groups of equations are finally obtained, and solving them for the optimal solution gives the calibration result of each motion axis.
  • R′ and T′ are the rotation matrix and translation matrix in the extrinsic parameter result of the two camera components after the movable binocular vision subsystem has moved; R BCa and T BCa are the rotation matrix and translation matrix of the a-th motion axis relative to the camera element coordinate system; N k is the number of degrees of freedom of the k-th camera component; x and y are the indices of the pair, x being the "left eye" index and y the "right eye" index in the binocular sub-vision system; R a is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position; and R I(x,y), T I(x,y) are respectively the relative rotation matrix and translation matrix of the "left eye" and "right eye" at the reference position.
  • The order of steps (2) and (3) can be exchanged.
  • In step (1) of the stereo calibration method of the present invention, the initial information is acquired: a certain position of each degree of freedom of motion is chosen as the reference position, and the reference position must be such that the camera elements requiring stereo calibration share a certain common field of view.
  • The position information of each degree of freedom of each camera component is recorded to obtain the reference position information {θ Ia}, where k denotes the index of a camera component, K is the number of camera components of the whole system, N k is the number of degrees of freedom of the k-th camera component, and the total number of degrees of freedom of the whole system is N = N 1 + ... + N K.
  • Stereo calibration: as mentioned earlier, the calibration of the present invention is based on binocular stereo calibration, so for a system with more than two cameras stereo calibration is performed pairwise.
  • The pair indices are set to x, y, 1 ≤ x, y ≤ K, with x the "left eye" and y the "right eye" of the binocular sub-vision system; a calibration template is placed in the common field of view and the acquired image information is recorded (depending on the algorithm used later, it may be necessary to obtain several groups of recorded images by changing the pose of the calibration template).
  • This yields {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K, where R I(x,y) and T I(x,y) are respectively the rotation matrix and translation matrix between the two camera elements.
  • In the motion-axis calibration, the calibration template is fixed in the field of view of the camera element whose motion axis is to be calibrated; R BCa and T BCa are the rotation-translation matrices between the rotation axis of the a-th degree of freedom and the camera element, N is the number of degrees of freedom of the vision system, R Bai is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position, R aCi and T aCi are the rotation and translation transformation matrices of the coordinate system of the i-th picture taken during the movement of the a-th motion axis relative to the calibration template position, and P a is the number of valid pictures taken while the a-th motion axis of the system was rotated during calibration.
  • The calibration result is then computed: the above steps give the reference information {θ Ia}, the stereo calibration results {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K, and the rotation-translation matrices {R BCa, T BCa} between the rotation axis of each degree of freedom of motion and the camera element. For the pair of cameras of the whole movable vision system whose stereo calibration result is required in practical applications (pair indices x, y, 1 ≤ x, y ≤ K, x being the "left eye" and y the "right eye" of the binocular sub-vision system), the position information {θ a} of each degree of freedom of motion is combined with the reference position information to obtain the rotation angles {θ Ia − θ a} of each degree of freedom, which are converted into matrix form {R a} and substituted into the relationship model between the rotation-axis coordinate system of each degree of freedom and the camera element coordinate system.
  • The obtained R′ and T′ are the rotation matrix and translation matrix in the extrinsic parameter result of the two camera components after the movable binocular vision subsystem has moved; R BCa and T BCa are the rotation matrix and translation matrix of the a-th motion axis relative to the camera element coordinate system; N k is the number of degrees of freedom of the k-th camera component; x and y are the pair indices, x being the "left eye" index and y the "right eye" index in the binocular sub-vision system; R a is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position; and R I(x,y), T I(x,y) are respectively the relative rotation matrix and translation matrix of the "left eye" and "right eye" at the reference position.
  • The beneficial effects of the stereo calibration system and method of the present invention include: after a camera element of the movable vision system has moved, the intrinsic and extrinsic parameters of each camera element can still be obtained by computation, and this information is used in stereo vision and other related algorithms.
  • The method of the present invention has good real-time computational performance: after one stereo calibration, subsequent computations can be performed from the position information of each degree of freedom of motion, and the intrinsic and extrinsic parameters of each camera component of the movable vision system are obtained in real time; the method also eliminates the unavoidable deviations from the theoretical design caused by machining and assembly.
  • FIG. 1 is a schematic block diagram of a stereo calibration system of a movable multi-eye vision system of the present invention.
  • 2A is a stereo calibration flowchart of a movable multi-eye vision system according to the present invention
  • FIG. 2B is a real-time external parameter calculation flowchart of the movable multi-eye vision system of the present invention.
  • FIG. 3 is a perspective view of the mechanical structure of a six-degree-of-freedom movable binocular vision system in the present invention.
  • FIG. 4 is a coordinate system setting of the six-degree-of-freedom movable binocular vision system mentioned in FIG. 3.
  • FIG. 5 is a checkerboard calibration template diagram used by the calibration algorithm used in the present invention.
  • FIG. 6 is a schematic diagram of a rotation axis coordinate system and a camera coordinate system for each degree of freedom in the present invention.
  • the movable multi-eye vision system mentioned in the present invention includes at least two sets of camera components, at least one computing component, and at least one control component.
  • Each set of camera components can be individually equipped with a computing component and a control component, or multiple sets of camera components can share one computing component and one control component, the camera components, computing component and control component being signal-connected.
  • the system includes a control component, a computing component, and several sets of camera components.
  • the control component can control the movement of each degree of freedom of the imaging component, and can also obtain the position information of each degree of freedom of movement.
  • the calculation unit can calculate and process image information and motion information of each degree of freedom of motion.
  • the imaging unit can acquire continuous images and transmit the image information to the computing unit.
  • Each of the at least two sets of imaging components may have any number of degrees of freedom (including zero degrees of freedom, that is, fixed), and at least one of the sets of imaging components is a movable component having at least one degree of freedom.
  • the three-dimensional calibration method of the movable multi-eye vision system of the present invention mainly includes a calibration step and an external parameter result calculation step.
  • the calibration step specifically includes:
  • step S15 Use the information collected in step S14 to calculate and solve the rotation and translation matrix of each motion axis relative to the camera element coordinate system;
  • Steps S12 and S13 and steps S14 and S15 can be exchanged, that is, the steps may also be executed in the order S11, S14, S15, S12, S13, S16.
  • the external parameter result calculation step may include:
  • FIGS. 3 and 4 show a perspective view of the mechanical structure of a movable binocular six-degree-of-freedom vision system. As can be seen from FIGS. 3 and 4, each eye (equivalent to an eyeball) consists of one set of camera components, and the two sets of camera components are arranged side by side like the left and right eyeballs. The left and right eyeballs each have three degrees of freedom of motion: pitch, yaw and roll; in the spatial Cartesian coordinate system, pitch is rotation about the Y axis, yaw is rotation about the Z axis and roll is rotation about the X axis. The three degrees of freedom of the same "eyeball" are rigidly connected, so that the motion of one degree of freedom does not affect the motion of the other degrees of freedom.
  • The pitch motors of the left "eyeball" and the right "eyeball" are fixed at corresponding positions by the casing, for example on the base, and the roll degree of freedom of each "eyeball" is connected to a camera element capable of continuous shooting.
  • Each of the six degrees of freedom of movement is driven by a motor, and each motor is provided with a position recording device to obtain real-time output position information of each motor.
  • the output position information of each motor is represented by an angle value.
  • FIG. 4 shows the coordinate system setting of the movable binocular six-degree-of-freedom vision system of FIG. 3.
  • The present invention can perform stereo calibration for a multi-camera system with any number of degrees of freedom of motion; the degrees of freedom of the cameras may be symmetrical or asymmetrical, and the invention also applies when several cameras in the system are fixed (i.e., zero degrees of freedom) while the others are movable (with more than zero degrees of freedom of motion).
  • the present invention will be described in detail with reference to the six-degree-of-freedom movable binocular vision system shown in FIGS. 3 and 4 as an example.
  • At least one calibration template from which fixed features can be extracted and whose feature spacing is known is required.
  • The single-plane checkerboard camera calibration algorithm proposed by Professor Zhang Zhengyou in 1998 is used, and the planar checkerboard pattern shown in FIG. 5 is printed and pasted on a flat plate as the calibration template.
  • the calibration template is placed in the public field of view of the binocular camera. It is necessary to ensure that the binocular camera can fully see the entire calibration template.
  • Each degree of freedom is independently calibrated, taking the calibration of the roll, pitch and yaw degrees of freedom of the left "eyeball" (corresponding to motors 1, 2 and 3) as an example.
  • The calibration template is fixed in front of the movable binocular vision system so that the left "eyeball" camera element can capture the complete calibration template; motors 1, 2 and 3 are then rotated several times, and after each rotation the left "eyeball" camera element captures a picture M i while the position information {θ 1i, θ 2i, θ 3i} of the three motor axes is recorded and bound to the picture, giving {M i, θ 1i, θ 2i, θ 3i}; the image sequence (M 1, M 2, M 3, ..., M P) obtained from the rotations is processed with the planar checkerboard calibration template and Zhang's calibration algorithm.
  • The calibration pictures are acquired several times with the cameras in the same position, that is, the pose of the planar checkerboard is changed in front of the two cameras and several groups of images are collected.
  • It cannot be guaranteed by machining that the optical centre lies on the rotation axis, nor that the rotation axis is parallel to the camera coordinate system.
  • The output value of the rotation-axis encoder is therefore needed to compute the pose transformation of each camera coordinate system.
  • For a rigid body, the relative relationship between the camera and the rotation axis remains unchanged.
  • A mathematical model is established, as shown in FIG. 6: C{O c-x c y c z c} is the camera coordinate system (hereinafter the C system). Starting from the optical centre O c of the camera coordinate system, a perpendicular is drawn that intersects the roll degree-of-freedom axis l at the point O b and is extended as the ray O b x b. With O b as origin, O b x b as the x axis, the rotation axis as the y axis, and the z axis determined by the right-hand rule, the rotation-axis coordinate system B{O b-x b y b z b} (hereinafter the B system) is established.
  • The result obtained by the calibration represents the transformation matrix of any point P in space from the B system to the C system.
  • After a rotation by the angle θ, a new camera coordinate system C′ is obtained.
  • The relative position T BC between the B system and the C system remains unchanged during the rotation, and points in the same space still satisfy equation (1) in the rotated coordinate systems B′ and C′.
  • The position information of the 6 degrees of freedom of motion is obtained as (θ p1, θ p2, θ p3, θ p4, θ p5, θ p6); the rotation angles of the 6 degrees of freedom are then (θ p1 − θ I1, θ p2 − θ I2, θ p3 − θ I3, θ p4 − θ I4, θ p5 − θ I5, θ p6 − θ I6).
  • The movable binocular vision system and the stereo calibration method mentioned in the present invention are not limited to the embodiments described above; the embodiments include some additional steps intended to improve accuracy or reduce the complexity of the solution, and various modifications and improvements can also be made without departing from the principle of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

A stereo calibration method for a movable vision system. The movable vision system concerned comprises at least two camera components, at least one computing component and at least one control component. A calibration template is placed in front of the camera components, each camera component is rotated in each of its degrees of freedom of motion, one or more groups of images containing the features of the calibration template are acquired, and the position information of the corresponding degrees of freedom of motion at the moment each image is acquired is obtained at the same time. Calibration results relating the camera elements to the degrees of freedom of motion, namely the rotation matrix and translation matrix of the rotation axis of each degree of freedom of motion relative to the camera element, are then obtained by computation. Afterwards, stereo calibration results can be obtained in real time by combining the calibration results obtained above with the position information of each degree of freedom of motion in the vision system. The method achieves stereo calibration of a movable multi-camera vision system while the camera elements are in motion, has good real-time computational performance, eliminates errors caused by machining or assembly, and has broad application prospects.

Description

Stereo calibration method for a movable vision system

Technical Field
The present invention relates to a stereo calibration method for a vision system, and in particular to a stereo calibration method for a multi-camera (two or more cameras) vision system in which at least one camera is independently movable.
Background Art
Existing vision systems are mainly divided, by the number of cameras, into monocular and multi-camera vision systems, and further, according to whether the camera position is movable, into fixed vision systems and movable vision systems. Monocular vision systems are widely used because of their simple structure and low cost, but a single camera cannot obtain depth information of objects in the field of view. Among multi-camera systems, binocular cameras are the most widely used. Multi-camera systems can obtain stereo images, but in order to compute the depth information of objects in the field of view, the intrinsic parameters of each camera and the extrinsic parameters between the cameras must be calibrated in advance.
The relative position between fixed binocular cameras does not change, so a single calibration suffices to compute the relative positional relationship between the two cameras. The related calibration algorithms are very mature and widely applied, for example camera calibration algorithms based on 3D targets and camera calibration algorithms based on 2D planar targets. These extend to fixed multi-camera vision systems: by calibrating the relative positional relationships pairwise, the relative positional relationship among all the cameras can be obtained.
In a movable multi-camera vision system the cameras move during use. Although the intrinsic parameters of each camera do not change with camera motion, the extrinsic parameters do. If the depth information of objects in the field of view is to be computed, the precise extrinsic parameters after the cameras have moved must be known.
In addition, the structure of a movable multi-camera vision system contains mechanically moving parts. Limited by the current level of machining and by the assembly process, it cannot be guaranteed that every mechanical rotation axis moves strictly and precisely as designed, and the relative position between the camera element and the mechanical axis is likewise difficult to guarantee. Stereo vision, however, places high demands on the accuracy of the positional relationship between the cameras; to obtain a good stereo vision result, errors of this kind must be eliminated as far as possible.
At present, calibration algorithms based on fixed binocular systems and the fixed multi-camera systems derived from them are quite mature and widely applied, for example the well-known Zhang calibration method (Zhang Zhengyou's 1998 paper "A Flexible New Technique for Camera Calibration"), but there is no comparably mature and general method for calibrating a movable multi-camera vision system.
Summary of the Invention
In order to overcome the problem in the prior art that, when the cameras of a movable multi-camera vision system move during operation, conventional stereo calibration algorithms cannot obtain the extrinsic parameters of the cameras in real time, the present invention provides a stereo calibration method for a movable vision system. Even if the relative positions of the camera components have changed, and even if there are errors in the relative positions of the axes and camera components caused by machining, assembly and similar factors, the relative positional relationship between the camera components can still be computed in real time from the position-recording components on the axes.
The stereo calibration method for a movable vision system proposed by the present invention computes the calibration parameters of the movable multi-camera system from a series of acquired images and the position information of each degree of freedom of motion; thereafter the extrinsic parameters of the camera components can be computed in real time from these calibration parameters and the motion information of each degree of freedom.
Specifically, in the stereo calibration method for a movable multi-camera vision system provided by the present invention, the movable multi-camera vision system comprises:
at least two camera components, each camera component comprising at least one camera element capable of acquiring continuous images; each camera component having an arbitrary number of degrees of freedom of motion; each degree of freedom of motion being fitted with a position acquisition device capable of acquiring rotation or translation information;
at least one computing component, which can process image information and the motion information of each degree of freedom of motion;
at least one control component, which can control the motion of each degree of freedom of motion.
The stereo calibration method may comprise: recording the position information of every degree of freedom of motion of every camera component at a reference position; performing stereo calibration at the reference position to obtain accurate relative position parameters between the camera elements; placing the calibration template in front of the camera elements and driving each camera element to move in each of its degrees of freedom of motion, recording with the corresponding camera element one or more groups of images containing the complete calibration template while simultaneously recording the position information of the corresponding degrees of freedom of motion at the time each image is acquired; and computing, by means of the computing component, the calibration result relating the camera element to each degree of freedom of motion, the calibration result comprising the rotation matrix and translation matrix of the rotation axis of each degree of freedom of motion relative to the camera element.
The position acquisition devices may be mounted on each degree of freedom of motion to acquire the motion information of the corresponding degree of freedom of motion.
Specifically, the movable vision system referred to in the present invention comprises: several camera components, each having an arbitrary number of degrees of freedom of motion whose position information can be acquired through a position-recording component (the camera elements cannot all have zero degrees of freedom of motion), the camera element of each camera component being able to capture continuous images; a computing component for processing the relevant data and computing the required results; and a control component for controlling the degrees of freedom of motion in the vision system and for obtaining the position information of each degree of freedom of motion from the position-recording components.
The present invention takes the stereo calibration of a binocular movable vision system as its basis. For a movable multi-camera vision system with more than two cameras, the problem can be reduced to pairwise calibration: if the stereo calibration result between two particular camera elements of the multi-camera system is required, it suffices to perform the binocular movable stereo calibration of the present invention for that pair of camera elements. In general a single reference position is recorded for the whole system, fixed binocular calibration is first performed at that reference position for every pair that later requires movable stereo calibration, giving the rotation-translation relationship of the pair from fixed binocular stereo calibration, and the subsequent steps of the present invention then yield the real-time stereo calibration result after the pair of camera elements has moved.
The calibration template is a template from which fixed features can be extracted and whose features have known relative positions, for example artificial 2D or 3D targets of various kinds, or certain fixed natural scenes; it is only required that fixed feature information can be extracted by image processing algorithms and that the relative positional relationship between these features is known. In practice, to improve the accuracy of the calibration result and reduce the difficulty of the calibration process, precisely machined 3D targets or 2D planar targets are usually used as the calibration template.
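By way of a non-limiting illustration (not part of the original disclosure), feature extraction from a planar checkerboard template and intrinsic calibration in the spirit of Zhang's planar-target method can be sketched with OpenCV as follows. The board dimensions, square size and file names are assumptions chosen for illustration only.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row/column (assumed)
square = 0.025        # square edge length in metres (assumed)

# 3-D coordinates of the template features in the template's own frame
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):          # hypothetical image files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(obj)
    img_pts.append(corners)

# Intrinsic matrix K and distortion coefficients of one camera element
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("re-projection RMS:", rms)
```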
The stereo calibration method for a movable vision system of the present invention may comprise the following steps:
(1) Acquisition of reference information:
A certain position of each degree of freedom of motion is chosen as the reference position, and the position information of every degree of freedom of motion of every camera element is recorded, giving the reference position information {θ Ia}, a = 1, ..., N. Here k denotes the index of a camera component, a denotes the index of a degree of freedom of motion (numbered consecutively in the order in which the degrees of freedom are connected), K is the number of camera components in the whole system, N k is the number of degrees of freedom of motion of the k-th camera component, and the total number of degrees of freedom of motion of the whole system is N = N 1 + N 2 + ... + N K. For the six-degree-of-freedom binocular system of the embodiment below, for example, K = 2, N 1 = N 2 = 3 and N = 6.
(2) Stereo calibration:
The calibration is based on binocular stereo calibration, so for a system with more than two cameras stereo calibration is carried out pairwise. It comprises: placing a calibration template in the common field of view of the pair of camera elements that requires stereo calibration, recording the acquired images, and using a fixed binocular camera calibration algorithm to obtain one set of binocular stereo calibration results (R I(x,y), T I(x,y)), where R I(x,y) and T I(x,y) are respectively the rotation matrix and translation matrix between the two camera elements. If several pairs of camera elements require stereo calibration, the above stereo calibration step is repeated, yielding several sets of binocular stereo calibration results {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K, where K is the number of camera components in the whole system.
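As an illustrative sketch only (an editorial example, not the patent's prescribed implementation), the fixed binocular calibration at the reference position can be performed with OpenCV's stereoCalibrate; the matched feature lists and per-camera intrinsics are assumed to come from the preceding steps.

```python
import cv2

def reference_stereo_calibration(obj_pts, left_pts, right_pts,
                                 K_l, dist_l, K_r, dist_r, image_size):
    """Estimate (R_I, T_I) between the two camera elements at the reference position."""
    flags = cv2.CALIB_FIX_INTRINSIC          # keep per-camera intrinsics fixed
    rms, _, _, _, _, R_I, T_I, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts,
        K_l, dist_l, K_r, dist_r, image_size, flags=flags)
    # R_I, T_I: rotation and translation of the second ("right eye") camera
    # relative to the first ("left eye") camera at the reference position
    return R_I, T_I
```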
(3) Motion axis calibration:
The calibration template is fixed in the field of view of the camera element whose motion axis is to be calibrated. The a-th motion axis is rotated several times, the image acquired at each rotation and the position information of every motion axis are recorded, and the image sequence obtained from the rotations is processed with a calibration algorithm to obtain the rotation and translation transformation matrices {R aCi, T aCi}, i = 1...P a, of each captured picture relative to the calibration template position. The above steps are then repeated for the next motion axis until all motion axes have been processed. The rotation angles of the motion axis relative to the reference position in the calibration data, {θ ai1 − θ I1, ..., θ aiN − θ IN}, i = 1...P a, are converted into rotation matrices {R Bai}, i = 1...P a (formula image PCTCN2019096546-appb-000003), and the results are substituted into the relationship model between the rotation-axis coordinate system of each degree of freedom of motion and the camera element coordinate system (formula images PCTCN2019096546-appb-000004 and -000005).
Here R BCa and T BCa are respectively the rotation matrix and translation matrix of the rotation axis of the a-th degree of freedom of motion relative to the camera element coordinate system, N k is the number of degrees of freedom of motion of the k-th camera component, R Bai is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position, R aCi and T aCi are the rotation and translation transformation matrices of the image coordinate system of the i-th picture taken while the a-th motion axis moves, relative to the calibration template position, and P a is the total number of valid pictures obtained by the system while the a-th motion axis was rotated during calibration. Several groups of equations are finally obtained, and solving this group of equations for its optimal solution yields the calibration result of each motion axis (formula image PCTCN2019096546-appb-000006).
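An illustrative sketch (editorial, not from the patent) of how the recorded axis positions can be turned into the rotation matrices R Bai: the angle is taken relative to the reference position and, following the coordinate-system construction described in the embodiment, interpreted here as a rotation about one coordinate axis of the axis frame B. The choice of that axis is an assumption for illustration.

```python
import numpy as np

def axis_rotation(theta_ai, theta_Ia):
    """Rotation matrix for axis a given encoder reading theta_ai and reference theta_Ia (radians)."""
    t = theta_ai - theta_Ia
    c, s = np.cos(t), np.sin(t)
    # planar rotation about the z_b axis of the axis frame B (assumed convention)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```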
(4) Computation of the calibration result:
The preceding steps yield the reference information {θ Ia}, a = 1, ..., N, the stereo calibration results {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K, and the rotation-translation matrices {R BCa, T BCa} between the rotation axis of each degree of freedom of motion and the camera element. In practical use, for the pair of cameras of the whole movable vision system whose stereo calibration result is required (the pair having indices x, y, 1 ≤ x, y ≤ K, x being taken as the "left eye" and y as the "right eye" of the binocular sub-vision system), the position information {θ a} of each degree of freedom of motion is acquired and combined with the reference position information to obtain the rotation angle of each degree of freedom, {θ Ia − θ a}, which is converted into matrix form {R a} and substituted into the relationship model between the rotation-axis coordinate system of each degree of freedom of motion and the camera element coordinate system (a model original to the present invention; formula images PCTCN2019096546-appb-000012 and -000013).
Here R′ and T′ are respectively the rotation matrix and translation matrix in the extrinsic parameter result of the two camera components after the movable binocular vision subsystem has moved; R BCa and T BCa are respectively the rotation matrix and translation matrix of the a-th motion axis relative to the camera element coordinate system; N k is the number of degrees of freedom of motion of the k-th camera component; x and y are the indices of the pair, x being the index of the "left eye" and y the index of the "right eye" in the binocular sub-vision system; R a is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position; and R I(x,y), T I(x,y) are respectively the relative rotation matrix and translation matrix of the "left eye" and "right eye" at the reference position.
The order of steps (2) and (3) may be exchanged.
In step (1) of the stereo calibration method of the present invention, the initial information is acquired: a certain position of each degree of freedom of motion is chosen as the reference position; the reference position must be such that the camera elements requiring stereo calibration share a certain common field of view. The position information of each degree of freedom of motion of each camera component is recorded, giving the reference position information {θ Ia}, a = 1, ..., N, where k denotes the index of a camera component, K is the number of camera components of the whole system, N k is the number of degrees of freedom of motion of the k-th camera component, and the total number of degrees of freedom of motion of the whole system is N = N 1 + ... + N K.
In step (2), stereo calibration: as mentioned above, the calibration of the present invention is based on binocular stereo calibration, so for a system with more than two cameras stereo calibration is performed pairwise. A calibration template is placed in the common field of view of the pair of camera elements requiring stereo calibration (the pair indices being x, y, 1 ≤ x, y ≤ K, x being the "left eye" and y the "right eye" of the binocular sub-vision system), and the acquired images are recorded (depending on the specific algorithm used later, it may be necessary to obtain several groups of recorded images by changing the pose of the calibration template). A fixed binocular camera calibration algorithm is then used (in the present invention the single-plane checkerboard camera calibration method proposed by Professor Zhang Zhengyou in 1998 is taken as an example; other algorithms of this kind may be used, some of which impose requirements on the number of images, in which case the calibration template pose is varied as required to obtain several groups of image data) to obtain the binocular stereo calibration result (R I(x,y), T I(x,y)), where R I(x,y) and T I(x,y) are respectively the rotation matrix and translation matrix between the two camera elements. If several pairs of camera elements require stereo calibration, the above stereo calibration step is repeated to obtain several sets of binocular stereo calibration results {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K.
In step (3), motion axis calibration: the calibration template is fixed in the field of view of the camera element whose motion axis is to be calibrated. The a-th motion axis is rotated several times; at each rotation the camera element connected to that axis captures an image, giving picture M ai, and the position information of the motion axes at that moment is recorded, giving {M ai, θ ai1, ..., θ aiN}. The image sequence (M a1, M a2, M a3, ..., M aP) obtained from the rotations is used, P a being the total number of valid pictures obtained during calibration of the a-th motion axis. A calibration algorithm (here the Zhang Zhengyou calibration method is again taken as an example) is used to compute the rotation and translation transformation matrices {R aCi, T aCi}, i = 1...P a, of each captured picture relative to the calibration template position, which combined with the motion-axis position information recorded when each picture was taken gives {R aCi, T aCi, θ ai}, i = 1...P a. The above steps are then repeated for the next motion axis until all motion axes a = 1, ..., N (formula image PCTCN2019096546-appb-000016) have completed the above steps. The rotation angles of the motion axis relative to the reference position in the calibration data, {θ ai1 − θ I1, ..., θ aiN − θ IN}, i = 1...P a, are converted into rotation matrices {R Bai}, i = 1...P a (formula image PCTCN2019096546-appb-000017), and the results are substituted into the relationship model between the rotation-axis coordinate system of each degree of freedom of motion and the camera element coordinate system (formula images PCTCN2019096546-appb-000018 and -000019).
Here R BCa and T BCa are respectively the rotation-translation matrices between the rotation axis of the a-th degree of freedom of motion and the camera element, N is the number of degrees of freedom of motion of the vision system, R Bai is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position, R aCi and T aCi are the rotation and translation transformation matrices of the image coordinate system of the i-th picture taken while the a-th motion axis moves, relative to the calibration template position, and P a is the total number of valid pictures obtained by the system while the a-th motion axis was rotated during calibration. Several groups of equations are finally obtained; solving this group of equations for its optimal solution yields the calibration result of each motion axis (formula image PCTCN2019096546-appb-000020).
In step (4), computation of the calibration result: the preceding steps yield the reference information {θ Ia}, a = 1, ..., N, the stereo calibration results {R I(x,y), T I(x,y)}, 1 ≤ x, y ≤ K, and the rotation-translation matrices {R BCa, T BCa} between the rotation axis of each degree of freedom of motion and the camera element. In practical use, for the pair of cameras of the whole movable vision system whose stereo calibration result is required (pair indices x, y, 1 ≤ x, y ≤ K, x being the "left eye" and y the "right eye" of the binocular sub-vision system), the position information {θ a} of each degree of freedom of motion is acquired and combined with the reference position information to obtain the rotation angle of each degree of freedom, {θ Ia − θ a}, which is converted into matrix form {R a} and substituted into the relationship model between the rotation-axis coordinate system of each degree of freedom of motion and the camera element coordinate system (formula images PCTCN2019096546-appb-000026 and -000027).
Here the resulting R′ and T′ are respectively the rotation matrix and translation matrix in the extrinsic parameter result of the two camera components after the movable binocular vision subsystem has moved; R BCa and T BCa are respectively the rotation matrix and translation matrix of the a-th motion axis relative to the camera element coordinate system; N k is the number of degrees of freedom of motion of the k-th camera component; x and y are the indices of the pair, x being the index of the "left eye" and y that of the "right eye" in the binocular sub-vision system; R a is the rotation matrix converted from the rotation angle of the a-th motion axis relative to the reference position; and R I(x,y), T I(x,y) are respectively the relative rotation matrix and translation matrix of the "left eye" and "right eye" at the reference position.
In the calibration method of the present invention, the order of steps (2) and (3) above may be interchanged.
The beneficial effects of the stereo calibration system and method of the present invention include: after a camera element of the movable vision system has moved, the intrinsic and extrinsic parameters of each camera element can still be obtained by computation, and this information can be used in stereo vision and other related algorithms; the method of the present invention has good real-time computational performance, that is, after one stereo calibration, subsequent computations can be carried out from the position information of each degree of freedom of motion, so the intrinsic and extrinsic parameters of each camera component of the movable vision system are obtained in real time; and the present invention effectively eliminates the unavoidable deviations from the theoretical design introduced by machining and assembly.
Description of the Drawings
FIG. 1 is a schematic block diagram of the stereo calibration system of the movable multi-camera vision system of the present invention.
FIG. 2A is a flowchart of the stereo calibration of the movable multi-camera vision system of the present invention;
FIG. 2B is a flowchart of the real-time extrinsic parameter computation of the movable multi-camera vision system of the present invention.
FIG. 3 is a perspective view of the mechanical structure of a six-degree-of-freedom movable binocular vision system of the present invention.
FIG. 4 shows the coordinate system settings of the six-degree-of-freedom movable binocular vision system of FIG. 3.
FIG. 5 shows the checkerboard calibration template used by the calibration algorithm employed in the present invention.
FIG. 6 is a schematic diagram of the rotation-axis coordinate system and the camera coordinate system of one degree of freedom of motion in the present invention.
Detailed Description
The present invention is further described in detail below in conjunction with the following specific embodiments and the accompanying drawings. For brevity, some contents known in the art are omitted when describing the procedures, conditions, experimental methods and the like of the embodiments of the present invention; the present invention is not particularly limited in respect of such contents.
The stereo calibration method of the movable multi-camera vision system of the present invention is further described in detail below with reference to FIGS. 1-6.
The movable multi-camera vision system referred to in the present invention comprises at least two camera components, at least one computing component and at least one control component. Each camera component may be equipped with its own computing component and control component, or several camera components may share one computing component and one control component; the camera components, the computing component and the control component are connected by signals. As shown in FIG. 1, the system comprises one control component, one computing component and several camera components. The control component can control the motion of each degree of freedom of the camera components and can also obtain the position information of each degree of freedom of motion. The computing component can process image information and the motion information of each degree of freedom of motion. The camera components can acquire continuous images and transmit the image information to the computing component. Each of the at least two camera components may have an arbitrary number of degrees of freedom (including zero degrees of freedom, i.e., fixed), and at least one of the camera components is a movable component with at least one degree of freedom.
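A minimal data-structure sketch (an editorial illustration, not part of the patent) of how the system architecture of FIG. 1 could be represented in software: each camera component carries its degrees of freedom of motion, and each degree of freedom exposes the reading of its position-recording device.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DegreeOfFreedom:
    name: str                    # e.g. "pitch", "yaw", "roll"
    encoder_angle: float = 0.0   # current reading of the position-recording device

@dataclass
class CameraComponent:
    camera_id: int
    dofs: List[DegreeOfFreedom] = field(default_factory=list)  # may be empty (fixed component)

@dataclass
class MovableVisionSystem:
    components: List[CameraComponent]

    def total_dofs(self) -> int:
        # N is the sum of N_k over all K camera components
        return sum(len(c.dofs) for c in self.components)
```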
The stereo calibration method of the movable multi-camera vision system of the present invention, as shown in FIG. 2A, mainly comprises a calibration step and an extrinsic-parameter computation step. The calibration step specifically comprises:
S11: acquiring the initial information of the vision system;
S12: for each pair of cameras whose binocular stereo calibration result is required, acquiring several groups of calibration template images at the reference position;
S13: computing the rotation and translation matrices between the two cameras at the reference position using a conventional fixed binocular calibration algorithm;
S14: rotating each motion axis in turn, acquiring the corresponding calibration template images and recording the position information of each motion axis;
S15: using the information collected in step S14 to compute the rotation-translation matrix of each motion axis relative to the camera element coordinate system;
S16: obtaining the calibration result of the movable multi-camera vision system; the aim of the calibration is achieved.
Steps S12, S13 and steps S14, S15 may be interchanged, that is, the steps may also be executed in the order S11, S14, S15, S12, S13, S16.
The extrinsic-parameter computation step may comprise:
S21: obtaining the calibration result of the movable vision system;
S22: obtaining the position information of each degree of freedom of motion of the pair of camera components (the binocular pair) whose stereo calibration result is currently required;
S23: substituting the relevant information into the extrinsic-parameter solution model of the movable vision system to compute the rotation-translation matrix between the two camera components.
FIGS. 3 and 4 show a perspective view of the mechanical structure of a movable binocular six-degree-of-freedom vision system. As can be seen from FIGS. 3 and 4, each camera (equivalent to an eyeball) consists of one camera component, and the two camera components are arranged side by side like the left and right eyeballs. The left and right eyeballs each have three degrees of freedom of motion, pitch, yaw and roll; in the spatial Cartesian coordinate system, pitch is rotation about the Y axis, yaw is rotation about the Z axis and roll is rotation about the X axis. The three degrees of freedom of the same "eyeball" are rigidly connected, so that the motion of one degree of freedom does not affect the motion of the other degrees of freedom. Optionally, the pitch motors of the left "eyeball" and the right "eyeball" are fixed in corresponding positions by the housing, for example on a base, and the roll degree of freedom of each of the two "eyeballs" is connected to a camera element capable of continuous image capture. Each of the six degrees of freedom of motion is driven by a motor, and every motor carries a position-recording device that provides the real-time output position of that motor; in this embodiment the output position of each motor is expressed as an angle value. FIG. 4 shows the coordinate system settings of the movable binocular six-degree-of-freedom vision system of FIG. 3. The present invention can perform stereo calibration for a multi-camera system with any number of degrees of freedom of motion; the degrees of freedom of the cameras may be symmetrical or asymmetrical, and the present invention also applies when some cameras of the system are fixed (i.e., have zero degrees of freedom) while the others are movable (have more than zero degrees of freedom of motion).
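As an editorial illustration only, the mapping of the three named degrees of freedom of one "eyeball" to elementary rotations of the spatial Cartesian frame described above (pitch about Y, yaw about Z, roll about X) can be sketched with SciPy; the order in which the three rotations are composed is an assumption, since the patent treats the axes as rigidly connected and calibrates each axis separately.

```python
from scipy.spatial.transform import Rotation

def eyeball_rotation(pitch, yaw, roll, degrees=True):
    """Combined rotation of one eyeball for given pitch/yaw/roll encoder angles."""
    R_pitch = Rotation.from_euler("y", pitch, degrees=degrees)
    R_yaw   = Rotation.from_euler("z", yaw,   degrees=degrees)
    R_roll  = Rotation.from_euler("x", roll,  degrees=degrees)
    # composition order is an illustrative assumption
    return (R_pitch * R_yaw * R_roll).as_matrix()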
Embodiment
The present invention is described in detail below taking the six-degree-of-freedom movable binocular vision system shown in FIGS. 3 and 4 as an example.
1) Initial calibration data
In the present invention, at least one calibration template from which fixed features can be extracted and whose feature spacing is known is required. This embodiment uses the single-plane checkerboard camera calibration algorithm proposed by Professor Zhang Zhengyou in 1998; the planar checkerboard pattern shown in FIG. 5 is printed and pasted onto a flat plate to serve as the calibration template. The calibration template is placed in the common field of view of the binocular cameras, and it must be ensured that both cameras can see the entire calibration template completely.
In this embodiment, to reduce the difficulty of solving, every degree of freedom is calibrated independently; the calibration of the roll, pitch and yaw degrees of freedom of the left "eyeball" (i.e., motors 1, 2 and 3) is taken as an example. The calibration template is fixed in front of the movable binocular vision system so that the left "eyeball" camera element can capture the complete calibration template; motors 1, 2 and 3 are then rotated several times, and after each rotation the left "eyeball" camera element captures a picture M i while the position information {θ 1i, θ 2i, θ 3i} of the motion axes of the three motors is recorded; binding the two gives {M i, θ 1i, θ 2i, θ 3i}. The image sequence (M 1, M 2, M 3, ..., M P) obtained from the rotations is processed with the planar checkerboard calibration template of FIG. 5 and the Zhang Zhengyou calibration algorithm to obtain the rotation and translation transformation matrices {R Ci, T Ci} of each captured picture relative to the calibration template position, which combined with the motion-axis positions recorded when each picture was taken gives {R Ci, T Ci, θ 1i, θ 2i, θ 3i}, i = 1...P, where P is the total number of valid pictures obtained for this motion axis during calibration.
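An illustrative sketch (editorial, not the patent's implementation) of this data-capture loop: at every rotation the picture M i is bound to the three motor readings, and the picture's pose relative to the calibration template is recovered with solvePnP. The grab_image() and read_motor_angles() interfaces, as well as the parameters obj, pattern, K and dist (from the earlier intrinsic-calibration sketch), are hypothetical.

```python
import cv2

def capture_axis_calibration_data(grab_image, read_motor_angles,
                                  obj, pattern, K, dist, steps=15):
    samples = []          # [(R_Ci, T_Ci, (theta_1i, theta_2i, theta_3i)), ...]
    for _ in range(steps):
        gray = grab_image()                       # picture M_i
        angles = read_motor_angles()              # (theta_1i, theta_2i, theta_3i)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue                              # only valid pictures are kept (P in total)
        ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
        if ok:
            R_Ci, _ = cv2.Rodrigues(rvec)         # template pose expressed in the camera frame
            samples.append((R_Ci, tvec, angles))
    return samples
```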
A position in which the two cameras have a substantial common field of view is chosen as the reference stereo calibration position, and the positions of the six motors of the binocular system are recorded as (θ I1, θ I2, θ I3, θ I4, θ I5, θ I6), where subscripts 1 to 6 correspond in order to the roll, yaw and pitch degrees of freedom of the left eye and the roll, yaw and pitch degrees of freedom of the right eye (the same subscripts are used for the images acquired later).
In practical applications, preferably, to improve the accuracy of the reference stereo calibration result, calibration pictures are acquired several times with the cameras in the same position, that is, the pose of the planar checkerboard is varied in front of the two cameras and several groups of images are collected; preferably, about 20 groups of images are sufficient.
2) Stereo calibration of the movable binocular vision system
The groups of stereo calibration template pictures acquired in the preceding step at the reference stereo calibration position (θ I1, θ I2, θ I3, θ I4, θ I5, θ I6) are used, and the prior-art Zhang stereo calibration algorithm is then applied to compute the relative positional relationship (rotation matrix R I and translation vector T I) between the two cameras at the reference position (for the specific computation steps see Zhang Zhengyou's 1998 paper "A Flexible New Technique for Camera Calibration").
3) Motion axis calibration
For the movable dual cameras it cannot be guaranteed during machining that the optical centre lies on the rotation axis, nor that the rotation axis is parallel to the camera coordinate system. The pose transformation of each camera coordinate system must therefore be computed from the output values of the rotation-axis encoders, which requires knowing the positional relationship between the rotation axis and the camera coordinate system. Moreover, for a rigid body the relative relationship between the camera and the rotation axis remains unchanged. To solve for the positional relationship between each rotation axis and the camera coordinate system, a mathematical model is established. As shown in FIG. 6, C{O c-x c y c z c} is the camera coordinate system (hereinafter the C system). Starting from the optical centre O c of the camera coordinate system, a perpendicular is drawn that intersects the roll-degree-of-freedom rotation axis l at the point O b, and the perpendicular is extended as the ray O b x b. With O b as origin, O b x b as the x axis, the rotation axis as the y axis, and the z axis determined by the right-hand rule, the rotation-axis coordinate system B{O b-x b y b z b} (hereinafter the B system) is established.
Let O cO b = d (d is a fixed value determined by the machining process); then the coordinates of O c in the B system are t = (−d, 0, 0)^T. Let the rotation matrix of the camera coordinate system C relative to the rotation-axis coordinate system B be R BC. Then for any point P in space, its coordinates P C in the C system and P B in the B system satisfy the transformation P B = R BC P C + t, which in homogeneous coordinates is written as

  [P B; 1] = T BC [P C; 1],  with T BC = [R BC  t; 0  1]      (1)

T BC is exactly the result to be obtained by the calibration; it represents the transformation matrix of any point P in space from the B system to the C system.
After a rotation of the angle θ about the rotation axis, the rotation-axis coordinate system B and the camera coordinate system C both become new coordinate systems B′ and C′. When rotating about the rotation axis, the B system is equivalent to being rotated by the angle θ about the z b axis into B′; for the same point P, its coordinates P B in the B system and P B′ in the B′ system then satisfy the relation of formula (2) (formula image PCTCN2019096546-appb-000030), in which the rotation matrix (formula image PCTCN2019096546-appb-000031) is built from, and adjusted according to, the rotation angle θ.
Likewise, for the camera coordinate system, a new camera coordinate system C′ is obtained after the rotation by the angle θ. During calibration, the transformation of the camera coordinate system can be computed from the fixed checkerboard. Let the coordinates of a spatial point P in the world coordinate system of the checkerboard be x w; the checkerboard extrinsics computed in the C system and in the C′ system are T CW and T C′W respectively, with P C = T CW x w and P C′ = T C′W x w, so that

  T C′C = T C′W T CW^(−1)      (3)
By the property of a rigid body, the relative positional relationship T BC between the B system and the C system remains unchanged during the rotation. A point in the same space therefore still satisfies formula (1) in the rotated coordinate systems B′ and C′:

  [P B′; 1] = T BC [P C′; 1]      (4)
From formulas (2), (3) and (4) one obtains formula (5) (formula image PCTCN2019096546-appb-000034). In this formula, the quantity shown in formula image PCTCN2019096546-appb-000035 is what the rotation-axis calibration process must solve for; the matrix of formula image PCTCN2019096546-appb-000036 is obtained at each rotation from the output of the position sensor; and the matrix of formula image PCTCN2019096546-appb-000037 is the matrix T C′C computed by the camera during each rotation. T BC is computed from the rotations, thereby calibrating the relationship between the rotation axis and the camera coordinate system.
Take a group of data {R Ci, T Ci, θ 1i, θ 2i, θ 3i}, i = 1...P, and compute the motor rotation angles of this group relative to the reference position, {θ 1i − θ I1, θ 2i − θ I2, θ 3i − θ I3}, i = 1...P, from which the corresponding rotation matrices are computed (formula (6), image PCTCN2019096546-appb-000038). Substituting these into formula (5) gives a group of equations (formula (7), image PCTCN2019096546-appb-000039). Substituting all the data into formula (7) gives P groups of equations, and solving this set of equations gives the optimal solution {R BCa, T BCa}, a = 1...3.
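As an editorial sketch of one numerical way to obtain such an optimal solution (not a transcription of the patent's formula images): each sample pairs the axis-frame motion built from the encoder angle with the camera-frame motion T C′C obtained from two template poses via formula (3), and the unknown T BC is found by nonlinear least squares. The residual written below assumes the AX = XB structure that follows from the derivation of formulas (2)-(4); this is conceptually the same structure solved by standard hand-eye calibration routines.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_hom(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, np.asarray(t).ravel()
    return T

def solve_axis_calibration(T_BpB_list, T_CpC_list):
    """Solve T_BC from paired axis-frame motions T_B'B and camera-frame motions T_C'C."""
    def residual(x):
        R_BC = Rotation.from_rotvec(x[:3]).as_matrix()
        T_BC = to_hom(R_BC, x[3:])
        res = []
        for T_BpB, T_CpC in zip(T_BpB_list, T_CpC_list):
            # assumed relation: T_B'B @ T_BC = T_BC @ T_C'C
            res.append((T_BpB @ T_BC - T_BC @ T_CpC)[:3, :].ravel())
        return np.concatenate(res)
    sol = least_squares(residual, np.zeros(6))
    R_BC = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R_BC, sol.x[3:]          # R_BCa, T_BCa for this motion axis
```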
Repeating the above steps to calibrate motors 4 to 6 of the right "eyeball" yields the calibration results {R BCa, T BCa}, a = 1...6, of the six motor motion axes.
4) Computation of the real-time calibration result:
The complete calibration result consists of: the reference position information (θ I1, θ I2, θ I3, θ I4, θ I5, θ I6), the stereo calibration result {R I, T I} and the motion-axis calibration results {R BCa, T BCa}, a = 1...6. After the camera components of the movable binocular vision system have moved, the position information (θ p1, θ p2, θ p3, θ p4, θ p5, θ p6) of the six degrees of freedom of motion is obtained, so the rotation angles of the six degrees of freedom are (θ p1 − θ I1, θ p2 − θ I2, θ p3 − θ I3, θ p4 − θ I4, θ p5 − θ I5, θ p6 − θ I6), which are converted into rotation matrices (R p1, R p2, R p3, R p4, R p5, R p6). Finally, substituting these into the following formulas (formula images PCTCN2019096546-appb-000040 and -000041) gives the extrinsic parameter result of the six-degree-of-freedom movable binocular vision system after motion (rotation matrix R′ and translation matrix T′), that is, the real-time calibration result.
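A structural sketch only (an editorial illustration): the single-axis camera motion used below, inv(T_BC) @ T_axis(θ) @ T_BC, is an assumption consistent with the derivation of formulas (2)-(4), and the way the six axes and the reference extrinsics (R I, T I) are chained is likewise an illustrative choice, not a transcription of the patent's formula images PCTCN2019096546-appb-000040/-000041.

```python
import numpy as np

def camera_motion_for_axis(T_BC, theta, theta_ref):
    """Camera-frame motion caused by one axis turning from theta_ref to theta (radians)."""
    c, s = np.cos(theta - theta_ref), np.sin(theta - theta_ref)
    T_axis = np.array([[c, -s, 0, 0],
                       [s,  c, 0, 0],
                       [0,  0, 1, 0],
                       [0,  0, 0, 1.0]])            # rotation about z_b (assumed convention)
    return np.linalg.inv(T_BC) @ T_axis @ T_BC

def realtime_extrinsics(T_I, left_axes, right_axes):
    """left_axes / right_axes: lists of (T_BC, theta, theta_ref), one entry per degree of freedom."""
    A_left = np.eye(4)
    for T_BC, th, th_ref in left_axes:
        A_left = A_left @ camera_motion_for_axis(T_BC, th, th_ref)
    A_right = np.eye(4)
    for T_BC, th, th_ref in right_axes:
        A_right = A_right @ camera_motion_for_axis(T_BC, th, th_ref)
    # assumed composition: undo the left-eye motion, apply the reference
    # extrinsics, then apply the right-eye motion
    T_prime = np.linalg.inv(A_left) @ T_I @ A_right
    return T_prime[:3, :3], T_prime[:3, 3]          # R', T'
```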
The movable binocular vision system and the stereo calibration method referred to in the present invention are not limited to the embodiments described above; some additional steps intended, in practical applications, to improve accuracy or reduce the complexity of the solution have been added to the embodiments, and various modifications and improvements may also be made without departing from the principle of the present invention.
The protected content of the present invention is not limited to the above embodiments. Variations and advantages that can be conceived by those skilled in the art without departing from the spirit and scope of the inventive concept are included in the present invention, and the scope of protection is defined by the appended claims.

Claims (8)

  1. A stereo calibration method for a movable multi-camera vision system, characterised in that the movable multi-camera vision system comprises:
    at least two camera components, each camera component comprising at least one camera element capable of acquiring continuous images; each camera component having an arbitrary number of degrees of freedom of motion; each degree of freedom of motion being fitted with a position acquisition device capable of acquiring rotation or translation information;
    at least one computing component, which can process image information and the motion information of each degree of freedom of motion;
    at least one control component, which can control the motion of each degree of freedom of motion;
    the stereo calibration method of the movable multi-camera vision system comprising: placing a calibration template in front of the camera elements, driving each camera element to move in each of its degrees of freedom of motion, recording with the corresponding camera element images containing the complete calibration template while recording the position information of the corresponding degrees of freedom of motion at the time each image is acquired, and computing, by means of the computing component, the calibration result relating the camera element to each degree of freedom of motion, the calibration result comprising the rotation matrix and translation matrix of the rotation axis of each degree of freedom of motion relative to the camera element.
  2. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that the position acquisition device is mounted on a degree of freedom of motion and is used to acquire the motion information of the corresponding degree of freedom of motion.
  3. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that at least one camera component comprises one or more degrees of freedom of motion.
  4. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that the calibration template comprises one of a natural static scene and an artificial standard target; a natural static scene used as the calibration template must provide fixed image features and known positional information between the image features, and the artificial standard target comprises at least one of a 2D planar target and a 3D target.
  5. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that the method comprises the following steps:
    recording the position information of every degree of freedom of motion of every camera component at a reference position; performing stereo calibration at the reference position to obtain accurate relative position parameters between the camera elements; and calibrating each degree of freedom of motion separately and computing, by means of the computing component, the rotation-translation matrix of the rotation axis of each degree of freedom of motion relative to the camera element coordinate system.
  6. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that the stereo calibration result comprises the reference position information, the stereo calibration result of the camera elements at the reference position, and the rotation-translation relationship of the rotation axis of each degree of freedom of motion relative to the camera element coordinate system, wherein the stereo calibration result of the camera elements at the reference position comprises the rotation-translation relationship between the camera elements.
  7. The stereo calibration method for a movable multi-camera vision system according to claim 1, characterised in that it further comprises substituting the calibration result, together with the position information of each degree of freedom of motion, into the relationship model between the rotation-axis coordinate system of each degree of freedom of motion and the camera element coordinate system, so as to compute the rotation matrix and translation matrix representing the relative positional relationship between the camera elements.
  8. The stereo calibration method for a movable multi-camera vision system according to claim 7, characterised in that the relationship model is:
    (formula images PCTCN2019096546-appb-100001 and PCTCN2019096546-appb-100002)
    wherein R′ and T′ are respectively the rotation matrix and translation matrix in the extrinsic parameter result, at the current position, of the two camera components of a movable binocular vision subsystem formed by any two camera components of the movable multi-camera vision system; R BCa and T BCa are respectively the rotation matrix and translation matrix of the rotation axis of the a-th degree of freedom of motion relative to the camera element coordinate system; N k is the number of degrees of freedom of motion of the k-th camera component; x and y are the indices of the two camera components, x being the index of the "left eye" and y the index of the "right eye" of the movable binocular sub-vision system; R a is the rotation matrix converted from the rotation angle of the rotation axis of the a-th degree of freedom of motion relative to the reference position; and R I(x,y), T I(x,y) are respectively the relative rotation matrix and translation matrix between the "left eye" and the "right eye" at the reference position.
PCT/CN2019/096546 2018-09-28 2019-07-18 Stereo calibration method for a movable vision system WO2020063059A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/280,747 US11663741B2 (en) 2018-09-28 2019-07-18 Stereo calibration method for movable vision system
JP2021516446A JP7185821B2 (ja) 2018-09-28 2019-07-18 可動視覚システムの立体キャリブレーション方法
DE112019004844.9T DE112019004844T5 (de) 2018-09-28 2019-07-18 Stereokalibrierungsverfahren für ein bewegliches Bildverarbeitungssystem

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811141498.8 2018-09-28
CN201811141498.8A CN109242914B (zh) 2018-09-28 2018-09-28 一种可动视觉***的立体标定方法

Publications (1)

Publication Number Publication Date
WO2020063059A1 true WO2020063059A1 (zh) 2020-04-02

Family

ID=65054001

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096546 WO2020063059A1 (zh) 2018-09-28 2019-07-18 一种可动视觉***的立体标定方法

Country Status (5)

Country Link
US (1) US11663741B2 (zh)
JP (1) JP7185821B2 (zh)
CN (1) CN109242914B (zh)
DE (1) DE112019004844T5 (zh)
WO (1) WO2020063059A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112720469A (zh) * 2020-12-18 2021-04-30 北京工业大学 显微立体视觉用于三轴平移运动***零点校准方法
CN114147728A (zh) * 2022-02-07 2022-03-08 杭州灵西机器人智能科技有限公司 通用的机器人眼在手上标定方法和***

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242914B (zh) * 2018-09-28 2021-01-01 上海爱观视觉科技有限公司 一种可动视觉***的立体标定方法
CN109360243B (zh) * 2018-09-28 2022-08-19 安徽爱观视觉科技有限公司 一种多自由度可动视觉***的标定方法
CN111080717A (zh) * 2019-12-23 2020-04-28 吉林大学 一种摄像头参数自动标定装置及方法
CN111836036A (zh) * 2020-06-15 2020-10-27 南京澳讯人工智能研究院有限公司 一种视频图像获取和处理装置及***
CN114255287B (zh) * 2022-03-01 2022-07-26 杭州灵西机器人智能科技有限公司 一种小景深相机的单目标定方法、***、装置和介质
CN114708335B (zh) * 2022-03-20 2023-03-14 元橡科技(苏州)有限公司 双目立体相机的外参标定***、标定方法、应用、存储介质
CN114795079B (zh) * 2022-05-06 2023-04-14 广州为实光电医疗科技有限公司 一种医用内窥镜双摄模组的匹配校准方法及装置

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165116A1 (en) * 2008-12-30 2010-07-01 Industrial Technology Research Institute Camera with dynamic calibration and method thereof
CN103854291A (zh) * 2014-03-28 2014-06-11 中国科学院自动化研究所 四自由度双目视觉***中的摄像机标定方法
CN105588581A (zh) * 2015-12-16 2016-05-18 南京航空航天大学 一种在轨服务相对导航实验平台及工作方法
CN106097367A (zh) * 2016-06-21 2016-11-09 北京格灵深瞳信息技术有限公司 一种双目立体相机的标定方法及装置
CN106097300A (zh) * 2016-05-27 2016-11-09 西安交通大学 一种基于高精度运动平台的多相机标定方法
CN106887023A (zh) * 2017-02-21 2017-06-23 成都通甲优博科技有限责任公司 用于双目摄像机标定的标定板及其标定方法和标定***
CN106981083A (zh) * 2017-03-22 2017-07-25 大连理工大学 双目立体视觉***摄像机参数的分步标定方法
CN107292927A (zh) * 2017-06-13 2017-10-24 厦门大学 一种基于双目视觉的对称运动平台位姿测量方法
CN109242914A (zh) * 2018-09-28 2019-01-18 上海爱观视觉科技有限公司 一种可动视觉***的立体标定方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915008B2 (en) * 2001-03-08 2005-07-05 Point Grey Research Inc. Method and apparatus for multi-nodal, three-dimensional imaging
EP1958458B1 (en) * 2005-11-30 2016-08-17 Telecom Italia S.p.A. Method for determining scattered disparity fields in stereo vision
GB2477333B (en) 2010-01-29 2014-12-03 Sony Corp A method and apparatus for creating a stereoscopic image
JP5588812B2 (ja) * 2010-09-30 2014-09-10 日立オートモティブシステムズ株式会社 画像処理装置及びそれを用いた撮像装置
US9188973B2 (en) * 2011-07-08 2015-11-17 Restoration Robotics, Inc. Calibration and transformation of a camera system's coordinate system
US20130016186A1 (en) 2011-07-13 2013-01-17 Qualcomm Incorporated Method and apparatus for calibrating an imaging device
FR3026591B1 (fr) 2014-09-25 2016-10-21 Continental Automotive France Procede de calibration extrinseque de cameras d'un systeme de formation d'images stereos embarque
JP6121063B1 (ja) 2014-11-04 2017-04-26 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd カメラ較正方法、デバイス及びシステム
US9369689B1 (en) * 2015-02-24 2016-06-14 HypeVR Lidar stereo fusion live action 3D model video reconstruction for six degrees of freedom 360° volumetric virtual reality video
WO2016154557A1 (en) * 2015-03-26 2016-09-29 Universidade De Coimbra Methods and systems for computer-aided surgery using intra-operative video acquired by a free moving camera
US10176554B2 (en) 2015-10-05 2019-01-08 Google Llc Camera calibration using synthetic images
EP3374967B1 (en) * 2015-11-11 2023-01-04 Zhejiang Dahua Technology Co., Ltd Methods and systems for binocular stereo vision
JP6694281B2 (ja) * 2016-01-26 2020-05-13 株式会社日立製作所 ステレオカメラおよび撮像システム
EP3411755A4 (en) * 2016-02-03 2019-10-09 Sportlogiq Inc. SYSTEMS AND METHODS FOR AUTOMATED CALIBRATION OF PHOTOGRAPHIC APPARATUS
CN107666606B (zh) * 2016-07-29 2019-07-12 东南大学 双目全景图像获取方法及装置
US10853942B1 (en) * 2016-08-29 2020-12-01 Amazon Technologies, Inc. Camera calibration in a mobile environment
JP6396516B2 (ja) * 2017-01-12 2018-09-26 ファナック株式会社 視覚センサのキャリブレーション装置、方法及びプログラム
EP3382646A1 (en) * 2017-03-29 2018-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus for providing calibration data, camera system and method for obtaining calibration data
US11308645B2 (en) * 2017-05-12 2022-04-19 The Board Of Trustees Of The Leland Stanford Junior University Apparatus and method for wide-range optical tracking during medical imaging
US10482626B2 (en) * 2018-01-08 2019-11-19 Mediatek Inc. Around view monitoring systems for vehicle and calibration methods for calibrating image capture devices of an around view monitoring system using the same
US11508091B2 (en) * 2018-06-29 2022-11-22 Komatsu Ltd. Calibration device for imaging device, monitoring device, work machine and calibration method


Also Published As

Publication number Publication date
JP7185821B2 (ja) 2022-12-08
CN109242914B (zh) 2021-01-01
DE112019004844T5 (de) 2021-07-08
CN109242914A (zh) 2019-01-18
US11663741B2 (en) 2023-05-30
US20220044444A1 (en) 2022-02-10
JP2022500956A (ja) 2022-01-04

Similar Documents

Publication Publication Date Title
WO2020063059A1 (zh) 一种可动视觉***的立体标定方法
CN110728715B (zh) 一种智能巡检机器人摄像机角度自适应调整方法
CN106846415B (zh) 一种多路鱼眼相机双目标定装置及方法
CN104154875B (zh) 基于两轴旋转平台的三维数据获取***及获取方法
WO2018076154A1 (zh) 一种基于鱼眼摄像机空间位姿标定的全景视频生成方法
TWI555379B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
CN105118055B (zh) 摄影机定位修正标定方法及***
CN111801198B (zh) 一种手眼标定方法、***及计算机存储介质
JP7185860B2 (ja) 多軸可動視覚システムのキャリブレーション方法
CN109658460A (zh) 一种机械臂末端相机手眼标定方法和***
CN109118545A (zh) 基于旋转轴和双目摄像头的三维成像***标定方法及***
JP2017108387A (ja) パノラマ魚眼カメラの画像較正、スティッチ、および深さ再構成方法、ならびにそのシステム
CN106910243A (zh) 基于转台的自动化数据采集与三维建模的方法及装置
CN106504321A (zh) 使用照片或视频重建三维牙模的方法及使用rgbd图像重建三维牙模的方法
CN111461963B (zh) 一种鱼眼图像拼接方法及装置
CN111780682B (zh) 一种基于伺服电机的3d图像采集控制方法
CN206460516U (zh) 一种多路鱼眼相机双目标定装置
CN113724337B (zh) 一种无需依赖云台角度的相机动态外参标定方法及装置
CN206460515U (zh) 一种基于立体标定靶的多路鱼眼相机标定装置
CN111768449A (zh) 一种双目视觉结合深度学习的物体抓取方法
CN112585956A (zh) 轨迹复演方法、***、可移动平台和存储介质
JP2005338977A (ja) 立体画像処理システム
CN116977449B (zh) 一种基于闪烁棋盘格的复眼事件相机主动标定方法
CN115457142B (zh) 一种mr混合摄影相机的标定方法和***
CN108616744B (zh) 一种仿生双眼视觉校准***及校准方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19865368; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021516446; Country of ref document: JP; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 19865368; Country of ref document: EP; Kind code of ref document: A1)