WO2020063058A1 - 一种多自由度可动视觉***的标定方法 - Google Patents

一种多自由度可动视觉***的标定方法 Download PDF

Info

Publication number
WO2020063058A1
WO2020063058A1 PCT/CN2019/096545 CN2019096545W WO2020063058A1 WO 2020063058 A1 WO2020063058 A1 WO 2020063058A1 CN 2019096545 W CN2019096545 W CN 2019096545W WO 2020063058 A1 WO2020063058 A1 WO 2020063058A1
Authority
WO
WIPO (PCT)
Prior art keywords
freedom
degree
movement
calibration
camera
Prior art date
Application number
PCT/CN2019/096545
Other languages
English (en)
French (fr)
Inventor
王开放
杨冬冬
张晓林
Original Assignee
上海爱观视觉科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海爱观视觉科技有限公司 filed Critical 上海爱观视觉科技有限公司
Priority to US17/279,289 priority Critical patent/US11847797B2/en
Priority to DE112019004853.8T priority patent/DE112019004853T5/de
Priority to JP2021516454A priority patent/JP7185860B2/ja
Publication of WO2020063058A1 publication Critical patent/WO2020063058A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/44Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • the invention relates to a method for calibrating a vision system, in particular to a method for calibrating a multi-degree-of-freedom movable vision system.
  • Multi-degree-of-freedom vision systems are widely used, and their structural features are vision systems with multiple degrees of freedom of movement, such as common handheld stabilization cameras or quadrotor aerial cameras.
  • three degrees of freedom are combined with a camera module.
  • the camera module can Rotation is controlled by three rotational degrees of freedom, or a PTZ surveillance camera visible on the street, with two degrees of freedom of motion and a zoomable camera for tracking or monitoring with a large field of view.
  • Such application scenarios often do not require the precise relative positional relationship between the camera and the base.
  • a vision module installed at the end, it is often necessary to accurately know the base and the vision module due to the need for precise operation. Precise relative position relationship.
  • the present invention proposes a method for calibrating a multi-degree-of-freedom movable vision system.
  • the multi-degree of freedom is realized.
  • the actual relative position of the camera element and the base is obtained during the actual use of the movable vision system.
  • the method for calibrating a multi-degree-of-freedom movable vision system mentioned in the present invention includes: a camera component, and the camera component includes at least one camera element capable of acquiring a continuous image;
  • the camera unit has at least one degree of freedom of movement and is mounted on the base; each of the degree of freedom of movement is provided with a position acquisition device capable of acquiring rotation or translation information; a calculation unit that calculates and processes image information and each movement Degree of freedom motion information; a control component that can control the movement of each degree of freedom of movement;
  • the calibration method includes: placing a calibration template in front of the camera element to drive the camera element to move at each degree of freedom of movement, using The camera element records several images containing the characteristics of the calibration template. At the same time, it records the position information of each degree of freedom of movement when the corresponding image is obtained, and uses the calculation component to calculate the calibration results of the camera element and each degree of freedom of movement.
  • the camera component may be, for example, a motor in series with a motor and a camera element at the end, so that multiple motor degrees of freedom are provided by the motors connected in series.
  • the rotation angle information of course, for a case where a linear motor is used to generate rotation through translation of the motor output, translation information can also be obtained.
  • the calibration method specifically includes the following steps: 1) recording positional information of each degree of freedom at the reference position; 2) placing a calibration template in front of the camera component, and sequentially making several changes in each degree of freedom of movement and collecting them separately The position information of the image and each degree of freedom of movement; 3) The calibration result of each degree of freedom of movement is calculated using a calibration algorithm using the collected image and the position information of each degree of freedom of movement.
  • the calibration results (including the reference position information and the calibration results of each degree of freedom of movement) are obtained.
  • the position information of the current position of the camera element relative to the reference position can be obtained by combining the obtained degrees of freedom of movement of the vision system Rotation translation relationship.
  • the calibration method of the present invention includes the following steps:
  • a represents the serial number of motion degrees of freedom, which are numbered in order according to the connection order of each degree of freedom of motion.
  • the calibration template is fixedly placed in front of the movable system to ensure that the camera can capture the complete calibration template; then the a-th movement axis is rotated several times, each time the movement axis is connected
  • the image capturing part of the camera performs a picture fetching operation to obtain a picture M ai , and at the same time records the position information of the motion axis at this time to obtain ⁇ M ai , ⁇ ai1 , ..., ⁇ aiN ⁇ ; using multiple rotations to obtain an image sequence (M a1 , M a2 , M a3 , ).
  • R BCa , T BCa 1 ... N
  • R BCa , T BCa are the rotations of the a-th motion axis relative to the coordinate system of the camera element, respectively.
  • Matrix and translation matrix N is the number of degrees of freedom of the vision system
  • R Bai is the rotation matrix transformed by the rotation angle of the ath motion axis relative to the reference position
  • R aCi and T aCi are taken when the ath motion axis is in motion.
  • P a is the total number of valid pictures obtained by the system's a-axis rotation during the calibration process.
  • R ′ and T ′ are the rotation matrix and translation matrix results of the camera relative to the reference position after the movable system moves
  • R BCa and T BCa are the rotation matrix and Translation matrix
  • N is the number of degrees of freedom of the vision system
  • R pa is a rotation matrix transformed by the rotation angle of the a-th motion axis relative to the reference position.
  • the camera component is used to obtain continuous images; the camera component has at least one degree of freedom of movement, and is connected and fixed to the base via each degree of freedom of movement; and each of the degrees of freedom of movement is installed to obtain rotation or translation Location information acquisition device.
  • the imaging component has at least one imaging element and at least one lens group.
  • the calibration template is a calibration template capable of extracting fixed features and known relative position information of the features.
  • artificially produced 2D and 3D targets, or some fixed natural scenes require fixed feature information to be extracted through image processing algorithms, and the relative positional relationship between these feature information can be obtained.
  • precision-machined 3D stereo targets or 2D planar targets are often used as calibration templates.
  • the calibration result refers to a rotation matrix and a translation matrix of each rotation axis with respect to the optical center of the imaging element.
  • the calibration result can be used to calculate the relative positional relationship between the camera element and the reference position, that is, the rotation matrix and the translation matrix, in combination with the position information of each degree of freedom of movement.
  • the method for calibrating a multi-degree-of-freedom movable vision system provided by the present invention can be used even when the position of a camera component changes and there are errors caused by the relative positions of each axis and the camera component due to mechanical processing, assembly, etc.
  • a calibration algorithm that calculates the relative position relationship between the camera component and the reference position in real time on the position recording component on each axis.
  • the present invention uses a camera calibration algorithm based on a 3D stereo target or a camera calibration algorithm based on a 2D planar target in the prior art as a basis for calibration.
  • the beneficial effects of the present invention include: solving the problem that the accurate rotation and translation parameter information of the camera element relative to the reference position can still be obtained through calculation even in the case where the camera element moves in the movable system; the invention proposes The method has good real-time calculation, that is, after one calibration, subsequent position information of each degree of freedom of movement can be used to calculate the parameter information of the position change of the camera element in real time; the present invention can significantly and effectively eliminate mechanical processing and assembly There are some force majeure errors in the process and theoretical design.
  • FIG. 1 is a schematic block diagram of a multi-degree-of-freedom movable vision system according to the present invention.
  • 2A is a calibration flowchart of a multi-degree-of-freedom movable vision system according to the present invention
  • FIG. 2B is a real-time external parameter calculation flowchart of the multi-degree-of-freedom movable vision system of the present invention.
  • FIG. 3 is a checkerboard calibration template diagram used by the calibration algorithm described in a specific embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a camera coordinate system and a rotation axis coordinate system of various degrees of freedom in the present invention.
  • Embodiment 1 of the present invention takes the three-degree-of-freedom movable vision system as an example to describe the present invention in detail.
  • the multi-degree-of-freedom movable vision system of the present invention includes a camera component, a calculation component, and a control component.
  • the calculation component is signal-connected to the imaging component, and the influence of the motion of each degree of freedom on the imaging component can be calibrated according to the images acquired by the imaging component in various poses and corresponding pose information.
  • the control unit is signal-connected to the imaging unit and the computing unit, and is used to control the imaging unit to acquire images in various postures, and to control the computing unit to perform a calibration operation.
  • the method for calibrating a multi-degree-of-freedom movable vision system according to the present invention is shown in FIG. 2A.
  • the calibration steps mainly include:
  • Step S11 Record position information of each degree of freedom of movement at the initial position to determine a reference position
  • Step S12 driving each degree of freedom in turn, using the camera component to separately capture images, and recording position information corresponding to each degree of freedom of movement when the images are collected;
  • Step S13 Calculate a calibration result using a calculation component.
  • the posture of the camera component relative to the initial position can be accurately calculated according to the calibration result and the position information of each degree of freedom of movement.
  • the above-mentioned actual application calculation steps mainly include:
  • Step S21 Obtain a calibration result of the multi-degree-of-freedom movable vision system
  • Step S22 Acquire position information of each current degree of freedom of movement
  • Step S23 Substituting the information obtained in steps S21 and S22 into a multi-degree-of-freedom movable vision system external parameter solution model to calculate the rotation and translation information of the current camera component relative to the recording reference position.
  • the system has three degrees of freedom of movement, which are defined as pitch, yaw, and roll degrees of freedom in turn, corresponding to the space rectangular coordinate system.
  • the pitch is rotated around the X axis.
  • yaw rotates around the Y axis, and roll rotates around the Z axis.
  • the three degrees of freedom are provided by three single-degree-of-freedom kinematic elements. In other embodiments, they may also be provided by a single-degree-of-freedom plus one two-degree-of-freedom kinematic element. Motion elements provided.
  • the three moving elements are rigidly connected, so that the motion of one degree of freedom does not affect the motion of the other degrees of freedom.
  • the moving element that provides pitch freedom can be fixed on the base by a rigid connection, and one or more cameras that can continuously take pictures can be connected to the moving element that provides roll freedom.
  • Each of the above three degrees of freedom motion is driven by a motor, and each of the motors is provided with a position recording device to obtain real-time output position information of the motor.
  • the output position information of each motor is represented by an angle value.
  • the output position information of a motor used to drive the movement of a roll degree of freedom (hereinafter also referred to as a first degree of freedom) is represented as ⁇ 1, which represents a camera.
  • the rotation angle of the main axis of Z relative to the Z axis; the output position information of the motor used to drive the movement of yaw degrees of freedom (hereinafter also referred to as the second degree of freedom) is represented as ⁇ 2, which represents the rotation angle of the camera's main axis with respect to the Y axis;
  • Output position information of a motor for driving pitch freedom (hereinafter also referred to as a third degree of freedom) motion is represented as ⁇ 3, which represents the rotation angle of the main axis of the camera with respect to the X axis.
  • the calibration method includes the following steps:
  • the first degree of freedom roll As an example.
  • the three-degree-of-freedom movable vision system cannot guarantee that the optical center is on the rotation axis during the machining process, nor can it ensure that each rotation axis is parallel to the corresponding coordinate axis in the camera coordinate system. Posture transformation. And for a rigid body, the relative positional relationship between the camera's main axis and the respective rotation axis remains unchanged. In order to solve the positional relationship between each axis of rotation and the camera coordinate system, a mathematical model is established as shown in Figure 4.
  • C ⁇ O c -x c y c z c ⁇ is the camera coordinate system (hereinafter referred to as the C system), and the camera coordinates are
  • the optical center O c is the starting point, and the vertical line intersects the roll DOF axis l at the point O b and extends the vertical line as the ray O b x b .
  • O b as the origin
  • O b x b as the x axis
  • the rotation axis as the y axis
  • the right-hand rule determines that the z axis establishes the rotation axis coordinate system B ⁇ O b -x b y b z b ⁇ (hereinafter referred to as the B system).
  • the result obtained by the calibration which represents the transformation matrix of any point P in space from B system to C system.
  • the rotation axis coordinate system B and the camera coordinate system C both become new coordinate systems B ′ and C ′.
  • the B system is equivalent to rotating the angle ⁇ around the z b axis to B ′.
  • the coordinates P B in the B system and the coordinates P B ′ in the B ′ system satisfy:
  • a new camera coordinate system C ′ is obtained after ⁇ angle rotation.
  • the relative position T BC remains unchanged during rotation of the B series and the C series. Points in the same space still satisfy equation (1) in the coordinate systems B ′ and C ′ after rotation.
  • the position information ( ⁇ p1 , ⁇ p2 , ⁇ p3 ) is obtained for the three degrees of freedom, and the rotation angles of the three degrees of freedom are ( ⁇ p1 - ⁇ I1 , ⁇ p2 - ⁇ I2 , ⁇ p3 - ⁇ I3 ), converted into a rotation matrix (R p1 , R p2 , R p3 ), and substituted into the model to obtain the external parameter results of the three-degree-of-freedom movable vision system relative to the reference camera position:
  • R ′ and T ′ are the rotation matrix and translation matrix results of the camera with respect to the reference position after the movable three-degree-of-freedom vision system moves.
  • the method for calibrating the movable vision system mentioned in the present invention is not limited to the embodiments described above, and various modifications and improvements can be made without departing from the principle of the present invention.
  • the protection content of the present invention is not limited to the above embodiments. Without departing from the spirit and scope of the inventive concept, variations and advantages that can be conceived by those skilled in the art are included in the present invention, and the scope of protection is the appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

一种多自由度可动视觉***的标定方法,在摄像部件前方放置标定模板,转动摄像部件各个运动自由度,使用摄像部件记录下若干包含标定模板特征的图像,同时记录下获取相应图像时的各个运动自由度的位置信息,使用计算部件计算出摄像部件与各个运动自由度的标定结果。可动视觉***包括摄像部件、计算部件、控制部件。本方法具有良好的计算实时性,可通过各个运动自由度的位置信息进行计算,实时获取摄像部件位置变化的参数信息,并可以解决因机械加工组装所造成的与理论设计存在的误差问题,具有广泛应用前景。

Description

一种多自由度可动视觉***的标定方法 技术领域
本发明涉及一种视觉***的标定方法,特别涉及多自由度可动视觉***的标定方法。
背景技术
多自由度视觉***应用广泛,其结构特征是具有多个运动自由度的视觉***,例如常见的手持增稳相机或者四旋翼航拍摄像机,一般是三个旋转自由度结合一个相机模块,相机模块可以受三个旋转自由度控制发生转动,或者如街头可见的PTZ监控相机,有两个自由度的运动和可变焦的相机,实现跟踪或者大视场角的监控。此类应用场景往往不需要已知相机与基座之间的精确相对位置关系,但是对于例如在末端安装有视觉模块的机器手臂,往往由于需要精密操作,是需要精确已知基座与视觉模块间精确的相对位置关系。
影响可动视觉***的摄像元件与基座的精确位置获取主要有两大问题,一是在任意自由度发生运动后如何得到摄像元件的位置变化,还有一个问题就是受限于现有机械加工水平以及安装工艺的限制,无法保证各个运动自由度机械转轴严格精准的按照设计要求运动,同时摄像部件与机械转轴的相对位置也难以保证,这就给直接计算可动视觉***的摄像元件与基座的精确位置带来了很大的困难。
发明内容
为克服现有技术存在的问题,本发明提出了一种多自由度可动视觉***的标定方法,通过事先标定出摄像元件与基座是如何受各个自由度运动影响的,来实现在多自由度可动视觉***的实际使用过程中获取摄像元件与基座的精确相对位置。
本发明提及的一种多自由度可动视觉***的标定方法,其中,所述多自由度可动视觉***包括:摄像部件,所述摄像部件包括至少一个可获取连续图像的摄像元件;所述摄像部件有至少一个运动自由度且安装于基座;每个所述运 动自由度安装有可获取转动或平移信息的位置获取装置;计算部件,所述计算部件可计算处理图像信息以及各个运动自由度的运动信息;控制部件,所述控制部件可控制各个运动自由度的运动;所述标定方法包括:在所述摄像元件前方放置标定模板,驱动摄像元件在各个运动自由度下运动,使用摄像元件记录下若干包含标定模板特征的图像,同时记录下获取相应图像时的各个运动自由度的位置信息,使用计算部件计算出摄像元件与各个运动自由度的标定结果。
可选的,所述摄像部件例如可以是由一个电机串着一个电机,末端带有一个摄像元件,从而通过相串联的电机提供多个运动自由度,针对每个运动自由度主要是获取各自的转角信息,当然,对于采用线性马达并通过马达输出的平移来产生转动的情况,还可以获取平移信息。
可选的,所述标定方法具体包括以下步骤:1)记录基准位置处各运动自由度的位置信息;2)在摄像部件前方放置标定模板,依次使各个运动自由度发生若干次改变,分别采集图像与各运动自由度的位置信息;3)利用采集的图像与各运动自由度的位置信息使用标定算法计算出各运动自由度的标定结果。
完成标定过程后获得标定结果(包括基准位置信息、各运动自由度标定结果),在实际应用中,结合获取的视觉***各运动自由度位置信息,即可获取当前位置摄像元件相对于基准位置处的旋转平移关系。
在一具体实施方案中,本发明标定方法包括如下步骤:
(1)基准信息确定与获取:选择任意一个位置作为基准位置,记录下基准位置处各运动自由度位置信息{θ Ia},a=1...N,N为视觉***运动自由度数目,a代表运动自由度序号,按照各个运动自由度连接顺序依次编号。
(2)运动轴标定:将标定模板固定放置于可动***前方,保证摄像部件能够拍摄到完整的标定模板;然后使第a个运动轴转动若干次,每转动一次,让该运动轴所连接的摄像部件进行取图操作得到图片M ai,同时记录下此时该运动轴的位置信息得到{M aiai1,...,θ aiN};利用多次转动得到图像序列(M a1,M a2,M a3,...)。使用现有成熟的相机标定算法(例如张氏标定法)进行计算得 到拍摄的每张图片相对于标定模板位置的旋转和平移变换矩阵{R aCi,T aCi},结合拍摄该图片时记录下的运动轴位置信息得到{R aCi,T aCiai},i=1...P a,P a为***在标定过程第a个运动轴转动取到的全部有效图片数目;将运动轴标定结果中运动轴相对于基准位置的转动角度{θ ai1I1,...,θ aiNIN},i=1...P a转化为旋转矩阵{R B1i,...,R BNi},i=1...P a,将上述结果代入本发明各个运动自由度转轴坐标系与摄像元件坐标系关系模型:
Figure PCTCN2019096545-appb-000001
Figure PCTCN2019096545-appb-000002
得到若干组等式,求解该组等式最优解{R BCa,T BCa},a=1...N,R BCa,T BCa分别为第a个运动轴相对于摄像元件坐标系的旋转矩阵和平移矩阵,N为视觉***运动自由度数目,R Bai为第a个运动轴相对于基准位置的转动角度转化的旋转矩阵,R aCi,T aCi分别为第a个运动轴运动时拍摄的第i张图片图像坐标系相对于标定模板位置的旋转和平移变换矩阵,P a为***在标定过程第a个运动轴转动取到的全部有效图片数目。
(3)标定结果计算:(1)、(2)步骤得到了基准位置信息{θ Ia},a=1...N,和运动轴标定结果{R BCa,T BCa},a=1...N;在可动***摄像部件发生运动后,各个运动自由度得到位置信息(θ p1p2p3,...,θ pN),则各个运动自由度的转动角度为(θ p1I1p2I2p3I3,...,θ pNIN),转化为旋转矩阵(R p1,R p2,R p3,...,R pN),代入本发明运动自由度转轴坐标系与摄像元件坐标系关系模型(即下式)得到运动后的外参结果:
Figure PCTCN2019096545-appb-000003
Figure PCTCN2019096545-appb-000004
所得R′和T′分别是可动***发生运动后的摄像机相对基准位置时的旋转矩阵和平移矩阵结果,R BCa,T BCa分别为第a个运动轴相对于摄像元件坐标系的旋转 矩阵和平移矩阵,N为视觉***运动自由度数目,R pa为第a个运动轴相对于基准位置的转动角度转化的旋转矩阵。
本发明中,所述摄像部件用以获取连续图像;所述摄像部件有至少一个运动自由度,经由各个运动自由度连接固定于基座;所述每个运动自由度安装有可获取转动或平移信息的位置获取装置。所述摄像部件有至少一个摄像元和至少一个镜头组。
本发明中,所述标定模板为可提取固定特征并已知特征相对位置信息的标定模板。例如人工制作的各类2D、3D靶标,或者一些固定的自然场景,要求能够通过图像处理算法提取固定的特征信息,并且可以获知这些特征信息之间的相对位置关系。一般在实际应用中,为提高标定结果精度和降低标定过程难度,多使用精密加工的3D立体靶标或者2D平面靶标作为标定模板。
本发明中,所述标定结果是指各转轴相对于摄像元件光心的旋转矩阵和平移矩阵。所述标定结果结合各个运动自由度位置信息即可计算出实际应用中需要的摄像元件与基准位置处的相对位置关系,即旋转矩阵与平移矩阵。
现有技术中,由于在可动***工作时摄像机会发生运动且各个轴由于加工组装带来的误差影响,无法通过直接计算来获取摄像机精确的变化的外参信息。而本发明提供的多自由度可动视觉***的标定方法,即使摄像机部件位置发生了变化且各个轴、摄像部件的相对位置由于机械加工、组装等环节导致的存在误差的情况下,也能根据各个轴上的位置记录组件实时计算摄像部件与基准位置处的相对位置关系的标定算法。且,还可由此衍生出可得到摄像部件处于任意两处位置之间的精确的旋转与平移关系。本发明使用了一些现有技术中基于3D立体靶标的相机标定算法或基于2D平面靶标的相机标定算法等作为标定的基础。
与现有技术相比,本发明有益效果包括:解决了即使在可动***中摄像元件发生运动的情形下,仍然可以通过计算获取摄像元件相对于基准位置的精确旋转平移参数信息;本发明提出的方法具有良好的计算实时性,即在一次标定之后,后续可通过各个运动自由度的位置信息进行计算,实时获取摄像元件位 置变化的参数信息;本发明能显著有效地消除在机械加工、组装过程中带来的与理论设计存在一些不可抗力的误差问题。
附图说明
图1是本发明多自由度可动视觉***的示意框图。
图2A是本发明多自由度可动视觉***的标定流程图;
图2B是本发明多自由度可动视觉***的实时外参计算流程图。
图3是本发明具体实施例中所述标定算法用到的棋盘格标定模板图。
图4是本发明中各个运动自由度的摄像机坐标系与转轴坐标系示意图。
具体实施方式
结合以下具体实施例和附图,对本发明作进一步的详细说明。为了简洁,在描述本发明实施例的各流程、条件、实验方法等时,省略了一些本领域已知的内容,本发明对此类内容没有特别的限制。以下将结合图1~图4对本发明多自由度可动视觉***的标定方法作进一步的详细描述,需要说明的是本发明的标定方法可应用于具有任意数目的运动自由度的多自由度可动视觉***,以下实施例仅为举例说明,而并非对本发明的限制。
实施例1
本发明实施例1以三自由度可动视觉***为例详述本发明。
如图1所示,本发明多自由度可动视觉***包括摄像部件、计算部件以及控制部件。所述计算部件与摄像部件信号连接,可根据摄像部件在各种位姿下获取的图像及相应的位姿信息标定出各个自由度的运动对摄像部件的影响。所述控制部件与所述摄像部件和计算部件信号连接,用于控制摄像部件在各种位姿下获取图像,以及控制所述计算部件进行标定运算。
本发明多自由度可动视觉***的标定方法,如图2A所示,其标定步骤主要包括:
步骤S11、记录初始位置下各个运动自由度的位置信息以确定基准位置;
步骤S12、依次驱动各个自由度,使用摄像部件分别采集图像,并记录采集图像时相应各个运动自由度的位置信息;
步骤S13、利用计算部件计算出标定结果。
当标定完成后,可根据标定结果及各个运动自由度的位置信息精确计算出摄像部件相对于初始位置的位姿。如图2B所示,上述实际应用计算步骤主要包括:
步骤S21、获取多自由度可动视觉***标定结果;
步骤S22、获取当前各个运动自由度的位置信息;
步骤S23、将步骤S21及S22获取的信息代入多自由度可动视觉***外参求解模型计算出当前摄像部件相对于记录基准位置的旋转平移信息。
以一种三自由度的可动视觉***来举例说明,该***有三个运动自由度,依次定义为pitch、yaw、roll运动自由度,对应到空间直角坐标系中,pitch是围绕X轴旋转,yaw是围绕Y轴旋转,roll是围绕Z轴旋转。本实施例中,所述三个自由度通过三个单自由度运动元件提供,在其他实施例中,也可以通过一个单自由度加一个双自由度运动元件提供,还可以通过一个三自由度运动元件提供。优选的,三个所述运动元件之间刚性相连,使得其中一个自由度的运动不会对其他自由度的运动产生影响。优选的,可将提供pitch自由度的运动元件通过刚性连接方式固定在基座上,可在提供roll自由度的运动元件上连接一个或多个可以连续摄像的摄像机。上述三个自由度的运动各自通过一个电机驱动,每个所述电机均带有位置记录装置以获取电机的实时输出位置信息。在本实施例中,各电机的输出位置信息以角度值表示,具体的,用于驱动roll自由度(以下也称为第一自由度)运动的电机的输出位置信息表示为θ1,其代表摄像机的主轴相对于Z轴的旋转角度;用于驱动yaw自由度(以下也称为第二自由度)运动的电机的输出位置信息表示为θ2,其代表摄像机的主轴相对于Y轴的旋转角度;用于驱动pitch自由度(以下也称为第三自由度)运动的电机的输出位置信息表示为θ3,其代表摄像机的主轴相对于X轴的旋转角度。
标定方法包括以下步骤:
1)获取基准位置信息
保持标定模板在可动视觉***的摄相机前方,保证摄相机能够拍摄到完整的标定模板,记录下此时三个电机的初始位置信息(θ I1I2I3),作为基准位置信息。
2)运动轴标定
保持标定模板的位置不变,依次驱动各个运动自由度,即当其中一个自由度发生变化时,另外两个自由度保持不变。先以第一自由度roll为例,每转动一次运动轴(对应第i位置),使用摄相机在相应的位置下拍摄Q1个有效图像M 1ij,j=1,2,…,Q1,同时记录下此时该运动轴的位置信息得到{M 1ij,θ 11i,θ 12i,θ 13i},i=1,2,…,P1,其中P1为第一自由度运动轴在标定过程中的转动次数。利用每个转动位置下得到的一组图像并使用与图3所示的平面棋盘格标定模板相应的标定算法(例如张正友标定算法)进行计算,得到各个转动位置下所拍摄的图像相对于标定模板的旋转和平移结果{R 1Ci,T 1Ci},结合拍摄该图像时记录下的运动轴位置信息得到第一自由度标定中间结果{R 1Ci,T 1Ci11i12i13i},i=1...P 1。重复上述过程以分别获取第二运动自由度yaw、第三运动自由度pitch的相关标定中间结果,并汇总为{R aCi,T aCia1ia2ia3i},i=1...P a,a=1,2,3。
三自由度可动视觉***在机械加工过程中无法保证光心在转轴上,也不能保证各转轴与摄像机坐标系中的相应坐标轴平行,需要通过转轴编码器的输出值来计算摄像机坐标系的位姿变换。而且对于一个刚体来说,摄像机主轴与各自由度转轴之间的相对位置关系保持不变。为了求解每个转轴与摄像机坐标系之间的位置关系,建立数学模型如图4所示,C{O c-x cy cz c}为摄像机坐标系(以下简称C系),以摄像机坐标系光心O c为起点,作垂线交roll自由度转轴l于点O b并延长垂线作射线O bx b。以O b为原点,O bx b为x轴,转轴为y轴,右手定则确定z轴建立转轴坐标系B{O b-x by bz b}(以下简称B系)。
设O cO b=d(d为定值,由机械加工工艺决定),则O c在B系下的坐标为t=(-d,0,0) T。设摄像机坐标系C相对于转轴坐标系B的旋转矩阵为R BC,则对于空间中任意一点P在C系中的坐标PC与在B系中的坐标PB满足变换关系:P B=R BCP C+t,在齐次坐标下表示为:
Figure PCTCN2019096545-appb-000005
Figure PCTCN2019096545-appb-000006
即为所要标定得到的结果,其表示空间中任意一点P从B系变换到C系的变换矩阵。
当绕转轴转动θ角后,转轴坐标系B和摄像机坐标系C均变成新的坐标系B′和C′。绕转轴转动时,B系相当于绕z b轴转动θ角到B′,则对同一点P,在B系中的坐标P B和在B′系中的坐标P B′满足:
Figure PCTCN2019096545-appb-000007
其中
Figure PCTCN2019096545-appb-000008
(根据旋转角度调整此值)
同样,对于摄像机坐标系,在转动θ角后得到新的摄像机坐标系C′。在标定过程中,可以通过固定的棋盘格计算摄像机坐标系的变换。设空间点P在棋盘格的世界坐标系中坐标为x w,则在C系和C′系中计算出的棋盘格外参分别为T CW和T C′W,有P C=T CWx w,P C′=T C′Wx w,则
Figure PCTCN2019096545-appb-000009
根据刚体的性质,B系和C系在转动过程中保持相对位置关系T BC不变。同一个空间中的点在转动后的坐标系B′和C′中依然满足(1)式
Figure PCTCN2019096545-appb-000010
由式(2)、(3)、(4)得,
Figure PCTCN2019096545-appb-000011
该式中,
Figure PCTCN2019096545-appb-000012
为转轴标定过程需要求解的量。
Figure PCTCN2019096545-appb-000013
是每次转动根据 位置传感器输出的矩阵,
Figure PCTCN2019096545-appb-000014
为每次转动过程中相机计算出来的矩阵T C′C,通过每次转动计算出T BC来标定转轴与摄像机坐标系的关系。
本发明需要对每个自由度进行独立标定,取一组数据{R aCi,T aCia1ia2ia3i},i=1...P a,a=1...3,计算该组数据相对于基准位置电机转动角度为{θ a1iI1a2iI2a3iI3},i=1...P a,a=1...3,计算其旋转矩阵得:
Figure PCTCN2019096545-appb-000015
将其代入式(5)可得到一组方程:
Figure PCTCN2019096545-appb-000016
全部数据代入式(7)可得到P组方程,求解该组方程得最优解{R BCa,T BCa},a=1...3。
3)实时标定结果计算:整个标定结果有:基准位置信息(θ I1I2I3)和运动轴标定结果{R BCa,T BCa},a=1...3;在可动视觉***摄像部件发生运动后,3个运动自由度得到位置信息(θ p1p2p3),则3个自由度的转动角度分别为(θ p1I1p2I2p3I3),转化为旋转矩阵(R p1,R p2,R p3),代入模型得到三自由度可动视觉***运动后相对于基准相机位置的外参结果:
Figure PCTCN2019096545-appb-000017
Figure PCTCN2019096545-appb-000018
所得R′和T′分别是该可动三自由度视觉***发生运动后的摄像机相对基准位置时的旋转矩阵和平移矩阵结果。
本发明提及的可动视觉***标定方法并不限于上述说明的实施例,在不脱离本发明原理的情况下,还可以做各种变形与改进。本发明的保护内容不局限于以上实施例。在不背离本发明构思的精神和范围下,本领域技术人员能够想到的变化和优点都被包括在本发明中,并且以所附的权利要求书为保护范围。

Claims (6)

  1. 一种多自由度可动视觉***的标定方法,其特征在于,所述多自由度可动视觉***包括:
    摄像部件,所述摄像部件包括至少一个可获取连续图像的摄像元件;所述摄像部件有至少一个运动自由度且安装于基座;每个所述运动自由度安装有可获取转动或平移信息的位置获取装置;
    计算部件,所述计算部件可计算处理图像信息以及各个运动自由度的运动信息;
    控制部件,所述控制部件可控制各个运动自由度的运动;
    所述标定方法包括:在所述摄像元件前方放置标定模板,驱动摄像元件在各个运动自由度下运动,使用摄像元件记录下若干包含标定模板特征的图像,同时记录下获取相应图像时的各个运动自由度的位置信息,使用计算部件计算出摄像元件与各个运动自由度的标定结果。
  2. 如权利要求1所述的多自由度可动视觉***的标定方法,其特征在于,所述标定方法具体包括以下步骤:
    1)记录基准位置处各运动自由度的位置信息;
    2)在摄像部件前方放置标定模板,依次使各个运动自由度发生若干次改变,分别采集图像与各运动自由度的位置信息;
    3)利用采集的图像与各运动自由度的位置信息使用标定算法计算出各运动自由度的标定结果。
  3. 如权利要求1或2所述的多自由度可动视觉***的标定方法,其特征在于,所述标定模板包括自然静态场景和人造的标准靶标中的一种,且所述自然静态场景作为标定模板必须能提供固定的图像特征和已知的图像特征间的位置信息,所述人造的标准靶标包括2D平面靶标、3D立体靶标中的至少一种。
  4. 如权利要求1或2所述的多自由度可动视觉***的标定方法,其特征在于,所述标定结果包括各个运动自由度的转轴相对于摄像元件坐标系的旋转矩 阵和平移矩阵。
  5. 如权利要求1或2所述的多自由度可动视觉***的标定方法,其特征在于,根据所述标定结果与各个运动自由度的当前位置信息,结合建立的各个运动自由度的转轴坐标系与摄像元件坐标系之间的关系模型即可计算出摄像元件的当前位置与基准位置之间的相对位置关系,所述相对位置关系由旋转矩阵与平移矩阵表示。
  6. 如权利要求5所述的多自由度可动视觉***的标定方法,其特征在于,所述关系模型为:
    Figure PCTCN2019096545-appb-100001
    Figure PCTCN2019096545-appb-100002
    其中R′和T′分别是摄像元件的当前位置相对于基准位置的旋转矩阵和平移矩阵,R BCa,T BCa分别为第a个运动自由度的转轴相对于摄像元件坐标系的旋转矩阵和平移矩阵,N为所述可动视觉***的运动自由度数目,R pa为第a个运动自由度的转轴相对于基准位置的转动角度转化的旋转矩阵。
PCT/CN2019/096545 2018-09-28 2019-07-18 一种多自由度可动视觉***的标定方法 WO2020063058A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/279,289 US11847797B2 (en) 2018-09-28 2019-07-18 Calibration method for multi-degree-of-freedom movable vision system
DE112019004853.8T DE112019004853T5 (de) 2018-09-28 2019-07-18 Kalibrierungsverfahren für ein Bildverarbeitungssystem mit mehreren Freiheitsgraden
JP2021516454A JP7185860B2 (ja) 2018-09-28 2019-07-18 多軸可動視覚システムのキャリブレーション方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811141453.0 2018-09-28
CN201811141453.0A CN109360243B (zh) 2018-09-28 2018-09-28 一种多自由度可动视觉***的标定方法

Publications (1)

Publication Number Publication Date
WO2020063058A1 true WO2020063058A1 (zh) 2020-04-02

Family

ID=65348274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096545 WO2020063058A1 (zh) 2018-09-28 2019-07-18 一种多自由度可动视觉***的标定方法

Country Status (5)

Country Link
US (1) US11847797B2 (zh)
JP (1) JP7185860B2 (zh)
CN (1) CN109360243B (zh)
DE (1) DE112019004853T5 (zh)
WO (1) WO2020063058A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487677A (zh) * 2021-06-07 2021-10-08 电子科技大学长三角研究院(衢州) 一种基于任意分布式配置的多ptz相机的室外中远距场景标定方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360243B (zh) 2018-09-28 2022-08-19 安徽爱观视觉科技有限公司 一种多自由度可动视觉***的标定方法
CN113489945A (zh) * 2020-12-18 2021-10-08 深圳市卫飞科技有限公司 一种目标定位方法、装置、***及计算机可读存储介质
CN113379846B (zh) * 2021-05-28 2022-08-09 上海汇像信息技术有限公司 基于方向指示标志点标定模板的转台转轴标定方法
CN115793698A (zh) * 2023-02-07 2023-03-14 北京四维远见信息技术有限公司 自动姿态控制***和方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165116A1 (en) * 2008-12-30 2010-07-01 Industrial Technology Research Institute Camera with dynamic calibration and method thereof
CN103854291A (zh) * 2014-03-28 2014-06-11 中国科学院自动化研究所 四自由度双目视觉***中的摄像机标定方法
CN104354167A (zh) * 2014-08-29 2015-02-18 广东正业科技股份有限公司 一种机器人手眼标定方法及装置
CN106335061A (zh) * 2016-11-11 2017-01-18 福州大学 一种基于四自由度机器人的手眼关系标定方法
CN107498558A (zh) * 2017-09-19 2017-12-22 北京阿丘科技有限公司 全自动手眼标定方法及装置
CN107883929A (zh) * 2017-09-22 2018-04-06 中冶赛迪技术研究中心有限公司 基于多关节机械臂的单目视觉定位装置及方法
CN109360243A (zh) * 2018-09-28 2019-02-19 上海爱观视觉科技有限公司 一种多自由度可动视觉***的标定方法

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1766580A2 (en) * 2004-07-14 2007-03-28 Braintech Canada, Inc. Method and apparatus for machine-vision
CN201355241Y (zh) * 2008-10-30 2009-12-02 北京航空航天大学 基于视觉的空间目标位姿测量装置
US9801539B2 (en) * 2013-05-23 2017-10-31 Stiftung Caesar—Center Of Advanced European Studies And Research Ocular Videography System
CN104298244B (zh) * 2013-07-17 2016-12-28 刘永 一种工业机器人三维实时高精度定位装置的定位方法
CN103759716B (zh) * 2014-01-14 2016-08-17 清华大学 基于机械臂末端单目视觉的动态目标位置和姿态测量方法
CN104236456B (zh) * 2014-09-04 2016-10-26 中国科学院合肥物质科学研究院 一种基于两自由度3d视觉传感器的机器人手眼标定方法
CN105981074B (zh) * 2014-11-04 2018-02-02 深圳市大疆创新科技有限公司 用于标定成像装置的***、方法和装置
CN105014667B (zh) * 2015-08-06 2017-03-08 浙江大学 一种基于像素空间优化的相机与机器人相对位姿标定方法
CN105234943B (zh) * 2015-09-09 2018-08-14 大族激光科技产业集团股份有限公司 一种基于视觉识别的工业机器人示教装置及方法
US10812778B1 (en) * 2015-11-09 2020-10-20 Cognex Corporation System and method for calibrating one or more 3D sensors mounted on a moving manipulator
EP3374967B1 (en) * 2015-11-11 2023-01-04 Zhejiang Dahua Technology Co., Ltd Methods and systems for binocular stereo vision
CN105809689B (zh) * 2016-03-09 2018-10-26 哈尔滨工程大学 基于机器视觉的船体六自由度测量方法
CN105869150A (zh) * 2016-03-24 2016-08-17 杭州南江机器人股份有限公司 一种基于视觉识别的可移动平台标定装置及标定方法
CN106156425B (zh) * 2016-07-05 2019-07-09 北京邮电大学 一种模块化机械臂的快速通用运动学建模方法
US10723022B2 (en) * 2016-09-16 2020-07-28 Carbon Robotics, Inc. System and calibration, registration, and training methods
CN107081755A (zh) * 2017-01-25 2017-08-22 上海电气集团股份有限公司 一种机器人单目视觉引导***的自动标定装置
CN107256568B (zh) * 2017-05-08 2020-10-27 西安交通大学 一种高精度机械臂手眼相机标定方法及标定***
US11308645B2 (en) * 2017-05-12 2022-04-19 The Board Of Trustees Of The Leland Stanford Junior University Apparatus and method for wide-range optical tracking during medical imaging
EP3511122B1 (en) * 2017-11-07 2020-04-29 Dalian University of Technology Monocular vision six-dimensional measurement method for high-dynamic large-range arbitrary contouring error of cnc machine tool
US11338441B2 (en) * 2017-12-01 2022-05-24 Delta Electronics, Inc. Calibration system for robot tool and calibration method for the same
CN108447097B (zh) * 2018-03-05 2021-04-27 清华-伯克利深圳学院筹备办公室 深度相机标定方法、装置、电子设备及存储介质
JP7203105B2 (ja) * 2018-06-29 2023-01-12 株式会社小松製作所 撮像装置の校正装置、監視装置、作業機械および校正方法
CN109242913B (zh) * 2018-09-07 2020-11-10 百度在线网络技术(北京)有限公司 采集器相对参数的标定方法、装置、设备和介质
CN109242914B (zh) * 2018-09-28 2021-01-01 上海爱观视觉科技有限公司 一种可动视觉***的立体标定方法
CN114765667A (zh) * 2021-01-13 2022-07-19 安霸国际有限合伙企业 用于多视图拼接的固定图案校准

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165116A1 (en) * 2008-12-30 2010-07-01 Industrial Technology Research Institute Camera with dynamic calibration and method thereof
CN103854291A (zh) * 2014-03-28 2014-06-11 中国科学院自动化研究所 四自由度双目视觉***中的摄像机标定方法
CN104354167A (zh) * 2014-08-29 2015-02-18 广东正业科技股份有限公司 一种机器人手眼标定方法及装置
CN106335061A (zh) * 2016-11-11 2017-01-18 福州大学 一种基于四自由度机器人的手眼关系标定方法
CN107498558A (zh) * 2017-09-19 2017-12-22 北京阿丘科技有限公司 全自动手眼标定方法及装置
CN107883929A (zh) * 2017-09-22 2018-04-06 中冶赛迪技术研究中心有限公司 基于多关节机械臂的单目视觉定位装置及方法
CN109360243A (zh) * 2018-09-28 2019-02-19 上海爱观视觉科技有限公司 一种多自由度可动视觉***的标定方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487677A (zh) * 2021-06-07 2021-10-08 电子科技大学长三角研究院(衢州) 一种基于任意分布式配置的多ptz相机的室外中远距场景标定方法
CN113487677B (zh) * 2021-06-07 2024-04-12 电子科技大学长三角研究院(衢州) 一种基于任意分布式配置的多ptz相机的室外中远距场景标定方法

Also Published As

Publication number Publication date
DE112019004853T5 (de) 2021-07-08
CN109360243B (zh) 2022-08-19
JP2022500793A (ja) 2022-01-04
US20210407135A1 (en) 2021-12-30
CN109360243A (zh) 2019-02-19
JP7185860B2 (ja) 2022-12-08
US11847797B2 (en) 2023-12-19

Similar Documents

Publication Publication Date Title
WO2020063058A1 (zh) 一种多自由度可动视觉***的标定方法
CN110728715B (zh) 一种智能巡检机器人摄像机角度自适应调整方法
WO2020024178A1 (zh) 一种手眼标定方法、***及计算机存储介质
WO2020063059A1 (zh) 一种可动视觉***的立体标定方法
TWI555379B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
JP4825980B2 (ja) 魚眼カメラの校正方法。
CN107471218B (zh) 一种基于多目视觉的双臂机器人手眼协调方法
WO2021012124A1 (zh) 机器人手眼标定方法、装置、计算设备、介质以及产品
CN107578450B (zh) 一种用于全景相机装配误差标定的方法及***
CN111461963B (zh) 一种鱼眼图像拼接方法及装置
CN112949478A (zh) 基于云台相机的目标检测方法
KR101111503B1 (ko) 전방향 피티지 카메라 제어 장치 및 그 방법
US9990739B1 (en) Method and device for fisheye camera automatic calibration
WO2018209592A1 (zh) 一种机器人的运动控制方法、机器人及控制器
CN113724337B (zh) 一种无需依赖云台角度的相机动态外参标定方法及装置
CN105469412A (zh) 一种ptz摄像机装配误差的标定方法
WO2022040983A1 (zh) 基于cad模型的投影标记和机器视觉的实时注册方法
CN111768449A (zh) 一种双目视觉结合深度学习的物体抓取方法
CN115345942A (zh) 空间标定方法、装置、计算机设备和存储介质
Wang et al. LF-VIO: A visual-inertial-odometry framework for large field-of-view cameras with negative plane
CN113436267B (zh) 视觉惯导标定方法、装置、计算机设备和存储介质
WO2019100216A1 (zh) 3d建模方法、电子设备、存储介质及程序产品
JP2014096761A (ja) 画像処理装置、その制御方法、および制御プログラム
US20230123629A1 (en) 3d computer-vision system with variable spatial resolution
JP2006013854A (ja) カメラ制御装置及びカメラ制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19865367

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021516454

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19865367

Country of ref document: EP

Kind code of ref document: A1