WO2022120567A1 - Automatic calibration system based on visual guidance - Google Patents

Automatic calibration system based on visual guidance

Info

Publication number
WO2022120567A1
WO2022120567A1 (PCT/CN2020/134521, CN2020134521W)
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
camera
cameras
faceted
calibration plate
Prior art date
Application number
PCT/CN2020/134521
Other languages
English (en)
Chinese (zh)
Inventor
程俊
张能波
郭海光
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 filed Critical 深圳先进技术研究院
Priority to PCT/CN2020/134521 priority Critical patent/WO2022120567A1/fr
Publication of WO2022120567A1 publication Critical patent/WO2022120567A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • The invention relates to the technical field of graphics and image processing, and more particularly to an automatic calibration system based on visual guidance.
  • Multi-camera joint calibration is used to solve for the relative pose relationship (R, T) between multiple robots.
  • Camera calibration technology can be roughly divided into two categories.
  • The first category uses a specially made calibration board to determine the camera parameters.
  • Commonly used traditional camera calibration methods include the Faugeras calibration method, the Tsai two-step method, and the Zhang Zhengyou planar calibration method.
  • The Faugeras method calibrates a linear camera model by solving a least-squares problem over a system of linear equations.
  • The Tsai method requires some parameter values in advance: it first solves part of the parameters by a linear method and then recovers the remaining camera parameters by nonlinear optimization.
  • The Zhang Zhengyou method uses multiple images of a planar calibration plate taken at different viewing angles and calibrates the camera parameters from the resulting homography matrices.
  • The second category comprises self-calibration methods, in which calibration is performed from the correspondences between images generated as the camera moves.
  • Examples include self-calibration based on the plane at infinity, on the absolute quadric, and on the Kruppa equations.
  • Self-calibration does not depend on a calibration reference and is highly flexible, but it relies on strong assumptions, and its calibration accuracy is low and its robustness insufficient.
  • Patent application CN110689585A discloses a joint calibration method, apparatus, device, and medium for multi-camera extrinsic parameters.
  • Its scheme is as follows: determine the common viewing area of the cameras and obtain a 2D verification point set in the images of that area; calibrate the extrinsic parameters of each camera separately; reconstruct the 3D coordinates of the verification points and compute a loss function from them; and perform joint calibration according to the loss function to obtain the final extrinsic parameters of each camera.
  • Patent application CN110766759A discloses a multi-camera calibration method and device for cameras without overlapping fields of view.
  • Its scheme is: place a calibration board in the field of view of each camera; compute the pose between each camera coordinate system and its calibration board; use a dual-theodolite three-dimensional coordinate measurement system to measure the 3D coordinates, in theodolite coordinates, of any n points on each calibration board; solve the pose between the two calibration boards with the 3D-3D iterative-closest-point pose estimation; and combine the camera-to-board poses with the theodolite-measured board-to-board pose to compute the relationship between the two cameras.
  • Multi-camera joint calibration is a technology that uses computer vision to obtain the camera intrinsic parameters and the relative poses between multiple cameras.
  • It has been widely used in multi-robot collaborative systems, but existing solutions are often limited to specific scenarios, involve complex, time-consuming, and labor-intensive calibration processes, and cannot handle well the case where cameras share no overlapping field of view.
  • The purpose of the present invention is to overcome the above defects of the prior art and to provide an automatic calibration system based on visual guidance that jointly calibrates multiple cameras without requiring overlapping fields of view between all of them, and that automates the calibration process.
  • An automated calibration system based on visual guidance includes: a calibration vehicle, a rotation-control pan-tilt, a multi-faceted calibration board, multiple cameras, and a data acquisition control module. The multi-faceted calibration board is mounted on the rotation-control pan-tilt. The data acquisition control module controls the calibration vehicle to move while carrying the multi-faceted calibration board and, when the board reaches the overlapping field of view of two adjacent cameras, commands the corresponding cameras to photograph it and obtain its image data. The module stores the captured image data by camera number and board position number and detects the corner points of the multi-faceted calibration board to obtain the relative poses between the cameras.
  • an automated calibration method based on vision guidance includes the following steps:
  • The image data obtained by shooting is stored by camera number and by the position number of the multi-faceted calibration plate, and the corner points of the multi-faceted calibration plate are detected to obtain the relative poses between the cameras.
  • Compared with the prior art, the present invention solves the problem of cameras having no overlapping field of view, automates the entire calibration process, and achieves high calibration accuracy while remaining convenient and fast.
  • FIG. 1 is a schematic diagram of an automated calibration system based on visual guidance according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a calibration trolley according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a five-sided three-dimensional checkerboard calibration plate according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of solving relative poses between cameras with overlapping fields of view according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of solving the relative pose between cameras without overlapping fields of view according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an image partition according to an embodiment of the present invention.
  • The system as a whole includes: a calibration vehicle 10 (taking a four-wheeled trolley as an example), a rotation-control pan-tilt 20, a multi-faceted calibration plate 30, multiple cameras (three are shown; high-resolution industrial cameras can be used), and a data acquisition control module (not shown). The multi-faceted calibration plate 30 is mounted on the rotation-control pan-tilt 20. The data acquisition control module controls the calibration vehicle 10 to move while carrying the multi-faceted calibration plate 30 and, when the plate reaches the overlapping field of view of two adjacent cameras, commands the corresponding cameras to photograph it and obtain its image data. The module stores the captured image data by camera number and plate position number and detects the corners of the multi-faceted calibration plate to obtain the relative poses between the cameras.
  • The calibration process of the provided automatic calibration system mainly includes: step S1, collecting calibration images; step S2, solving the initial relative poses between cameras; step S3, performing re-acquisition control on the images and calculating the re-projection error from the initial relative poses; step S4, establishing the pose error over the global data and optimizing it to obtain the final relative poses between cameras.
  • Step S1 collecting and acquiring calibration images
  • In one embodiment, the calibration vehicle is a four-wheeled smart car.
  • The four-wheeled smart car carries the multi-faceted calibration board and the rotation-control pan-tilt, where the multi-faceted calibration board is mounted on the pan-tilt; a signal transceiver device is also provided.
  • Over wireless Wi-Fi, commands can be sent to the smart car and the pan-tilt, so that the calibration board can be rotated through multiple angles and faces.
  • The calibration plate is a five-sided three-dimensional checkerboard calibration plate. Each side is a 7×10 checkerboard, and the angle between each of the four calibration plates 2, 3, 4, 5 and calibration plate 1 is 45°, so that cameras in different orientations can detect more corner information.
  • The calibration board is mounted on an intelligent pan-tilt with an adjustable angle, which is fixed on the intelligent trolley for movement.
  • The camera captures the corner data of five faces at a time, and the common corner data of two cameras is then used to calculate their relative pose. More data can thus be obtained in less time for later optimization.
  • The data acquisition software integrates multiple programs such as smart-car control, camera shooting control, and data storage and visualization.
  • The process of collecting the calibration images is shown in Figure 4 and includes: the data acquisition software controls the intelligent car to move while carrying the calibration plate; when the calibration plate reaches the overlapping field of view of two adjacent cameras (such as camera 1 and camera 2), the software stops the smart car and sends a shooting command to the cameras; the cameras photograph the calibration board to obtain its image data; the data acquisition software stores the captured pictures by camera number and board position number, and a visualization program detects and visualizes the corners of the calibration board; the trolley then continues moving with the calibration board (for example, until it reaches the overlapping field of view of camera 2 and camera 3), and the above steps are repeated until data collection is complete.
  • Step S2 solve the initial relative pose between cameras
  • The intrinsic parameter matrix K of each camera, and the single-camera rotation-translation matrices (R, t) between each camera and the calibration plate at its different positions, are obtained by the Zhang Zhengyou calibration method.
  • R 12 = R 1 -1 · R 2 (1)
  • R 12 represents the rotation-translation matrix between the two cameras with an overlapping field of view, that is, between camera 1 and camera 2; R 1 and R 2 respectively represent the rotation-translation matrices between camera 1, or camera 2, and the calibration plate in their overlapping field of view.
  • R 13 = R 12 -1 · R 23 (2)
  • R 13 represents the rotation-translation matrix between two cameras with no overlapping field of view, that is, between camera 1 and camera 3; R 12 and R 23 are the matrices for the overlapping-field camera pairs computed with formula (1).
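Formulas (1) and (2) amount to composing rigid-body transforms. The sketch below assumes one specific convention, which is not spelled out in the text: T_i is the 4×4 homogeneous transform taking calibration-board coordinates into camera i's frame, as a Zhang-style calibration returns. Under that convention the camera-j-to-camera-i transform is T_i · T_j⁻¹, and a pair with no common field of view is bridged by composing the two overlapping pairs; where exactly the inverse sits in the printed formulas depends on the convention chosen for R_i.

```python
import numpy as np

def relative_pose(T_i, T_j):
    """Transform mapping camera j coordinates into camera i coordinates."""
    return T_i @ np.linalg.inv(T_j)

def rand_pose(rng):
    """Random board-to-camera transform for testing the composition rule."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.linalg.det(Q))   # force a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Q, rng.standard_normal(3)
    return T

rng = np.random.default_rng(0)
T1, T2, T3 = (rand_pose(rng) for _ in range(3))
T12, T23 = relative_pose(T1, T2), relative_pose(T2, T3)
# bridging camera 1 and camera 3 through camera 2: T13 = T12 @ T23
T13 = T12 @ T23
```

The composition T12 @ T23 collapses to T1 · T3⁻¹, i.e. the same relative pose that a direct common view of cameras 1 and 3 would have produced, which is the whole point of the chaining in step S2.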
  • In step S3, re-acquisition control is performed on the images, and the re-projection error is calculated using the initial relative poses between the cameras.
  • The shooting area of each camera picture is divided into regions. As shown in Figure 7, the image is divided into 9 regions. Because of perspective imaging effects, a camera always has high calibration accuracy in the regions where calibration data was acquired, but low calibration accuracy in regions without acquisitions.
  • The core idea of this embodiment is to make the calibrated positions of the trolley traverse these nine regions: in each region, the car is guaranteed to reach the designated area, and the camera then takes a picture.
  • With C cameras, each associated with p positions, at least C × p position pictures need to be collected.
  • In this example, p is set to 9.
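The 3×3 partition of Figure 7 can be sketched as a simple binning function: each detected board position is assigned to one of the nine regions so that coverage of all regions can be checked. The row-major 0..8 numbering is an assumption for illustration; the text only requires that the nine areas be distinguishable.

```python
def region_index(u, v, width, height):
    """Return the 0..8 region (3x3 grid, row-major) of pixel (u, v)."""
    col = min(int(3 * u / width), 2)    # clamp so the right edge stays in column 2
    row = min(int(3 * v / height), 2)   # clamp so the bottom edge stays in row 2
    return 3 * row + col
```

For an 800×600 picture, the image centre falls in region 4 and the bottom-right corner in region 8, so a coverage check reduces to collecting the set of region indices seen so far.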
  • the re-acquisition algorithm process includes:
  • Step S11: construct the complete position queue T from the regions and their labels in sequence.
  • Step S12: check the existing images and construct the queue K of already-acquired positions.
  • Step S13: compute the difference set L of the sets T and K.
  • Step S14: traverse the set L, search the camera pictures, obtain the current position of the car, and command the car to move forward or backward from its current position.
  • T represents the labels of all image blocks; for example, with nine blocks per image, ten pictures give ninety blocks.
  • Each time a picture is acquired, an element is added to the queue K, and the uncollected set L is updated.
  • This process is repeated until all data has been collected.
  • In this way, the new calibration plate image data D add is obtained.
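The bookkeeping of steps S11-S14 can be sketched as plain set arithmetic: build the full position queue T (every camera × every region), recover the acquired queue K from the stored images, and re-drive the trolley through the difference set L = T − K. The names follow the patent's T/K/L; the (camera, region) pair layout is an assumed data representation.

```python
def uncollected_positions(num_cameras, num_regions, acquired):
    """Return the (camera, region) pairs that still need a picture.

    acquired: iterable of (camera, region) pairs recovered by checking the
    existing images (step S12).
    """
    # step S11: the complete queue T, one entry per camera per region
    T = {(c, p) for c in range(1, num_cameras + 1) for p in range(num_regions)}
    K = set(acquired)          # step S12: positions already photographed
    return sorted(T - K)       # step S13: the set L, traversed in step S14

# e.g. 3 cameras x 9 regions = 27 required shots (the C x p count in the text)
```

Repeating the collection loop until this function returns an empty list is the termination condition "until all data is collected".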
  • P 3D is the 3-dimensional coordinate of a target point obtained from the calibration image.
  • P 2D is the 2-dimensional coordinate of the target point obtained from the single-camera calibration.
  • P 3D_2D is the 2-dimensional coordinate of the estimated point generated using the initial inter-camera RT matrix and the camera intrinsic matrix K.
  • R represents the rotation matrix.
  • t is the translation matrix.
  • P 3D_2D , the two-dimensional coordinate of the estimated point, is generated using the initial inter-camera RT matrix and the camera intrinsic matrix K; u and v denote the pixel coordinates on the image, and xw, yw, zw denote the three-dimensional coordinates.
  • Step S4: establish the attitude error over the global data and optimize it.
  • Because the calculated projection error is relatively large at this stage, the obtained relative poses between the multiple cameras are modeled globally, and the Levenberg-Marquardt algorithm is then used to iteratively optimize the re-projection error and improve the calibration accuracy.
  • The final relative poses between the cameras are thereby obtained.
  • f(x) is the two-dimensional coordinate of the generated estimated point P 3D_2D :
  • P 3D_2D = K[R,t]P 3D
  • y is the two-dimensional coordinate of the target point P 2D obtained by single-camera calibration. Since the picture resolution when collecting images in the present invention is 600×800, 0 ≤ f(x) ≤ 800; the specific range is determined by the image data format during actual calibration.
  • The Levenberg-Marquardt algorithm is used to optimize the projection error.
  • the execution process of the algorithm is as follows:
  • Step S21: give an initial value x 0 and an initial optimization radius μ.
  • Step S22: for the k-th iteration, solve min over Δx k of ½ ‖f(x k ) + J(x k )Δx k ‖ 2 subject to ‖DΔx k ‖ 2 ≤ μ, where μ is the radius of the confidence region and D is the coefficient matrix.
  • Step S27: judge whether the algorithm has converged; if not converged, return to step S22, otherwise end.
  • x k represents the relative pose data after k optimizations; x k+1 represents the relative pose data after k+1 optimizations; Δx k represents the correction to x k obtained in the (k+1)-th optimization; f(x k ) represents the two-dimensional coordinates of the estimated point after the k-th optimization; J[x k ] represents the first derivative of f(x k ) with respect to x; and D is the coefficient matrix.
  • The given initial value x 0 is the initial relative pose data R 12 between the first camera and the second camera.
  • The initial optimization radius μ can be set according to the actual situation.
  • If the approximation quality of the step is greater than the preset threshold, x k+1 = x k + Δx k . The re-projection error after this iterative optimization is then compared with a preset re-projection error threshold to judge whether the algorithm has converged: if the re-projection error after iterative optimization is less than or equal to the preset threshold, the iterative optimization is judged complete, that error is taken as the second re-projection error, and x k+1 is the relative pose data between the first camera and the second camera; if the re-projection error after iterative optimization is greater than the preset threshold, the (k+2)-th iterative optimization is performed.
  • The finally obtained x k+1 is the required multi-camera (R, t) matrix.
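The iteration of steps S21-S27 can be sketched numerically with a damped Gauss-Newton (Levenberg-Marquardt) loop: solve for the correction Δx, accept it only if the squared error drops, adjust the damping, and stop on convergence. The real system optimizes the re-projection residuals over the inter-camera (R, t) with x 0 = R 12; a small curve fit stands in below, since the point here is the iteration logic rather than the camera model, and the identity-damped normal equations are one common LM variant, not necessarily the exact one intended.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimize sum(f(x)**2) with an identity-damped Gauss-Newton loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):                      # step S22 onwards
        r, J = f(x), jac(x)
        # solve (J^T J + lam * I) dx = -J^T r for the correction dx
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)
        if np.sum(f(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5             # good step: accept, damp less
        else:
            lam *= 10.0                            # bad step: damp harder
        if np.linalg.norm(dx) < tol:               # step S27: convergence check
            break
    return x

# stand-in least-squares problem: fit y = a * exp(b * s) to noiseless samples
s = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * s)
f = lambda x: x[0] * np.exp(x[1] * s) - y
jac = lambda x: np.stack([np.exp(x[1] * s), x[0] * s * np.exp(x[1] * s)], axis=1)
x_hat = levenberg_marquardt(f, jac, [1.0, 0.0])    # x_hat approaches (2.0, -1.5)
```

The accept/reject step with the adaptive damping factor plays the role of the trust-region radius μ in the description above: shrinking the step when the quadratic approximation is poor, and behaving like plain Gauss-Newton near the optimum.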
  • The present invention uses the optimized relative poses between cameras to calculate the re-projection error again. It has been verified that, compared with the initial relative poses between cameras obtained in step S2, the optimized error is reduced to about 1/10 of the initial error, a large improvement in calibration accuracy.
  • The present invention uses the five-sided stereo calibration plate and the intelligent trolley, which greatly simplifies the calibration process and saves time and effort; it uses the movement of the trolley-borne calibration plate and the relationships among multiple cameras to solve the problem of cameras without overlapping fields of view; and with global modeling and optimization, the calibration accuracy is significantly improved. It can handle the case where cameras share no overlapping field of view, automates the whole calibration process, and obtains high calibration accuracy while being easy to operate.
  • The proposed multi-camera joint calibration system calibrates multiple image sensors installed on robots and thereby obtains the relative pose relationships between the robots.
  • the present invention may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • The computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented languages such as Smalltalk and C++, and conventional procedural languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • Custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may execute the computer-readable program instructions to implement various aspects of the present invention.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium and cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the medium on which the instructions are stored comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed there, producing a computer-implemented process such that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions comprising one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to an automatic calibration system and method based on visual guidance. The system comprises: a calibration vehicle, a rotation-control gimbal, a multi-faceted calibration target, a plurality of cameras, and a data acquisition control module. The multi-faceted calibration target is connected to the rotation-control gimbal, and the data acquisition control module is used to control the calibration vehicle to carry and move the multi-faceted calibration target, and to command the corresponding cameras to photograph the target once it has been moved into the overlapping field of view of two adjacent cameras, so as to obtain image data of the multi-faceted calibration target. The data acquisition control module stores the image data obtained by shooting according to the camera numbers and the position number of the multi-faceted calibration target, and detects corner points of the multi-faceted calibration target to obtain the relative poses between the cameras. By means of the present invention, the entire calibration process can be automated, the problem of cameras having no overlapping field of view can be solved, operation is simple and convenient, and relatively high calibration accuracy is also obtained.
PCT/CN2020/134521 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance WO2022120567A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134521 WO2022120567A1 (fr) 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134521 WO2022120567A1 (fr) 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance

Publications (1)

Publication Number Publication Date
WO2022120567A1 true WO2022120567A1 (fr) 2022-06-16

Family

ID=81973937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134521 WO2022120567A1 (fr) 2020-12-08 2020-12-08 Automatic calibration system based on visual guidance

Country Status (1)

Country Link
WO (1) WO2022120567A1 (fr)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897997A (zh) * 2022-07-13 2022-08-12 星猿哲科技(深圳)有限公司 Camera calibration method, apparatus, device, and storage medium
CN114964316A (zh) * 2022-07-27 2022-08-30 湖南科天健光电技术有限公司 Position and attitude calibration method and apparatus, and method and system for measuring a target
CN115170674A (zh) * 2022-07-20 2022-10-11 禾多科技(北京)有限公司 Camera principal-point calibration method, apparatus, device, and medium based on a single image
CN115345943A (zh) * 2022-08-08 2022-11-15 恩纳基智能科技无锡有限公司 Calibration method based on the differential-mode concept
CN115541611A (zh) * 2022-09-29 2022-12-30 武汉大学 Parameter calibration method and apparatus for a concrete-wall appearance image acquisition system
CN115564847A (zh) * 2022-11-17 2023-01-03 歌尔股份有限公司 Visual calibration method and apparatus for a visual assembly system, and storage medium
CN115830148A (zh) * 2023-02-23 2023-03-21 深圳佑驾创新科技有限公司 Calibration plate and calibration method
CN116060269A (zh) * 2022-12-08 2023-05-05 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped products
CN116071438A (zh) * 2023-03-06 2023-05-05 航天宏图信息技术股份有限公司 Incremental SfM method and apparatus for UAV RigCamera images
CN116228831A (zh) * 2023-05-10 2023-06-06 深圳市深视智能科技有限公司 Step-difference measurement method and system for earphone seams, correction method, and controller
CN116503493A (zh) * 2023-06-27 2023-07-28 季华实验室 Multi-camera calibration method, high-precision equipment, and computer-readable storage medium
CN117095065A (zh) * 2023-09-18 2023-11-21 合肥埃科光电科技股份有限公司 Calibration method, system, and device for a line-spectrum confocal displacement sensor
CN116912333B (zh) * 2023-09-12 2023-12-26 安徽炬视科技有限公司 Camera attitude self-calibration method based on a work-fence calibration rod
CN117830439A (zh) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Pose calibration method and apparatus for a multi-camera system
CN117876554A (zh) * 2024-03-12 2024-04-12 中南建筑设计院股份有限公司 Convex-hull-based minimum bounding-box calculation method and system for plate parts

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097300A (zh) * 2016-05-27 2016-11-09 西安交通大学 Multi-camera calibration method based on a high-precision motion platform
US20160343136A1 * 2014-01-27 2016-11-24 Xylon d.o.o. Data-processing system and method for calibration of a vehicle surround view system
CN107527336A (zh) * 2016-06-22 2017-12-29 北京疯景科技有限公司 Lens relative-position calibration method and apparatus
CN109118547A (zh) * 2018-11-01 2019-01-01 百度在线网络技术(北京)有限公司 Multi-camera joint calibration system and method
CN111758120A (zh) * 2019-10-18 2020-10-09 深圳市大疆创新科技有限公司 Calibration method and system for a camera apparatus, stereo calibration apparatus, and storage medium


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897997A (zh) * 2022-07-13 2022-08-12 星猿哲科技(深圳)有限公司 Camera calibration method, apparatus, device, and storage medium
CN115170674A (zh) * 2022-07-20 2022-10-11 禾多科技(北京)有限公司 Camera principal-point calibration method, apparatus, device, and medium based on a single image
CN115170674B (zh) * 2022-07-20 2023-04-14 禾多科技(北京)有限公司 Camera principal-point calibration method, apparatus, device, and medium based on a single image
CN114964316A (zh) * 2022-07-27 2022-08-30 湖南科天健光电技术有限公司 Position and attitude calibration method and apparatus, and method and system for measuring a target
CN114964316B (zh) * 2022-07-27 2022-11-01 湖南科天健光电技术有限公司 Position and attitude calibration method and apparatus, and method and system for measuring a target
CN115345943B (zh) * 2022-08-08 2024-04-16 恩纳基智能装备(无锡)股份有限公司 Calibration method based on the differential-mode concept
CN115345943A (zh) * 2022-08-08 2022-11-15 恩纳基智能科技无锡有限公司 Calibration method based on the differential-mode concept
CN115541611A (zh) * 2022-09-29 2022-12-30 武汉大学 Parameter calibration method and apparatus for a concrete-wall appearance image acquisition system
CN115541611B (zh) * 2022-09-29 2024-04-16 武汉大学 Parameter calibration method and apparatus for a concrete-wall appearance image acquisition system
CN115564847A (zh) * 2022-11-17 2023-01-03 歌尔股份有限公司 Visual calibration method and apparatus for a visual assembly system, and storage medium
CN116060269A (zh) * 2022-12-08 2023-05-05 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped products
CN115830148A (zh) * 2023-02-23 2023-03-21 深圳佑驾创新科技有限公司 Calibration plate and calibration method
CN116071438A (zh) * 2023-03-06 2023-05-05 航天宏图信息技术股份有限公司 Incremental SfM method and apparatus for UAV RigCamera images
CN116228831B (zh) * 2023-05-10 2023-08-22 深圳市深视智能科技有限公司 Step-difference measurement method and system for earphone seams, correction method, and controller
CN116228831A (zh) * 2023-05-10 2023-06-06 深圳市深视智能科技有限公司 Step-difference measurement method and system for earphone seams, correction method, and controller
CN116503493B (zh) * 2023-06-27 2023-10-20 季华实验室 Multi-camera calibration method, high-precision equipment, and computer-readable storage medium
CN116503493A (zh) * 2023-06-27 2023-07-28 季华实验室 Multi-camera calibration method, high-precision equipment, and computer-readable storage medium
CN116912333B (zh) * 2023-09-12 2023-12-26 安徽炬视科技有限公司 Camera attitude self-calibration method based on a work-fence calibration rod
CN117095065A (zh) * 2023-09-18 2023-11-21 合肥埃科光电科技股份有限公司 Calibration method, system, and device for a line-spectrum confocal displacement sensor
CN117095065B (zh) * 2023-09-18 2024-06-11 合肥埃科光电科技股份有限公司 Calibration method, system, and device for a line-spectrum confocal displacement sensor
CN117830439A (zh) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Pose calibration method and apparatus for a multi-camera system
CN117876554A (zh) * 2024-03-12 2024-04-12 中南建筑设计院股份有限公司 Convex-hull-based minimum bounding-box calculation method and system for plate parts
CN117876554B (zh) * 2024-03-12 2024-05-28 中南建筑设计院股份有限公司 Convex-hull-based minimum bounding-box calculation method and system for plate parts

Similar Documents

Publication Publication Date Title
WO2022120567A1 (fr) Automatic calibration system based on visual guidance
Asadi et al. Real-time image localization and registration with BIM using perspective alignment for indoor monitoring of construction
CN106408612B (zh) Machine vision system calibration
US20180066934A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
US9547802B2 (en) System and method for image composition thereof
CN110111388B (zh) Three-dimensional object pose parameter estimation method and visual device
CN110070564B (zh) Feature point matching method, apparatus, device, and storage medium
US10825249B2 (en) Method and device for blurring a virtual object in a video
JP6324025B2 (ja) Information processing apparatus and information processing method
Albl et al. Rolling shutter absolute pose problem with known vertical direction
CN109993798B (zh) Method, device, and storage medium for detecting motion trajectories with multiple cameras
CN104715479A (zh) Scene reproduction detection method based on augmented virtuality
CN111801198A (zh) Hand-eye calibration method, system, and computer storage medium
JP5781682B2 (ja) Method for aligning at least part of a first image with at least part of a second image using a collineation warp function
EP2984627A1 (fr) Multi-sensor camera recalibration
CN112669389A (zh) Automated calibration system based on visual guidance
Yan et al. Joint camera intrinsic and lidar-camera extrinsic calibration
CN109613974B (zh) AR home-furnishing experience method for large scenes
Zhou et al. Semi-dense visual odometry for RGB-D cameras using approximate nearest neighbour fields
Yahyanejad et al. Incremental, orthorectified and loop-independent mosaicking of aerial images taken by micro UAVs
CN111829522B (zh) Simultaneous localization and mapping method, computer device, and apparatus
US12020443B2 (en) Virtual production based on display assembly pose and pose error correction
US20230326098A1 (en) Generating a digital twin representation of an environment or object
Silveira Photogeometric direct visual tracking for central omnidirectional cameras
CN113709441A (zh) Scanning device, and camera pose determination method, apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964512

Country of ref document: EP

Kind code of ref document: A1