WO2024046390A1 - Marine equipment underwater damage three-dimensional reconstruction method based on combination of vision and imus - Google Patents

Marine equipment underwater damage three-dimensional reconstruction method based on combination of vision and IMUs

Info

Publication number
WO2024046390A1
Authority
WO
WIPO (PCT)
Prior art keywords
imu
underwater
laser sensor
platform
binocular camera
Prior art date
Application number
PCT/CN2023/115908
Other languages
French (fr)
Chinese (zh)
Inventor
王振民
迟鹏
廖海鹏
田济语
张芩
Original Assignee
华南理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华南理工大学
Publication of WO2024046390A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present invention relates to the technical field of underwater three-dimensional reconstruction, and more specifically, to a three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU.
  • Marine equipment such as ships, offshore oil and gas platforms, and offshore wind power installations is exposed over the long term to adverse factors such as large waves, a humid environment, seawater erosion, and collisions, and is prone to structural damage.
  • Traditional methods require returning to port for repairs or sending divers into the water for maintenance, which consumes considerable time and cost and also introduces many safety hazards.
  • Using an underwater mobile platform for damage localization solves these problems well: through the platform's autonomous positioning and three-dimensional reconstruction technology, a clear and accurate model of the underwater damage can be built, and the repair work can be completed in conjunction with an autonomous repair system.
  • Hydroacoustic positioning systems include ultra-short-baseline, short-baseline, and long-baseline positioning; the equipment is expensive and difficult to install.
  • Common SLAM methods are based on sonar or cameras. Sonar equipment is expensive and, being an acoustic method, has low resolution, making it better suited to deep-sea positioning.
  • Camera-based methods must overcome underwater optical refraction and are sensitive to lighting; when feature points are not distinct, positioning easily fails.
  • Current camera-based three-dimensional reconstruction involves two steps: camera distortion correction and three-dimensional reconstruction.
  • Distortion correction methods are based either on a single-viewpoint model or on calibration boards and auxiliary hardware.
  • The single-viewpoint model considers only the perspective model and ignores underwater refraction, resulting in lower accuracy.
  • Methods based on calibration boards and auxiliary hardware account for the underwater refraction model and are therefore more accurate.
  • Three-dimensional reconstruction obtains point clouds directly or indirectly from camera parameters and superimposes them using positioning data.
  • However, using cameras alone for underwater positioning yields low accuracy, which in turn reduces the reconstruction accuracy.
  • The purpose of the present invention is to provide a three-dimensional reconstruction method for underwater damage to marine equipment based on the fusion of vision and IMU; the method provides a high-precision three-dimensional reconstruction of the damaged area.
  • The reconstruction results can assist other equipment in autonomous repair and improve the operating efficiency of marine equipment.
  • A three-dimensional reconstruction method for underwater damage to marine equipment based on the fusion of vision and IMU, characterized in that it is realized by an underwater damage three-dimensional reconstruction system; the system includes an underwater mobile platform and a computing host; the underwater mobile platform includes the platform body and, mounted on it, a binocular camera, a platform IMU, a laser sensor, a laser drive system, a communication system, and a drainage system; each axis of the laser drive system carries a drive IMU; the communication system handles communication between the underwater mobile platform and the computing host;
  • the three-dimensional reconstruction method of underwater damage to marine equipment includes the following steps:
  • S4: Plan the drainage system trajectory; the drainage system drains the damaged area. Based on the extrinsic matrices obtained in S2, use the drive IMU data to determine the pose of the laser sensor, thereby achieving a precise three-dimensional reconstruction of the laser sensor data over the damaged area.
  • the S1 includes the following steps:
  • The binocular camera is fixed on the front end of the underwater mobile platform body, with its viewing direction angled 10° to 30° downward;
  • The platform IMU is fixed at the middle of the underwater mobile platform body, approximately at the platform's center of mass;
  • Move the underwater mobile platform so that the calibration board appears at various positions in the left and right camera fields of view of the binocular camera; record multiple sets of binocular image data; the communication system transmits these sets to the computing host; the computing host performs the calibration calculations, including the intrinsic calibration of the left and right cameras of the binocular camera, and the extrinsic calibration between the left and right cameras and the platform IMU.
  • Preferably, in S12:
  • the internal parameter calibration of the left and right cameras of the binocular camera refers to:
  • l represents the left camera
  • r represents the right camera
  • K l and K r respectively represent the left and right camera internal parameter matrices
  • f xl, f yl, f xr and f yr respectively represent the focal lengths of the left and right cameras along the x-axis and y-axis, expressed in pixels; (u 0l, v 0l) and (u 0r, v 0r) respectively represent the pixel coordinates of the principal points of the left and right camera image plane coordinate systems;
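As an illustrative sketch of how the calibrated intrinsic matrices K l and K r are used, the following builds a pinhole intrinsic matrix and projects a camera-frame point to pixel coordinates; all numeric parameter values are hypothetical, not calibration results from the patent:

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Pinhole intrinsic matrix K from pixel focal lengths and principal point."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def project(K, point_cam):
    """Project a 3-D point in the camera frame to pixel coordinates (u, v)."""
    x, y, z = point_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return u, v

# Hypothetical left-camera parameters: f_xl = f_yl = 800 px, principal point (640, 360).
K_l = intrinsic_matrix(800.0, 800.0, 640.0, 360.0)
u, v = project(K_l, (0.1, 0.05, 2.0))  # a point 2 m in front of the camera
```

The same construction would be repeated with the right camera's f xr, f yr, (u 0r, v 0r) to obtain K r.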
  • the external parameter calibration of the left and right cameras of the binocular camera and the platform IMU refers to:
  • S2, aligning the drive IMU coordinate system and the laser sensor coordinate system on each axis of the drive system, refers to:
  • The conversion relationship between the underwater mobile platform's center-of-mass coordinate system and the drainage system coordinate system, and between the drainage system coordinate system and the laser drive system coordinate system, are obtained;
  • The communication system connects the laser sensor and the drive IMU, acquires their data, and sends it to the computing host;
  • The computing host completes the calibration calculation and obtains the conversion relationship between the laser drive system coordinate system and the laser sensor coordinate system;
  • The method for aligning the four coordinate systems of the laser sensor, the laser drive system, the drainage system, and the underwater mobile platform center of mass is:
  • A and B represent two coordinate systems respectively, X represents a 4×4 extrinsic matrix, R represents a 3×3 rotation matrix, and T represents a 3×1 translation vector.
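The chaining of the four coordinate systems through 4×4 extrinsic matrices can be sketched as follows; the rotations and translations below are hypothetical placeholders, not calibrated values:

```python
import numpy as np

def extrinsic(R, T):
    """Assemble a 4x4 homogeneous transform X from rotation R and translation T."""
    X = np.eye(4)
    X[:3, :3] = R
    X[:3, 3] = T
    return X

# Hypothetical transforms between adjacent frames:
# laser sensor -> laser drive system -> drainage system -> platform centroid.
R_z90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])            # 90 deg rotation about z
X_sensor_to_drive = extrinsic(R_z90, [0.10, 0.0, 0.0])
X_drive_to_drain  = extrinsic(np.eye(3), [0.0, 0.20, 0.0])
X_drain_to_body   = extrinsic(np.eye(3), [0.0, 0.0, 0.30])

# Composing the calibrated transforms aligns all four frames:
X_sensor_to_body = X_drain_to_body @ X_drive_to_drain @ X_sensor_to_drive
p_sensor = np.array([1.0, 0.0, 0.0, 1.0])       # homogeneous point in the sensor frame
p_body = X_sensor_to_body @ p_sensor            # same point in the centroid frame
```

Each X here plays the role of one calibrated extrinsic matrix between frames A and B; composition order follows the physical mounting chain.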
  • the attitude observation in the platform IMU coordinate system refers to:
  • V k+1 = V k + aΔt
  • V k and V k+1 are the velocities at times k and k+1 respectively; a is the acceleration; Δt is the time interval; T k and T k+1 are the translation vectors at times k and k+1 respectively; R k and R k+1 are the rotation matrices at times k and k+1 respectively; ω is the angular velocity; ⊗ denotes the Kronecker product.
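The velocity and translation integration above can be sketched in NumPy as follows; the sample rate and acceleration values are hypothetical, and the rotation update from the angular velocity ω is omitted for brevity:

```python
import numpy as np

def imu_step(V_k, T_k, a, dt):
    """One dead-reckoning step:
    V_{k+1} = V_k + a*dt
    T_{k+1} = T_k + V_k*dt + 0.5*a*dt^2
    """
    V_k, T_k, a = map(np.asarray, (V_k, T_k, a))
    V_next = V_k + a * dt
    T_next = T_k + V_k * dt + 0.5 * a * dt ** 2
    return V_next, T_next

# Hypothetical accelerometer stream: 100 Hz, constant 0.1 m/s^2 along x.
V, T = np.zeros(3), np.zeros(3)
for _ in range(100):                 # integrate over 1 s
    V, T = imu_step(V, T, a=[0.1, 0.0, 0.0], dt=0.01)
```

After one second of constant acceleration the integrated state is V_x = 0.1 m/s and T_x = 0.05 m, matching the closed-form kinematics.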
  • attitude observation in the binocular camera coordinate system refers to:
  • C represents the centroid of the circular area
  • represents the direction vector of the feature point
  • m pq represents the moment of the circular area
  • R represents the radius of the circular area
  • x, y represent the x-axis and y-axis coordinates
  • I(x, y) represents the grayscale function
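A minimal sketch of the moment-based centroid and feature-direction computation over a grayscale patch; the toy 3×3 patch and its values are hypothetical:

```python
import numpy as np

def moment(patch, p, q):
    """Image moment m_pq = sum over x, y of x^p * y^q * I(x, y)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.sum((xs ** p) * (ys ** q) * patch)

def centroid_direction(patch):
    """Centroid C = (m10/m00, m01/m00) of the area, and the feature
    direction angle taken as atan2(m01, m10)."""
    m00 = moment(patch, 0, 0)
    m10 = moment(patch, 1, 0)
    m01 = moment(patch, 0, 1)
    C = (m10 / m00, m01 / m00)
    theta = np.arctan2(m01, m10)
    return C, theta

# Toy grayscale patch: all intensity concentrated at pixel (x=1, y=2).
patch = np.zeros((3, 3))
patch[2, 1] = 1.0
C, theta = centroid_direction(patch)
```

With all intensity at one pixel the centroid coincides with that pixel, which makes the moment bookkeeping easy to verify by hand.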
  • the three-dimensional point cloud generation in the binocular camera coordinate system refers to:
  • x, y, d are the x-axis coordinate, y-axis coordinate, and disparity respectively; i, j are the offsets in the x-axis and y-axis directions respectively; m, n are the maximum offsets in the x-axis and y-axis directions respectively; I 1 (x, y) and I 2 (x, y) represent the grayscale functions of the left and right images;
  • Three-dimensional point cloud data is generated from the disparity and the original pixel coordinates.
  • the three-dimensional coordinates are expressed as Z = fB/d, X = xZ/f, Y = yZ/f, where:
  • B is the baseline length
  • f is the camera focal length
  • d is the disparity between the left and right images
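A minimal sketch of recovering three-dimensional coordinates from disparity, assuming the standard stereo relation Z = fB/d; the focal length, baseline, and pixel values below are hypothetical:

```python
import numpy as np

def disparity_to_point(x, y, d, f, B, u0=0.0, v0=0.0):
    """Recover camera-frame 3-D coordinates from pixel (x, y) and disparity d:
    Z = f*B/d, X = (x - u0)*Z/f, Y = (y - v0)*Z/f."""
    Z = f * B / d
    X = (x - u0) * Z / f
    Y = (y - v0) * Z / f
    return np.array([X, Y, Z])

# Hypothetical rig: f = 800 px, baseline B = 0.12 m, principal point at the origin.
p = disparity_to_point(x=80.0, y=40.0, d=16.0, f=800.0, B=0.12)
```

With d = 16 px this gives Z = 800 × 0.12 / 16 = 6 m, and the lateral coordinates scale with depth; applying this to every matched pixel yields the frame's point cloud.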
  • S4 refers to: planning a motion trajectory for the underwater mobile platform according to the location of the damaged area, so that the drainage system covers the damaged area and drains the water to form a dry space; the laser drive system then controls the laser sensor to perform three-dimensional scanning in the dry space;
  • the laser sensor data and driving IMU data are transmitted to the computing host through the communication system;
  • the computing host uses the drive IMU data, together with the extrinsic matrices obtained in S2, to compute the attitude of the laser drive system and, by transformation, the pose of the laser sensor;
  • the transformed point cloud data yields a precise three-dimensional reconstruction of the laser sensor data; damage locations are then detected from the reconstruction results.
  • the three-dimensional reconstruction of laser sensor data refers to:
  • the laser sensor emits laser pulses at a fixed frequency; the receiver receives the returned reflected light to determine the distance, and the target material is roughly distinguished from the reflection intensity.
  • the distance is computed as L = c·t/2, where:
  • L is the target distance
  • t is the round-trip time of the pulse
  • c is the speed of light
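The pulsed time-of-flight ranging can be sketched as follows, assuming t is the round-trip time and that the drained dry space lets the vacuum/air speed of light apply:

```python
def tof_distance(t, c=299_792_458.0):
    """Pulsed time-of-flight range: L = c * t / 2 for round-trip time t (seconds)."""
    return c * t / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of range.
L = tof_distance(10e-9)
```

In an undrained (water-filled) path the propagation speed would be lower by the refractive index of water, which is one motivation for scanning inside the dry space.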
  • the driving IMU is used to predict the pose of the laser sensor, and then the three-dimensional reconstruction result of the laser sensor is obtained through the rotation matrix R and the translation vector T.
  • the present invention has the following advantages and beneficial effects:
  • the present invention can autonomously detect and three-dimensionally reconstruct underwater damage to marine equipment, reducing labor and economic costs while improving safety;
  • the present invention improves positioning accuracy and underwater three-dimensional reconstruction accuracy through the fusion of vision and IMU.
  • the damage detection method based on image and point cloud fusion verification can more accurately locate the damage location of underwater marine equipment;
  • the water near the damaged area can be accurately evacuated, thereby achieving high-precision laser three-dimensional reconstruction and damage identification, and also providing convenience for other autonomous repair equipment.
  • Figure 1 is a schematic flow chart of the three-dimensional reconstruction method of underwater damage of marine equipment based on the fusion of vision and IMU according to the present invention
  • Figure 2 is a schematic structural diagram of the underwater damage three-dimensional reconstruction system used in the present invention.
  • Figure 3 is a communication schematic diagram of the three-dimensional reconstruction method of underwater damage of marine equipment based on the fusion of vision and IMU according to the present invention
  • Figure 4 is a schematic diagram of the coordinate system transformation in the three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to the present invention.
  • This embodiment is a three-dimensional reconstruction method for underwater damage of marine equipment based on the fusion of vision and IMU. The specific process is shown in Figure 1, which is implemented through a three-dimensional underwater damage reconstruction system.
  • The underwater damage three-dimensional reconstruction system includes an underwater mobile platform and a computing host, as shown in Figure 2; the underwater mobile platform includes an underwater mobile platform body 1, and a binocular camera 4, a platform IMU 6, a laser sensor 5, a laser drive system 3, a communication system, and a drainage system 2 mounted on the body 1.
  • Each axis of the laser driving system 3 is provided with a driving IMU respectively;
  • The communication system is used for communication between the underwater mobile platform and the computing host. Specifically, it is fixed in the middle of the underwater mobile platform body, sends the acquired binocular image data and IMU data to the computing host, and receives control instructions from the computing host to drive the underwater mobile platform.
  • the three-dimensional reconstruction method of underwater damage to marine equipment includes the following steps:
  • the computing host first collects the binocular camera image data and the platform IMU data; and reads the previous internal and external parameter calibration results; then fuses the platform IMU data and the binocular camera image data to obtain the positioning result in the left camera coordinate system.
  • the damage information is detected in the left camera coordinate system and a three-dimensional point cloud of the current frame image is generated; the positioning results and the three-dimensional point cloud information are combined to filter and superimpose the point cloud of each frame to generate a continuous three-dimensional reconstruction.
  • the damage locations detected by the binocular camera are verified against the three-dimensionally reconstructed point cloud;
  • control signals are issued to the underwater mobile platform's communication system through the communication bus; during movement, local obstacle avoidance is performed based on the three-dimensional information saved from the real-time reconstruction results, until the underwater mobile platform reaches the damaged area.
  • A motion trajectory is planned for the underwater mobile platform so that the drainage system covers the damaged area and drains the water to form a dry space; the laser drive system controls the laser sensor to perform three-dimensional scanning in the dry space; the laser sensor data and the drive IMU data are transmitted to the computing host through the communication system; the computing host uses the drive IMU data, together with the extrinsic matrices obtained in S2, to compute the attitude of the laser drive system and, by transformation, the pose of the laser sensor.
  • the driving IMU is used to predict the pose of the laser sensor, and then the three-dimensional reconstruction result of the laser sensor is obtained through the rotation matrix R and the translation vector T. At this time, precise damage location detection can be achieved, and the error can be controlled within 0.2mm, providing high-precision positioning results for other autonomous repair equipment.
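Superimposing a laser scan with the IMU-predicted rotation R and translation T can be sketched as follows; the scan points and pose values are hypothetical:

```python
import numpy as np

def transform_cloud(points, R, T):
    """Map an N x 3 laser scan into the reconstruction frame: p' = R @ p + T,
    where R and T come from the drive-IMU pose prediction."""
    return points @ np.asarray(R).T + np.asarray(T)

# Hypothetical IMU-predicted pose: 90 deg rotation about z plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.5, 0.0, 0.1])

scan = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])   # two sample laser returns (sensor frame)
merged = transform_cloud(scan, R, T)  # points in the common reconstruction frame
```

Accumulating each scan this way, with R and T updated per pose, produces the merged reconstruction in which damage locations are measured.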

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Provided is a marine equipment underwater damage three-dimensional reconstruction method based on a combination of vision and IMUs, which comprises the following steps: in an underwater environment, calibrating the intrinsic parameters of the left and right cameras of a binocular camera, and calibrating the extrinsic parameters between the left and right cameras and a platform IMU; calibrating the extrinsic matrix between the laser sensor coordinate system and the drive IMU coordinate system on each axis of the drive system; performing damaged-area identification and coarse positioning on the basis of underwater visual three-dimensional reconstruction; planning an optimal path for the underwater mobile platform, performing local obstacle avoidance on the basis of the three-dimensional reconstruction point cloud, and controlling the underwater mobile platform to move to the vicinity of the damaged area; planning a drainage system trajectory and draining the water with the drainage system; and using the drive IMU data to determine the pose of the laser sensor, thereby implementing a fine three-dimensional reconstruction of the laser sensor data over the damaged area. The method provides a high-precision three-dimensional reconstruction result of the damaged area, which can assist other devices in performing autonomous repair and improve the working efficiency of marine equipment.

Description

Three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU

Technical Field

The present invention relates to the technical field of underwater three-dimensional reconstruction, and more specifically, to a three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU.

Background
Marine equipment such as ships, offshore oil and gas platforms, and offshore wind power installations is exposed over the long term to adverse factors such as large waves, a humid environment, seawater erosion, and collisions, and is prone to structural damage. Traditional methods require returning to port for repairs or sending divers into the water for maintenance, which consumes considerable time and cost and also introduces many safety hazards. Using an underwater mobile platform for damage localization solves these problems well: through the platform's autonomous positioning and three-dimensional reconstruction technology, a clear and accurate model of the underwater damage can be built, and the repair work can be completed in conjunction with an autonomous repair system.

Currently common underwater positioning systems include hydroacoustic positioning systems and underwater SLAM methods. Hydroacoustic positioning systems include ultra-short-baseline, short-baseline, and long-baseline positioning; the equipment is expensive and difficult to install. Common SLAM methods are based on sonar or cameras. Sonar equipment is expensive and, being an acoustic method, has low resolution, making it better suited to deep-sea positioning; camera-based methods must overcome underwater optical refraction and are sensitive to lighting, so positioning easily fails when feature points are not distinct.

Current camera-based three-dimensional reconstruction involves two steps: camera distortion correction and three-dimensional reconstruction. Distortion correction methods are based either on a single-viewpoint model or on calibration boards and auxiliary hardware. The single-viewpoint model considers only the perspective model and ignores underwater refraction, resulting in lower accuracy; methods based on calibration boards and auxiliary hardware account for the underwater refraction model and are therefore more accurate. Three-dimensional reconstruction obtains point clouds directly or indirectly from camera parameters and superimposes them using positioning data; however, using cameras alone for underwater positioning yields low accuracy, which in turn reduces the reconstruction accuracy.
Summary of the Invention

In order to overcome the shortcomings and deficiencies of the existing technology, the purpose of the present invention is to provide a three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU; the method provides a high-precision three-dimensional reconstruction of the damaged area, which can assist other equipment in autonomous repair and improve the operating efficiency of marine equipment.
为了达到上述目的,本发明通过下述技术方案予以实现:一种基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:通过水下损伤三维重建***来实现;所述水下损伤三维重建***包括水下移动平台和计算主机;水下移动平台包括水下移动平台本体,以及搭载在水下移动平台本体上的双目相机、平台IMU、激光传感器、激光驱动***、通信***和排水***;激光驱动***的各轴上分别设置有驱动IMU;通信***用于水下移动平台与计算主机之间的通信;In order to achieve the above objectives, the present invention is realized through the following technical solutions: a three-dimensional reconstruction method of underwater damage of marine equipment based on the fusion of vision and IMU, which is characterized in that: it is realized by an underwater damage three-dimensional reconstruction system; the underwater damage The three-dimensional damage reconstruction system includes an underwater mobile platform and a computing host; the underwater mobile platform includes the underwater mobile platform body, as well as binocular cameras, platform IMU, laser sensors, laser drive systems, and communication systems mounted on the underwater mobile platform body. and drainage system; each axis of the laser drive system is equipped with a drive IMU; the communication system is used for communication between the underwater mobile platform and the computing host;
The method for three-dimensional reconstruction of underwater damage to marine equipment comprises the following steps:

S1. Fix the binocular camera and the platform IMU to the underwater mobile platform; in the underwater environment, calibrate the intrinsic parameters of the left and right cameras of the binocular camera, and calibrate the extrinsic parameters between the left and right cameras of the binocular camera and the platform IMU;

S2. Fix the laser sensor at the end of the drive system; calibrate the extrinsic parameter matrices between the drive-IMU coordinate system on each axis of the drive system and the laser-sensor coordinate system;

S3. Collect binocular camera image data and platform IMU data; integrate the acceleration and angular-velocity data of the platform IMU to obtain position and attitude observations in the platform-IMU coordinate system; for each corresponding frame of binocular image data, perform pose observation, damage detection, and three-dimensional point-cloud generation in the binocular-camera coordinate system; fuse the pose observations and superimpose the successive three-dimensional point clouds to obtain an underwater three-dimensional reconstructed point cloud, which is used to verify the damage-detection results; based on the pose observations and the damage-detection results, plan the optimal path for the underwater mobile platform, perform local obstacle avoidance based on the reconstructed point cloud, and control the underwater mobile platform to move to the vicinity of the damage region;

S4. Plan the drainage-system trajectory, the drainage system draining the damage region; using the extrinsic matrices obtained in S2 and the drive-IMU data, determine the position of the laser emitted by the laser sensor, thereby achieving a fine three-dimensional reconstruction of the damage region from the laser-sensor data.
Preferably, S1 comprises the following steps:

S11. The binocular camera is fixed at the front end of the underwater mobile platform body, with its viewing direction tilted downward by 10° to 30°; the platform IMU is fixed at the middle of the platform body, approximately at the platform's center of mass;

S12. Place the calibration board and the underwater mobile platform underwater at the same time, so that the board appears simultaneously in the fields of view of the left and right cameras of the binocular camera;

Move the underwater mobile platform so that the calibration board appears at various positions across the left and right camera fields of view; record multiple groups of binocular image data; the communication system transmits the image groups to the computing host, which performs the calibration computations: the intrinsic calibration of the left and right cameras, and the extrinsic calibration between the left and right cameras of the binocular camera and the platform IMU.
Preferably, in S12, the intrinsic calibration of the left and right cameras of the binocular camera refers to:

$$K_l=\begin{bmatrix} f_{xl} & 0 & u_{0l} \\ 0 & f_{yl} & v_{0l} \\ 0 & 0 & 1 \end{bmatrix},\qquad K_r=\begin{bmatrix} f_{xr} & 0 & u_{0r} \\ 0 & f_{yr} & v_{0r} \\ 0 & 0 & 1 \end{bmatrix}$$

where l denotes the left camera and r the right camera; $K_l$ and $K_r$ are the intrinsic matrices of the left and right cameras; $f_{xl}$, $f_{yl}$, $f_{xr}$, $f_{yr}$ are the focal lengths of the left and right cameras along the x-axis and y-axis, expressed in pixels; $(u_{0l},v_{0l})$ and $(u_{0r},v_{0r})$ are the actual pixel coordinates of the principal points of the left and right image-plane coordinate systems.
The extrinsic calibration between the left and right cameras of the binocular camera and the platform IMU refers to:

Taking the platform-IMU coordinate system as the world coordinate system, the conversion from the left and right camera image points to the platform-IMU coordinate system is

$$P^i = R_{ri}\left(s_l K_l^{-1}\tilde p_l\right)+T_{ri} = R_{ri}\left(R_{lr}\,s_r K_r^{-1}\tilde p_r+T_{lr}\right)+T_{ri}$$

where $\tilde p_l=(u_l,v_l,1)^T$ and $\tilde p_r=(u_r,v_r,1)^T$ are the homogeneous forms of the two-dimensional coordinates in the left and right camera coordinate systems, and $s_l$, $s_r$ the corresponding depth scale factors; $P^i$ is the three-dimensional coordinate in the platform-IMU coordinate system; $R_{lr}$ and $R_{ri}$ are the 3×3 rotation matrices from the right camera to the left camera and from the left camera to the platform-IMU coordinate system, respectively; $T_{lr}$ and $T_{ri}$ are the corresponding 3×1 translation vectors.
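As an illustration of how the calibrated intrinsics and extrinsics fit together, the sketch below builds the intrinsic matrices above and projects a 3D point from the platform-IMU frame into both image planes. All numeric values (focal lengths, principal points, the 12 cm baseline, the camera-to-IMU offset) are invented placeholders, not calibration results from this invention.

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Pinhole intrinsic matrix K with focal lengths and principal point in pixels."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def project(K, P_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    u, v, w = K @ P_cam
    return np.array([u / w, v / w])

# Illustrative (uncalibrated) parameters
K_l = intrinsic_matrix(800.0, 800.0, 320.0, 240.0)
K_r = intrinsic_matrix(805.0, 805.0, 318.0, 242.0)
R_ri, T_ri = np.eye(3), np.array([0.10, 0.00, 0.05])  # left camera -> platform IMU
R_lr, T_lr = np.eye(3), np.array([0.12, 0.00, 0.00])  # right camera -> left camera (12 cm baseline)

P_imu = np.array([0.5, 0.2, 2.0])     # 3D point in the platform-IMU (world) frame
P_left = R_ri.T @ (P_imu - T_ri)      # same point in the left-camera frame
P_right = R_lr.T @ (P_left - T_lr)    # and in the right-camera frame

uv_l = project(K_l, P_left)
uv_r = project(K_r, P_right)
```

Inverting the same chain with a known depth recovers the IMU-frame point from an image point, which is the conversion relationship stated above.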
Preferably, in S2, aligning the drive-IMU coordinate system on each axis of the drive system with the laser-sensor coordinate system refers to:

From the positional relationships among the binocular camera, the drainage system, and the laser drive system, obtain the transformation from the platform center-of-mass coordinate system to the drainage-system coordinate system and the transformation from the drainage-system coordinate system to the laser-drive-system coordinate system;

Drive the laser spot of the laser sensor across a calibration board with known parameters; the communication system reads the laser sensor and the drive IMUs and forwards the data to the computing host, which completes the calibration computation and obtains the transformation between the laser-drive-system coordinate system and the laser-sensor coordinate system;

Thereby the four coordinate systems of the laser sensor, the laser drive system, the drainage system, and the platform center of mass are aligned.
Preferably, the method of aligning the four coordinate systems of the laser sensor, the laser drive system, the drainage system, and the platform center of mass is:

Calibrate the extrinsic matrix between any two of these coordinate systems, comprising a rotation matrix and a translation vector:

$$X_{AB}=\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}$$

where A and B denote the two coordinate systems, X is the 4×4 extrinsic matrix, R is the 3×3 rotation matrix, and T is the 3×1 translation vector.
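Once each pairwise 4×4 extrinsic matrix is known, aligning all four frames reduces to matrix chaining. A minimal sketch, with hypothetical rotation and translation values standing in for actual calibration results:

```python
import numpy as np

def homogeneous(R, T):
    """Build the 4x4 extrinsic matrix X from rotation R (3x3) and translation T (3x1)."""
    X = np.eye(4)
    X[:3, :3] = R
    X[:3, 3] = T
    return X

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical pairwise calibration results (placeholders)
X_laser_to_drive = homogeneous(rot_z(np.pi / 2), [0.00, 0.03, 0.10])
X_drive_to_drain = homogeneous(np.eye(3),        [0.20, 0.00, 0.00])
X_drain_to_body  = homogeneous(np.eye(3),        [0.00, 0.00, -0.15])

# Chaining the pairwise extrinsics aligns the laser frame with the platform body
X_laser_to_body = X_drain_to_body @ X_drive_to_drain @ X_laser_to_drive

p_laser = np.array([0.01, 0.02, 0.50, 1.0])  # homogeneous point in the laser frame
p_body = X_laser_to_body @ p_laser           # same point in the platform frame
```

The same composition works in any order, so any two of the four frames can be related by multiplying the calibrated pairwise matrices along the path between them.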
Preferably, in S3, the position and attitude observation in the platform-IMU coordinate system refers to:

The velocity V, translation vector T, and rotation matrix R obtained by integrating the platform-IMU data from time k to time k+1 are expressed as:

$$V_{k+1}=V_k+a\,\Delta t$$

$$T_{k+1}=T_k+V_k\,\Delta t+\tfrac{1}{2}a\,\Delta t^2$$

$$R_{k+1}=R_k\otimes\exp(\omega\,\Delta t)$$

where $V_k$, $V_{k+1}$ are the velocities at times k and k+1; a is the acceleration; Δt is the time interval; $T_k$, $T_{k+1}$ are the translation vectors at times k and k+1; $R_k$, $R_{k+1}$ are the rotation matrices at times k and k+1; ω is the angular velocity; and ⊗ denotes the Kronecker product.
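The integration above can be sketched in a few lines. The rotation update here uses Rodrigues' formula for the incremental rotation, which is one common reading of the $R_{k+1}$ update; it is a sketch of dead reckoning, not the invention's exact implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix v^ such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(w, dt):
    """Rotation matrix for the rotation vector w*dt (Rodrigues' formula)."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    A = skew(np.asarray(w) * dt / theta)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def imu_step(V, T, R, a, w, dt):
    """One dead-reckoning step from time k to k+1."""
    V1 = V + a * dt                         # V_{k+1} = V_k + a*dt
    T1 = T + V * dt + 0.5 * a * dt * dt     # T_{k+1} = T_k + V_k*dt + a*dt^2/2
    R1 = R @ exp_so3(w, dt)                 # incremental rotation by w*dt
    return V1, T1, R1

# One step from rest: constant acceleration along z, 90 deg/s rotation about z
V0, T0, R0 = np.zeros(3), np.zeros(3), np.eye(3)
a, w = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, np.pi / 2])
V1, T1, R1 = imu_step(V0, T0, R0, a, w, dt=1.0)
```

In practice these increments drift, which is exactly why the method fuses them with the visual pose observations below.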
Preferably, in S3, the pose observation in the binocular-camera coordinate system refers to:

Extract feature points from the binocular image data and construct a circular region centered on each feature point:

$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$$

$$\theta=\arctan\left(m_{01}/m_{10}\right)$$

where C is the centroid of the circular region, θ is the orientation of the feature point, and $m_{pq}$ is the moment of the circular region, defined as:

$$m_{pq}=\sum_{x,y\in R}x^p y^q I(x,y)$$

where R is the radius of the circular region; x and y are the x-axis and y-axis coordinates; and I(x, y) is the grayscale function.

By extracting and matching feature points across successive frames of binocular image data, a PnP problem is formulated from the matched pixels, yielding the rotation matrix R and translation vector T of the binocular camera.
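The intensity-centroid orientation can be transcribed directly (using the quadrant-aware arctan2 in place of arctan) and checked on a synthetic patch; the image here is an illustrative construction, not data from the invention.

```python
import numpy as np

def patch_orientation(I, cx, cy, radius):
    """Centroid and orientation of the circular patch around (cx, cy):
    m_pq = sum x^p y^q I(x, y), theta = atan2(m01, m10)."""
    m00 = m10 = m01 = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:
                continue  # stay inside the circular region
            v = float(I[cy + dy, cx + dx])
            m00 += v
            m10 += dx * v
            m01 += dy * v
    centroid = (m10 / m00, m01 / m00)
    theta = np.arctan2(m01, m10)
    return centroid, theta

# A patch brighter on its +x side should point along +x (theta close to 0)
img = np.zeros((21, 21))
img[:, 11:] = 1.0
c, theta = patch_orientation(img, 10, 10, 5)
```

This orientation makes the descriptor rotation-invariant, which keeps matching stable while the underwater platform rolls and pitches.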
Preferably, in S3, the three-dimensional point-cloud generation in the binocular-camera coordinate system refers to:

Perform the above feature-point extraction and matching on the left and right images of the same binocular frame, and compute the disparity with the accumulated-squared-grayscale-error algorithm:

$$E(x,y,d)=\sum_{i=-m}^{m}\sum_{j=-n}^{n}\left[I_1(x+i,\,y+j)-I_2(x+i-d,\,y+j)\right]^2$$

where x, y, d are the x-axis coordinate, y-axis coordinate, and disparity, respectively; i and j are the offsets along the x-axis and y-axis; m and n are the maximum offsets along the x-axis and y-axis; and $I_1(x,y)$, $I_2(x,y)$ are the grayscale functions of the two images.

Three-dimensional point-cloud data are then generated from the disparity and the original coordinates; the three-dimensional coordinates are expressed as:

$$X=\frac{x_l\,D}{f_x},\qquad Y=\frac{y_l\,D}{f_y},\qquad Z=D$$

where $x_l$, $x_r$ are the abscissa values in the left and right cameras; $y_l$, $y_r$ the ordinate values; $f_x$, $f_y$ the corresponding focal lengths from the camera intrinsics; X, Y, Z the three-dimensional coordinates; and D the depth value, which can be computed as:

$$D=Bf/d$$

where B is the baseline length, f is the camera focal length, and $d=x_l-x_r$ is the disparity between the left and right images.
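A brute-force sketch of the squared-grayscale-error disparity search and the D = Bf/d depth formula, run on a synthetic rectified pair shifted by a known disparity; the baseline, focal length, and images are invented test values.

```python
import numpy as np

def ssd_disparity(I1, I2, x, y, window, max_d):
    """Disparity minimizing E(x,y,d) = sum_ij (I1(x+i,y+j) - I2(x+i-d,y+j))^2."""
    best_d, best_e = 0, float("inf")
    for d in range(max_d + 1):
        if x - window - d < 0:
            break  # window would leave the right image
        e = 0.0
        for j in range(-window, window + 1):
            for i in range(-window, window + 1):
                diff = float(I1[y + j, x + i]) - float(I2[y + j, x + i - d])
                e += diff * diff
        if e < best_e:
            best_d, best_e = d, e
    return best_d

def depth_from_disparity(B, f, d):
    """D = B*f/d for baseline B, focal length f (in pixels), disparity d."""
    return B * f / d

rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.roll(left, -3, axis=1)  # right view: features shifted 3 px to the left

d = ssd_disparity(left, right, x=15, y=10, window=3, max_d=8)
depth = depth_from_disparity(B=0.12, f=800.0, d=d)
```

Real matchers add subpixel interpolation and left-right consistency checks, but the cost function is the one given above.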
Preferably, S4 refers to: according to the position of the damage region, plan a motion trajectory for the underwater mobile platform so that the drainage system covers the damage region and expels the water to form a dry space; use the laser drive system to steer the laser sensor through a three-dimensional scan of the dry space; the laser-sensor data and the drive-IMU data are passed to the computing host through the communication system; using the extrinsic matrices obtained in S2, the computing host derives the attitude of the laser drive system from the drive-IMU data, transforms it into the position of the laser sensor, and combines the sensor position with the point-cloud data to obtain a fine three-dimensional reconstruction from the laser-sensor data; the damage position is detected from the reconstruction result.
Preferably, in S4, the three-dimensional reconstruction from laser-sensor data refers to:

The laser sensor emits laser pulses at a fixed frequency; the receiver measures the returned reflected light to determine the distance, while the reflection intensity gives a rough indication of the target material. The ranging formula is:

$$L=tc/2$$

where L is the target distance, t is the round-trip time, and c is the speed of light.

The drive IMUs are used to predict the pose of the laser sensor, and the rotation matrix R and translation vector T then yield the three-dimensional reconstruction from the laser-sensor data.
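The time-of-flight formula and the pose transform can be sketched as follows; the echo time and the pose (R, T) are placeholder values, not drive-IMU predictions from the invention.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(t):
    """Target distance L = t*c/2 from the round-trip time t."""
    return t * C / 2.0

def laser_points_to_world(points, R, T):
    """Map laser-frame points into the reconstruction frame using the
    sensor pose (R, T) predicted from the drive IMUs."""
    return points @ R.T + T

t = 2.0 * 1.5 / C                  # echo of a target 1.5 m away
L = tof_range(t)
scan = np.array([[0.0, 0.0, L]])   # a single beam along the sensor z-axis
cloud = laser_points_to_world(scan, np.eye(3), np.array([0.0, 0.0, 0.2]))
```

Accumulating `cloud` over the scan sweep, one pose per pulse, is what produces the fine reconstruction of the drained damage region.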
Compared with the prior art, the present invention has the following advantages and beneficial effects:

1. The invention can autonomously detect and three-dimensionally reconstruct underwater damage to marine equipment, reducing labor and economic costs while improving safety;

2. The invention improves positioning accuracy and underwater three-dimensional reconstruction accuracy through vision-IMU fusion; the damage-detection method, cross-checked between images and point clouds, locates damage on underwater marine equipment more accurately;

3. The invention accurately evacuates the water near the damage region, enabling high-precision laser three-dimensional reconstruction and damage identification, and also facilitating other autonomous repair equipment.
Brief Description of the Drawings

Fig. 1 is a flow chart of the vision-and-IMU-fusion-based method for three-dimensional reconstruction of underwater damage to marine equipment according to the present invention;

Fig. 2 is a schematic structural diagram of the underwater-damage three-dimensional reconstruction system used by the present invention;

Fig. 3 is a communication diagram of the vision-and-IMU-fusion-based method according to the present invention;

Fig. 4 is a diagram of the coordinate-system transformations in the vision-and-IMU-fusion-based method according to the present invention.
Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the drawings and specific embodiments.

Embodiment

This embodiment provides a vision-and-IMU-fusion-based method for three-dimensional reconstruction of underwater damage to marine equipment; the overall flow is shown in Fig. 1, and the method is implemented by an underwater-damage three-dimensional reconstruction system.
As shown in Fig. 2, the underwater-damage three-dimensional reconstruction system comprises an underwater mobile platform and a computing host. The underwater mobile platform comprises an underwater mobile platform body 1 and, mounted on the body, a binocular camera 4, a platform IMU 6, a laser sensor 5, a laser drive system 3, a communication system, and a drainage system 2.

Each axis of the laser drive system 3 carries a drive IMU. The communication system is used for communication between the underwater mobile platform and the computing host; specifically, it is fixed at the middle of the platform body, collects the binocular image data and IMU data and sends them to the computing host, and receives the host's control commands to drive the underwater mobile platform.
The method for three-dimensional reconstruction of underwater damage to marine equipment comprises the following steps:

S1. Fix the binocular camera and the platform IMU to the underwater mobile platform; in the underwater environment, calibrate the intrinsic parameters of the left and right cameras of the binocular camera, and calibrate the extrinsic parameters between the left and right cameras of the binocular camera and the platform IMU.

S1 comprises the following steps:

S11. The binocular camera is fixed at the front end of the underwater mobile platform body, with its viewing direction tilted downward by 10° to 30°; the platform IMU is fixed at the middle of the platform body, approximately at the platform's center of mass;

S12. Place the calibration board and the underwater mobile platform underwater at the same time, so that the board appears simultaneously in the fields of view of the left and right cameras. While keeping the board fully within the binocular field of view, rotate it in as many directions as possible to ensure that calibration against the three axes of the drive IMU can be completed. The recording in this step need not be long; the binocular camera should run at 15 frames per second or more, and the drive IMU at 100 samples per second or more;

Move the underwater mobile platform so that the calibration board appears at various positions across the left and right camera fields of view; record multiple groups of binocular image data; the communication system transmits the image groups to the computing host, which performs the calibration computations: the intrinsic calibration of the left and right cameras, and the extrinsic calibration between the left and right cameras of the binocular camera and the platform IMU.
In S12, the intrinsic calibration of the left and right cameras of the binocular camera refers to:

$$K_l=\begin{bmatrix} f_{xl} & 0 & u_{0l} \\ 0 & f_{yl} & v_{0l} \\ 0 & 0 & 1 \end{bmatrix},\qquad K_r=\begin{bmatrix} f_{xr} & 0 & u_{0r} \\ 0 & f_{yr} & v_{0r} \\ 0 & 0 & 1 \end{bmatrix}$$

where l denotes the left camera and r the right camera; $K_l$ and $K_r$ are the intrinsic matrices of the left and right cameras; $f_{xl}$, $f_{yl}$, $f_{xr}$, $f_{yr}$ are the focal lengths of the left and right cameras along the x-axis and y-axis, expressed in pixels; $(u_{0l},v_{0l})$ and $(u_{0r},v_{0r})$ are the actual pixel coordinates of the principal points of the left and right image-plane coordinate systems.
The extrinsic calibration between the left and right cameras of the binocular camera and the platform IMU refers to:

Taking the platform-IMU coordinate system as the world coordinate system, the conversion from the left and right camera image points to the platform-IMU coordinate system is

$$P^i = R_{ri}\left(s_l K_l^{-1}\tilde p_l\right)+T_{ri} = R_{ri}\left(R_{lr}\,s_r K_r^{-1}\tilde p_r+T_{lr}\right)+T_{ri}$$

where $\tilde p_l=(u_l,v_l,1)^T$ and $\tilde p_r=(u_r,v_r,1)^T$ are the homogeneous forms of the two-dimensional coordinates in the left and right camera coordinate systems, and $s_l$, $s_r$ the corresponding depth scale factors; $P^i$ is the three-dimensional coordinate in the platform-IMU coordinate system; $R_{lr}$ and $R_{ri}$ are the 3×3 rotation matrices from the right camera to the left camera and from the left camera to the platform-IMU coordinate system, respectively; $T_{lr}$ and $T_{ri}$ are the corresponding 3×1 translation vectors.
S2. Fix the laser sensor at the end of the drive system; calibrate the extrinsic matrices between the drive-IMU coordinate system on each axis of the drive system and the laser-sensor coordinate system.

Specifically, as shown in Fig. 4, from the positional relationships among the binocular camera, the drainage system, and the laser drive system, the transformation from the platform center-of-mass coordinate system to the drainage-system coordinate system and the transformation from the drainage-system coordinate system to the laser-drive-system coordinate system are obtained.

The laser spot of the laser sensor is driven across a calibration board with known parameters; the communication system reads the laser sensor and the drive IMUs and forwards the data to the computing host, which completes the calibration computation and obtains the transformation between the laser-drive-system coordinate system and the laser-sensor coordinate system.

The four coordinate systems of the laser sensor, the laser drive system, the drainage system, and the platform center of mass are thereby aligned. After this offline calibration, all coordinate-system transformations in Fig. 4 are known.
The method of aligning the four coordinate systems of the laser sensor, the laser drive system, the drainage system, and the platform center of mass is:

Calibrate the extrinsic matrix between any two of these coordinate systems, comprising a rotation matrix and a translation vector:

$$X_{AB}=\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}$$

where A and B denote the two coordinate systems, X is the 4×4 extrinsic matrix, R is the 3×3 rotation matrix, and T is the 3×1 translation vector.
S3. The computing host first collects the binocular image data and platform IMU data and reads the previously obtained intrinsic and extrinsic calibration results. It then fuses the platform IMU data with the binocular image data to obtain a positioning result in the left-camera coordinate system; at the same time, following the binocular detection principle, it detects damage information in the left-camera coordinate system and generates a three-dimensional point cloud for the current frame. Fusing the positioning results with the point-cloud information, the host filters and superimposes the point cloud of each frame to produce a continuous three-dimensional reconstruction, and verifies the binocularly detected damage position against the reconstructed point cloud. Then, based on the positioning result and the damage-region position in the left-camera coordinate system, the host plans a global path for the underwater mobile platform, converts it into the platform center-of-mass coordinate system, and issues control signals to the platform's communication system over the communication bus. While the platform moves, local obstacle avoidance is performed from the three-dimensional information saved by the real-time reconstruction until the platform reaches the vicinity of the damage region.
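The filter-and-superimpose step can be sketched as follows. The poses and clouds are toy values, and the voxel filter is one simple choice of point-cloud filter (this invention does not specify a particular one).

```python
import numpy as np

def accumulate_clouds(frames):
    """Superimpose per-frame clouds: each frame is (R, T, cloud), where the
    fused pose (R, T) maps camera coordinates into the world frame and
    cloud is an (N, 3) array of points in camera coordinates."""
    return np.vstack([cloud @ R.T + T for R, T, cloud in frames])

def voxel_filter(points, voxel=0.05):
    """Keep one point per occupied voxel: a simple downsampling filter."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

# Two frames observing the same scene from slightly different poses
frames = [
    (np.eye(3), np.zeros(3),              np.array([[0.0, 0.0, 1.0]])),
    (np.eye(3), np.array([0.2, 0.0, 0.0]), np.array([[-0.2, 0.0, 1.0],
                                                     [0.0, 0.0, 1.5]])),
]
merged = accumulate_clouds(frames)   # 3 points; the first two coincide
filtered = voxel_filter(merged)      # duplicates collapse to one point
```

The merged, filtered cloud is what the path planner queries for local obstacle avoidance while the platform closes in on the damage region.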
The position and attitude observation in the platform-IMU coordinate system refers to:

The velocity V, translation vector T, and rotation matrix R obtained by integrating the platform-IMU data from time k to time k+1 are expressed as:

$$V_{k+1}=V_k+a\,\Delta t$$

$$T_{k+1}=T_k+V_k\,\Delta t+\tfrac{1}{2}a\,\Delta t^2$$

$$R_{k+1}=R_k\otimes\exp(\omega\,\Delta t)$$

where $V_k$, $V_{k+1}$ are the velocities at times k and k+1; a is the acceleration; Δt is the time interval; $T_k$, $T_{k+1}$ are the translation vectors at times k and k+1; $R_k$, $R_{k+1}$ are the rotation matrices at times k and k+1; ω is the angular velocity; and ⊗ denotes the Kronecker product.
The pose observation in the binocular-camera coordinate system refers to:

Extract feature points from the binocular image data and construct a circular region centered on each feature point:

$$C=\left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$$

$$\theta=\arctan\left(m_{01}/m_{10}\right)$$

where C is the centroid of the circular region, θ is the orientation of the feature point, and $m_{pq}$ is the moment of the circular region, defined as:

$$m_{pq}=\sum_{x,y\in R}x^p y^q I(x,y)$$

where R is the radius of the circular region; x and y are the x-axis and y-axis coordinates; and I(x, y) is the grayscale function.

By extracting and matching feature points across successive frames of binocular image data, a PnP problem is formulated from the matched pixels, yielding the rotation matrix R and translation vector T of the binocular camera.
The three-dimensional point-cloud generation in the binocular-camera coordinate system refers to:

Perform the above feature-point extraction and matching on the left and right images of the same binocular frame, and compute the disparity with the accumulated-squared-grayscale-error algorithm:

$$E(x,y,d)=\sum_{i=-m}^{m}\sum_{j=-n}^{n}\left[I_1(x+i,\,y+j)-I_2(x+i-d,\,y+j)\right]^2$$

where x, y, d are the x-axis coordinate, y-axis coordinate, and disparity, respectively; i and j are the offsets along the x-axis and y-axis; m and n are the maximum offsets along the x-axis and y-axis; and $I_1(x,y)$, $I_2(x,y)$ are the grayscale functions of the two images.

Three-dimensional point-cloud data are then generated from the disparity and the original coordinates; the three-dimensional coordinates are expressed as:

$$X=\frac{x_l\,D}{f_x},\qquad Y=\frac{y_l\,D}{f_y},\qquad Z=D$$

where $x_l$, $x_r$ are the abscissa values in the left and right cameras; $y_l$, $y_r$ the ordinate values; $f_x$, $f_y$ the corresponding focal lengths from the camera intrinsics; X, Y, Z the three-dimensional coordinates; and D the depth value, which can be computed as:

$$D=Bf/d$$

where B is the baseline length, f is the camera focal length, and $d=x_l-x_r$ is the disparity between the left and right images.
S4. Plan the drainage-system trajectory and control the drainage system to drain; using the extrinsic matrices obtained in S2 and the drive-IMU data, determine the position of the laser emitted by the laser sensor, thereby achieving a fine three-dimensional reconstruction of the damage region from the laser-sensor data.

Specifically, according to the position of the damage region, a motion trajectory is planned for the underwater mobile platform so that the drainage system covers the damage region and expels the water to form a dry space. The laser drive system steers the laser sensor through a three-dimensional scan of the dry space. The laser-sensor data and the drive-IMU data are passed to the computing host through the communication system. Using the extrinsic matrices obtained in S2, the computing host derives the attitude of the laser drive system from the drive-IMU data, transforms it into the position of the laser sensor, and combines the sensor position with the point-cloud data to obtain a fine three-dimensional reconstruction from the laser-sensor data; the damage position is detected from the reconstruction result.
The three-dimensional reconstruction from laser-sensor data refers to:

The laser sensor emits laser pulses at a fixed frequency; the receiver measures the returned reflected light to determine the distance, while the reflection intensity gives a rough indication of the target material. The ranging formula is:

$$L=tc/2$$

where L is the target distance, t is the round-trip time, and c is the speed of light.

The drive IMUs are used to predict the pose of the laser sensor, and the rotation matrix R and translation vector T then yield the three-dimensional reconstruction from the laser-sensor data. At this stage, fine damage-position detection is achieved with an error within 0.2 mm, providing high-precision positioning results for other autonomous repair equipment.
The above embodiment is a preferred implementation of the present invention, but the implementation of the present invention is not limited to it; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (10)

  1. 一种基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:通过水下损伤三维重建***来实现;所述水下损伤三维重建***包括水下移动平台和计算主机;水下移动平台包括水下移动平台本体,以及搭载在水下移动平台本体上的双目相机、平台IMU、激光传感器、激光驱动***、通信***和排水***;激光驱动***的各轴上分别设置有驱动IMU;通信***用于水下移动平台与计算主机之间的通信;A three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU, characterized by: realizing it through an underwater three-dimensional damage reconstruction system; the three-dimensional underwater damage reconstruction system includes an underwater mobile platform and a computing host; underwater The mobile platform includes the underwater mobile platform body, as well as the binocular camera, platform IMU, laser sensor, laser drive system, communication system and drainage system mounted on the underwater mobile platform body; the laser drive system is equipped with drivers on each axis. IMU; communication system used for communication between underwater mobile platform and computing host;
    海洋装备水下损伤三维重建方法包括如下步骤:The three-dimensional reconstruction method of underwater damage to marine equipment includes the following steps:
    S1、将双目相机和平台IMU分别固定到水下移动平台上;在水下环境下进行双目相机的左右相机内参标定,以及双目相机的左右相机与平台IMU外参标定;S1. Fix the binocular camera and platform IMU to the underwater mobile platform respectively; perform internal parameter calibration of the left and right cameras of the binocular camera, and calibrate the external parameters of the left and right cameras of the binocular camera and the platform IMU in the underwater environment;
    S2、将激光传感器固定在驱动***末端;标定驱动***各轴上的驱动IMU坐标系和激光传感器坐标系的外参矩阵;S2. Fix the laser sensor at the end of the drive system; calibrate the external parameter matrix of the drive IMU coordinate system and the laser sensor coordinate system on each axis of the drive system;
    S3、采集双目相机图像数据和平台IMU数据;将平台IMU数据中的加速度和角速度数据进行积分,得到平台IMU坐标系下的位置和姿态观测;将双目相机图像数据对应的一帧,分别进行双目相机坐标系下的位姿观测、损伤检测以及三维点云生成;融合位姿观测结果并叠加连续的三维点云,得到水下三维重建点云,并用以验证损伤检测结果;根据位姿观测和损伤检测结果,为水下移动平台规划最优路径,并基于三维重建点云进行局部避障,控制水下移动平台运动到损伤区域附近;S3. Collect binocular camera image data and platform IMU data; integrate the acceleration and angular velocity in the platform IMU data to obtain position and attitude observations in the platform IMU coordinate system; for each frame of binocular camera image data, perform pose observation in the binocular camera coordinate system, damage detection, and three-dimensional point cloud generation; fuse the pose observation results and superimpose the consecutive three-dimensional point clouds to obtain an underwater three-dimensional reconstructed point cloud, which is used to verify the damage detection results; based on the pose observation and damage detection results, plan the optimal path for the underwater mobile platform, perform local obstacle avoidance based on the three-dimensional reconstructed point cloud, and control the underwater mobile platform to move near the damage area;
    S4、规划排水***轨迹,排水***对损伤区域进行排水;根据S2得到的外参矩阵,利用驱动IMU数据确定激光传感器激光的位置,从而实现损伤区域精细的激光传感器数据三维重建。S4. Plan the drainage system trajectory so that the drainage system drains the damaged area; based on the external parameter matrix obtained in S2, use the driving IMU data to determine the position of the laser emitted by the laser sensor, thereby achieving a fine three-dimensional reconstruction of the laser sensor data over the damage area.
  2. 根据权利要求1所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S1包括如下步骤:The three-dimensional reconstruction method of underwater damage of marine equipment based on the fusion of vision and IMU according to claim 1, characterized in that: said S1 includes the following steps:
    S11、双目相机固定于水下移动平台本体的前端,且视觉方向为斜向下10°~30°之间;平台IMU固定于水下移动平台本体中部,与水下移动平台质心位置相当;S11. The binocular camera is fixed at the front end of the underwater mobile platform body, with its viewing direction tilted downward by 10° to 30°; the platform IMU is fixed at the middle of the underwater mobile platform body, approximately at the position of the platform's centre of mass;
    S12、将标定板与水下移动平台同时置于水下;标定板同时出现在双目相机的左右相机视野中;S12. Place the calibration board and the underwater mobile platform underwater at the same time, with the calibration board appearing simultaneously in the fields of view of the left and right cameras of the binocular camera;
    移动水下移动平台,使标定板分布在双目相机的左右相机视野的各个位置;记录下多组双目相机图像数据;通信***将多组双目相机图像数据传输到计算主机上;计算主机进行相关标定计算,双目相机的左右相机内参标定,以及双目相机的左右相机与平台IMU外参标定。Move the underwater mobile platform so that the calibration board is distributed over various positions in the fields of view of the left and right cameras of the binocular camera; record multiple sets of binocular camera image data; the communication system transmits the multiple sets of binocular camera image data to the computing host; the computing host performs the relevant calibration calculations: the internal parameter calibration of the left and right cameras of the binocular camera, and the external parameter calibration between the left and right cameras of the binocular camera and the platform IMU.
  3. 根据权利要求2所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S12,The three-dimensional reconstruction method of underwater damage of marine equipment based on the fusion of vision and IMU according to claim 2, characterized in that: said S12,
    双目相机的左右相机的内参标定是指:
    The internal parameter calibration of the left and right cameras of the binocular camera refers to:
    Kl = [fxl 0 u0l; 0 fyl v0l; 0 0 1],  Kr = [fxr 0 u0r; 0 fyr v0r; 0 0 1]
    其中,l代表左相机;r代表右相机;Kl,Kr分别代表左右相机内参矩阵;fxl,fyl,fxr,fyr分别代表使用像素表示左右相机在x轴和y轴方向上的焦距的长度;(u0l,v0l),(u0r,v0r)分别代表左右相机像平面坐标系的主点的实际像素坐标;Here, l denotes the left camera and r the right camera; Kl and Kr are the intrinsic matrices of the left and right cameras; fxl, fyl, fxr and fyr are the focal lengths of the left and right cameras along the x-axis and y-axis, expressed in pixels; (u0l, v0l) and (u0r, v0r) are the actual pixel coordinates of the principal points of the left and right camera image plane coordinate systems;
    双目相机的左右相机与平台IMU外参标定是指:The external parameter calibration of the left and right cameras of the binocular camera and the platform IMU refers to:
    设定平台IMU坐标系为世界坐标系,则双目相机的左右相机图像点到平台IMU坐标系下的转换关系为:
    Taking the platform IMU coordinate system as the world coordinate system, the conversion relationship from the left and right camera image points of the binocular camera to the platform IMU coordinate system is:
    Pl = Rlr·Pr + Tlr
    [xi yi zi]ᵀ = Rri·Pl + Tri
    其中,(ul,vl),(ur,vr)分别为左右相机坐标系下的二维坐标;(xi,yi,zi)为平台IMU坐标系下的三维坐标;Rlr,Rri分别为右相机到左相机、左相机到平台IMU坐标系的3*3的旋转矩阵;Tlr,Tri分别为右相机到左相机、左相机到平台IMU坐标系的1*3的平移向量。Here, (ul, vl) and (ur, vr) are the two-dimensional coordinates in the left and right camera coordinate systems, with Pl and Pr the corresponding three-dimensional points in the left and right camera frames; (xi, yi, zi) are the three-dimensional coordinates in the platform IMU coordinate system; Rlr and Rri are the 3*3 rotation matrices from the right camera to the left camera and from the left camera to the platform IMU coordinate system, respectively; Tlr and Tri are the corresponding 1*3 translation vectors from the right camera to the left camera and from the left camera to the platform IMU coordinate system.
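As an illustrative sketch (not part of the patent), the intrinsic matrix and the chained right-camera → left-camera → platform-IMU transform described in this claim can be written in a few lines of NumPy; all numeric calibration values below are placeholders, not values from the patent.

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Pinhole intrinsic matrix K built from focal lengths and principal point."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def camera_point_to_imu(P_r, R_lr, T_lr, R_ri, T_ri):
    """Chain the rigid transforms: right camera -> left camera -> platform IMU."""
    P_l = R_lr @ P_r + T_lr      # point expressed in the left camera frame
    return R_ri @ P_l + T_ri     # point expressed in the platform IMU frame

# Placeholder calibration values (assumed for the example only)
K_l = intrinsic_matrix(800.0, 800.0, 320.0, 240.0)
R_lr = np.eye(3); T_lr = np.array([0.12, 0.0, 0.0])   # assumed 12 cm baseline
R_ri = np.eye(3); T_ri = np.array([0.0, 0.0, 0.30])
P_imu = camera_point_to_imu(np.array([0.0, 0.0, 2.0]), R_lr, T_lr, R_ri, T_ri)
```

With identity rotations, a point 2 m in front of the right camera lands at the baseline plus mounting offsets in the IMU frame, which is a quick sanity check on the transform chain.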
  4. 根据权利要求1所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S2,对齐驱动***各轴上的驱动IMU坐标系和激光传感器坐标系,是指:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 1, characterized in that in S2, aligning the driving IMU coordinate system and the laser sensor coordinate system on each axis of the drive system refers to:
    根据双目相机、排水***、激光驱动***的位置关系,得到水下移动平台质心坐标系到排水***坐标系的转换关系、排水***坐标系与激光驱动***坐标系的转换关系;According to the positional relationship between the binocular camera, drainage system, and laser drive system, the conversion relationship between the underwater mobile platform's center of mass coordinate system and the drainage system coordinate system, and the conversion relationship between the drainage system coordinate system and the laser drive system coordinate system are obtained;
    通过控制激光传感器激光点在参数已知的标定板上运动;通信***连接激光传感器和驱动IMU获取数据并发送到计算主机中,计算主机完成标定计算,得到激光驱动***坐标系与激光传感器坐标系的转换关系;The laser point of the laser sensor is controlled to move on a calibration board with known parameters; the communication system connects the laser sensor and the driving IMU to acquire the data and send it to the computing host; the computing host completes the calibration calculation to obtain the conversion relationship between the laser drive system coordinate system and the laser sensor coordinate system;
    将激光传感器、激光驱动***、排水***、水下移动平台质心四个坐标***对齐。Align the four coordinate systems of the laser sensor, laser drive system, drainage system, and underwater mobile platform center of mass.
  5. 根据权利要求4所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:将激光传感器、激光驱动***、排水***、水下移动平台质心四个坐标***对齐的方法是:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 4, characterized in that the method of aligning the four coordinate systems of the laser sensor, laser drive system, drainage system, and underwater mobile platform centre of mass is:
    标定激光传感器、激光驱动***、排水***、水下移动平台质心中任意两个坐标***的外参矩阵,包括旋转矩阵和平移向量:
    Calibrate the external parameter matrix between any two of the four coordinate systems (laser sensor, laser drive system, drainage system, underwater mobile platform centre of mass), including the rotation matrix and translation vector:
    A = X·B,  X = [R T; 0 1]
    其中,A和B分别代表两个坐标***,X代表4*4的外参矩阵,R代表3*3的旋转矩阵,T代表1*3的平移向量。Here, A and B represent the two coordinate systems, X is the 4*4 external parameter matrix, R is the 3*3 rotation matrix, and T is the 1*3 translation vector.
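A minimal sketch of how such 4*4 extrinsic matrices compose (illustrative only; the rotation and translation values are assumptions, not calibrated values from the patent): aligning several frames reduces to chaining pairwise extrinsics, e.g. sensor → drive system → drainage system → platform centre of mass.

```python
import numpy as np

def extrinsic(R, T):
    """Build the 4x4 homogeneous extrinsic matrix X from rotation R and translation T."""
    X = np.eye(4)
    X[:3, :3] = R
    X[:3, 3] = np.asarray(T)
    return X

# Assumed example poses: frame A -> B is a 90-degree yaw plus a 10 cm offset,
# frame B -> C is a pure 20 cm offset. Chaining gives A -> C directly.
R_ab = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
X_ab = extrinsic(R_ab, [0.1, 0.0, 0.0])
X_bc = extrinsic(np.eye(3), [0.0, 0.2, 0.0])
X_ac = X_ab @ X_bc   # composed extrinsic from frame A to frame C
```

Because the matrices are homogeneous, composing them with a single matrix product keeps rotation and translation consistent, which is the practical payoff of writing X as [R T; 0 1].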
  6. 根据权利要求1所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S3中,平台IMU坐标系下的姿态观测是指:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 1, characterized in that: in said S3, the attitude observation in the platform IMU coordinate system refers to:
    在k时刻到k+1时刻下平台IMU数据积分得到的速度V、平移向量T和旋转矩阵R分别表示为:
    The velocity V, translation vector T and rotation matrix R obtained by integrating the platform IMU data from time k to time k+1 are expressed as:
    Vk+1 = Vk + a·Δt
    Tk+1 = Tk + Vk·Δt + (1/2)·a·Δt²
    Rk+1 = Rk ⊗ R(ω·Δt)
    其中,Vk,Vk+1分别为k时刻、k+1时刻下的速度;a为加速度;Δt为时间间隔;Tk,Tk+1分别为k时刻、k+1时刻下的平移向量;Rk,Rk+1分别为k时刻、k+1时刻下的旋转矩阵;ω为角速度;⊗为克罗内克积。Here, Vk and Vk+1 are the velocities at times k and k+1; a is the acceleration; Δt is the time interval; Tk and Tk+1 are the translation vectors at times k and k+1; Rk and Rk+1 are the rotation matrices at times k and k+1; ω is the angular velocity; ⊗ denotes the Kronecker product; R(ω·Δt) is the incremental rotation accumulated over Δt.
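A sketch of one such dead-reckoning step (an assumption-laden simplification: a and ω are treated as constant, world-frame quantities over Δt, and the incremental rotation is built with Rodrigues' formula rather than the patent's ⊗ notation):

```python
import numpy as np

def imu_step(V, T, R, a, omega, dt):
    """One integration step from time k to k+1:
    Vk+1 = Vk + a*dt,  Tk+1 = Tk + Vk*dt + 0.5*a*dt^2,
    Rk+1 = Rk composed with the incremental rotation over dt."""
    V1 = V + a * dt
    T1 = T + V * dt + 0.5 * a * dt ** 2
    theta = np.linalg.norm(omega) * dt          # rotation angle over the interval
    if theta > 1e-12:
        k = omega / np.linalg.norm(omega)       # rotation axis
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        # Rodrigues' formula for the incremental rotation matrix
        dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    else:
        dR = np.eye(3)
    return V1, T1, R @ dR

# Start at rest, accelerate 1 m/s^2 along x while yawing 90 deg/s, for 1 s
V1, T1, R1 = imu_step(np.zeros(3), np.zeros(3), np.eye(3),
                      a=np.array([1.0, 0.0, 0.0]),
                      omega=np.array([0.0, 0.0, np.pi / 2]), dt=1.0)
```

A real implementation would also rotate body-frame accelerations into the world frame and subtract gravity before integrating; this sketch keeps only the kinematic skeleton of the three update equations.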
  7. 根据权利要求6所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S3中,双目相机坐标系下的姿态观测是指:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 6, characterized in that: in the S3, the attitude observation in the binocular camera coordinate system refers to:
    对双目相机图像数据进行特征点提取,以特征点为圆心构建圆形区域:
    Feature points are extracted from the binocular camera image data, and a circular region is constructed with each feature point as its centre:
    C = (m10/m00, m01/m00)
    θ = arctan(m01/m10)
    其中,C代表圆形区域的质心,θ代表特征点的方向向量,mpq代表圆形区域的矩,定义为:
    Here, C is the centroid of the circular region, θ is the orientation of the feature point, and mpq is the moment of the circular region, defined as:
    mpq = Σ(x,y) x^p·y^q·I(x,y), summed over the circular region of radius R
    其中,R代表圆形区域的半径;x,y代表x轴坐标,y轴坐标;I(x,y)代表灰度方程;Here, R is the radius of the circular region; x and y are the x-axis and y-axis coordinates; I(x, y) is the grey-level function;
    通过对连续多帧双目相机图像数据的特征点提取和匹配,利用匹配后的像素点建立PnP求解问题,得到双目相机的旋转矩阵R和平移向量T。By extracting and matching feature points over consecutive frames of binocular camera image data, a PnP problem is formulated from the matched pixel points, and its solution yields the rotation matrix R and translation vector T of the binocular camera.
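The intensity-centroid orientation above can be sketched directly from the moment definition (illustrative only: a square patch is used for brevity where the claim specifies a circular region, and arctan2 is used to resolve the quadrant that plain arctan leaves ambiguous):

```python
import numpy as np

def patch_orientation(I):
    """Intensity-centroid orientation of a patch, from the moments
    m_pq = sum x^p * y^q * I(x, y), with coordinates centred on the keypoint."""
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0          # centre the coordinates on the keypoint
    ys -= (h - 1) / 2.0
    m00 = I.sum()
    m10 = (xs * I).sum()
    m01 = (ys * I).sum()
    C = (m10 / m00, m01 / m00)   # centroid of the region
    theta = np.arctan2(m01, m10) # orientation of the feature point
    return C, theta

# A patch that is brighter on its +x side should have orientation theta = 0
I = np.zeros((5, 5))
I[:, 3:] = 1.0
C, theta = patch_orientation(I)
```

The centroid offset from the keypoint defines a repeatable direction vector, which is what makes this descriptor rotation-aware when it is later used for matching and PnP.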
  8. 根据权利要求7所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S3中,双目相机坐标系下的三维点云生成是指:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 7, characterized in that: in said S3, the three-dimensional point cloud generation in the binocular camera coordinate system refers to:
    对同一帧双目相机的左右相机图像进行上述特征点提取和匹配,基于灰度误差平方累计算法进行视差计算:
    The above feature point extraction and matching are performed on the left and right camera images of the same binocular frame, and the disparity is computed with the sum-of-squared grey-level differences algorithm:
    C(x,y,d) = Σi=-m..m Σj=-n..n [I1(x+i, y+j) - I2(x+i-d, y+j)]²
    其中,x,y,d分别为x轴坐标,y轴坐标,视差;i,j分别为x轴,y轴方向的变化值;m,n分别为x轴,y轴方向的最大值;I1(x,y),I2(x,y)代表灰度方程;Here, x, y and d are the x-axis coordinate, y-axis coordinate and disparity; i and j are the offsets in the x-axis and y-axis directions; m and n are the maximum offsets in the x-axis and y-axis directions; I1(x, y) and I2(x, y) are the grey-level functions of the two images;
    通过视差和原始坐标生成三维点云数据,三维坐标表示为:
    Three-dimensional point cloud data are generated from the disparity and the original coordinates; the three-dimensional coordinates are expressed as:
    X = xl·D/fx,  Y = (yl+yr)·D/(2·fy),  Z = D
    其中,xl、xr分别为左右相机对应的横坐标值;yl、yr分别为左右相机纵坐标值;fx,fy分别为左右相机内参中对应的焦距;X,Y,Z分别为三维坐标;D为深度值,可由下式计算:
    Here, xl and xr are the abscissa values in the left and right cameras; yl and yr are the ordinate values in the left and right cameras; fx and fy are the corresponding focal lengths from the camera intrinsics; X, Y and Z are the three-dimensional coordinates; D is the depth value, computed as:
    D = Bf/d
    其中,B为基线长度,f为相机焦距,d为左右图像视差。Here, B is the baseline length, f is the camera focal length, and d is the disparity between the left and right images.
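A minimal sketch of the two steps above, SSD disparity search followed by D = B·f/d (illustrative assumptions: a synthetic image pair shifted by a known 3 px, a 7x7 window, and placeholder baseline/focal values):

```python
import numpy as np

def ssd_disparity(I1, I2, x, y, win, d_max):
    """Pick the disparity minimising the sum of squared grey-level
    differences over a (2*win+1)^2 window around (x, y)."""
    best_d, best_cost = 0, np.inf
    for d in range(d_max + 1):
        if x - win - d < 0:              # window would leave the right image
            break
        w1 = I1[y - win:y + win + 1, x - win:x + win + 1]
        w2 = I2[y - win:y + win + 1, x - win - d:x + win + 1 - d]
        cost = np.sum((w1 - w2) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(B, f, d):
    """D = B * f / d."""
    return B * f / d

# Synthetic stereo pair: a feature at column x in I1 appears at x - 3 in I2
rng = np.random.default_rng(0)
I1 = rng.random((21, 21))
I2 = np.roll(I1, -3, axis=1)
d = ssd_disparity(I1, I2, x=10, y=10, win=3, d_max=6)
```

With an assumed 0.12 m baseline and 800 px focal length, a 3 px disparity maps to a 32 m depth, which illustrates why small disparities dominate the depth error budget.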
  9. 根据权利要求1所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S4是指:根据损伤区域位置,为水下移动平台规划运动轨迹,使排水***覆盖损伤区域并排出水分形成干燥空间;利用激光驱动***控制激光传感器在干燥空间中进行三维扫描;激光传感器数据与驱动IMU数据通过通信***传递给计算主机;计算主机根据S2得到的外参矩阵,利用驱动IMU数据获取激光驱动***姿态,变换得出激光传感器的位置,通过激光传感器的位置和点云数据,得到精细的激光传感器数据三维重建;基于三维重建结果检测损伤位置。The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 1, characterized in that S4 refers to: planning a motion trajectory for the underwater mobile platform according to the location of the damage area, so that the drainage system covers the damage area and drains the water to form a dry space; using the laser drive system to control the laser sensor to perform a three-dimensional scan in the dry space; transmitting the laser sensor data and the driving IMU data to the computing host through the communication system; the computing host, using the external parameter matrix obtained in S2, obtains the attitude of the laser drive system from the driving IMU data and derives the position of the laser sensor by transformation; a fine three-dimensional reconstruction of the laser sensor data is obtained from the laser sensor position and the point cloud data, and the damage location is detected based on the three-dimensional reconstruction result.
  10. 根据权利要求9所述的基于视觉和IMU融合的海洋装备水下损伤三维重建方法,其特征在于:所述S4中,激光传感器数据三维重建是指:The three-dimensional reconstruction method of underwater damage to marine equipment based on the fusion of vision and IMU according to claim 9, characterized in that: in said S4, the three-dimensional reconstruction of laser sensor data refers to:
    激光传感器以固定频率发射激光脉冲,通过接收器接收返回的反射光判断距离,同时依据反射强度粗略区分目标材质,测距公式为:
    L=tc/2
    The laser sensor emits laser pulses at a fixed frequency, receives the returned reflected light through the receiver to determine the distance, and roughly distinguishes the target material based on the reflection intensity. The ranging formula is:
    L=tc/2
    其中,L为目标距离,t为返回时间,c为光速;Among them, L is the target distance, t is the return time, and c is the speed of light;
    利用驱动IMU对激光传感器进行位姿预测,再通过旋转矩阵R和平移向量T得到激光传感器三维重建结果。 The driving IMU is used to predict the pose of the laser sensor, and then the three-dimensional reconstruction result of the laser sensor is obtained through the rotation matrix R and the translation vector T.
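The time-of-flight ranging and pose-based placement of each return can be sketched as follows (illustrative only: the beam direction, rotation R and translation T are placeholder values standing in for the drive-IMU-derived pose):

```python
import numpy as np

def tof_range(t, c=2.998e8):
    """L = t*c/2: convert the round-trip time of flight t to a one-way distance."""
    return t * c / 2.0

def laser_point_in_drive_frame(L, direction, R, T):
    """Place a single range reading in the drive-system frame using the
    laser sensor pose (rotation R, translation T) predicted from the IMU."""
    return R @ (L * np.asarray(direction)) + np.asarray(T)

L = tof_range(1.0e-8)   # a 10 ns round trip corresponds to roughly 1.5 m
p = laser_point_in_drive_frame(L, [0.0, 0.0, 1.0], np.eye(3), np.zeros(3))
```

Sweeping the beam while repeating this transform for every return is what turns individual range readings into the registered point cloud used for the fine reconstruction.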
PCT/CN2023/115908 2022-08-31 2023-08-30 Marine equipment underwater damage three-dimensional reconstruction method based on combination of vision and imus WO2024046390A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211051347.X 2022-08-31
CN202211051347.XA CN115471570A (en) 2022-08-31 2022-08-31 Three-dimensional reconstruction method for underwater damage of marine equipment based on fusion of vision and IMU (inertial measurement unit)

Publications (1)

Publication Number Publication Date
WO2024046390A1 true WO2024046390A1 (en) 2024-03-07

Family

ID=84368940

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115908 WO2024046390A1 (en) 2022-08-31 2023-08-30 Marine equipment underwater damage three-dimensional reconstruction method based on combination of vision and imus

Country Status (2)

Country Link
CN (1) CN115471570A (en)
WO (1) WO2024046390A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471570A (en) * 2022-08-31 2022-12-13 华南理工大学 Three-dimensional reconstruction method for underwater damage of marine equipment based on fusion of vision and IMU (inertial measurement unit)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110763152A (en) * 2019-10-09 2020-02-07 哈尔滨工程大学 Underwater active rotation structure light three-dimensional vision measuring device and measuring method
CN111951305A (en) * 2020-08-20 2020-11-17 重庆邮电大学 Target detection and motion state estimation method based on vision and laser radar
US20220207765A1 (en) * 2020-12-31 2022-06-30 Zg Technology Co., Ltd. Positioning method and system combining mark point positioning and intelligent reverse positioning
CN115471570A (en) * 2022-08-31 2022-12-13 华南理工大学 Three-dimensional reconstruction method for underwater damage of marine equipment based on fusion of vision and IMU (inertial measurement unit)


Also Published As

Publication number Publication date
CN115471570A (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN109035200B (en) Bolt positioning and pose detection method based on single-eye and double-eye vision cooperation
CN109283538B (en) Marine target size detection method based on vision and laser sensor data fusion
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
CN108177143B (en) Robot positioning and grabbing method and system based on laser vision guidance
WO2024046390A1 (en) Marine equipment underwater damage three-dimensional reconstruction method based on combination of vision and imus
CN103047983B (en) The face terrain match air navigation aid of underwater robot
US20150267433A1 (en) Pool Cleaner with Laser Range Finder System and Method
Fruh et al. Fast 3D model generation in urban environments
CN110580044A (en) unmanned ship full-automatic navigation heterogeneous system based on intelligent sensing
CN102042835A (en) Autonomous underwater vehicle combined navigation system
CN106842216B (en) A kind of workpiece pose online test method cooperateed with based on Kinect with three-dimensional laser
CN108269281B (en) Obstacle avoidance technical method based on binocular vision
AU2014247986A2 (en) Underwater platform with lidar and related methods
CN109931909B (en) Unmanned aerial vehicle-based marine fan tower column state inspection method and device
CN109747824A (en) Device and barrier-avoiding method for unmanned plane avoidance inside chimney
CN101813467A (en) Blade running elevation measurement device and method based on binocular stereovision technology
CN114488164B (en) Synchronous positioning and mapping method for underwater vehicle and underwater vehicle
US20230105991A1 (en) Method of imaging a wind turbine rotor blade
CN109410234A (en) A kind of control method and control system based on binocular vision avoidance
CN111502671A (en) Comprehensive guiding device and method for guiding and carrying binocular camera by shield laser target
CN116400361A (en) Target three-dimensional reconstruction system and method based on sonar detection
CN111640177A (en) Three-dimensional modeling method based on underwater sonar detection and unmanned submersible
CN109798877B (en) Bionic underwater robotic fish binocular stereo distance measurement method
Yin et al. Study on underwater simultaneous localization and mapping based on different sensors
CN117406234A (en) Target ranging and tracking method based on single-line laser radar and vision fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23859419

Country of ref document: EP

Kind code of ref document: A1