CN113313116B - Underwater artificial target accurate detection and positioning method based on vision - Google Patents


Info

Publication number
CN113313116B
CN113313116B (application CN202110682252.7A)
Authority
CN
China
Prior art keywords
target
underwater
detection
feature points
camera
Prior art date
Legal status
Active
Application number
CN202110682252.7A
Other languages
Chinese (zh)
Other versions
CN113313116A (en
Inventor
李乐
李艳丽
张文博
刘卫东
高立娥
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority: CN202110682252.7A
Publication of application CN113313116A
Application granted
Publication of grant CN113313116B
Legal status: Active

Classifications

    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI] (under G Physics; G06 Computing; G06V Image or video recognition or understanding; G06V 10/20 Image preprocessing)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (under G06F Electric digital data processing; G06F 18/21 Design or setup of recognition systems or techniques)
    • G06N 20/00: Machine learning
    • G06V 2201/07: Target detection (indexing scheme relating to image or video recognition or understanding)


Abstract

The invention relates to a vision-based method for accurately detecting and positioning an underwater artificial target. The method detects the underwater target, calculates its position and orientation, and designs an artificial underwater cooperative target for this purpose. First, a deep-learning-based detection algorithm performs coarse detection of the underwater target; a traditional shape- and color-based detection method then refines this into an accurate detection of the artificial underwater cooperative target. Finally, the position and attitude of the target relative to the camera are calculated from the detection result and the geometric information of the target feature points, achieving detection and positioning of the underwater target. By fusing traditional color/shape-based detection with deep-learning-based detection, the invention realizes accurate and fast detection of underwater targets. With the designed artificial underwater cooperative target, real-time positioning is achieved from the geometric information of the designed feature points, and the position and angle information of the target is calculated.

Description

Underwater artificial target accurate detection and positioning method based on vision
Technical Field
The invention belongs to the field of underwater target detection and positioning, and particularly relates to an accurate underwater artificial target detection and positioning method based on vision.
Background
As underwater operation tasks expand worldwide, acquiring underwater information has become an important precondition for underwater work. Underwater information acquisition equipment must be maintained regularly and its data recovered, and because the underwater environment is complex, recovering such equipment is a challenging task. Technological advances in ROVs (Remotely Operated Vehicles) have made underwater work tasks easier to perform, and detecting and positioning targets in different underwater environments is one of the key technologies for using an ROV to find and recover underwater information acquisition equipment. In CN202110051819.0, a deep learning algorithm is used to detect targets in underwater sonar images, but because sonar lacks an intuitive visual presentation of target information, target detection on sonar images carries a certain error. With the application of underwater visual cameras, extracting information from rich optical images provides an effective approach for underwater research.
Due to the complexity of the underwater environment and the attenuation and scattering of light as it propagates through water, captured underwater images often suffer from color distortion, low contrast, and blurred edges, making underwater target detection a challenging research field. Underwater target detection methods can be divided into two types: traditional methods and deep learning methods. Traditional underwater target detection includes image feature matching and recognition, general image segmentation, and detection and recognition based on color and shape. Traditional methods have been studied by scholars in many countries, mostly verified in underwater tests using artificial targets. Although these methods are fast, they remain unsatisfactory in dynamic environments and are not accurate enough. Compared with traditional methods, deep-learning-based detection algorithms are faster and more robust under partial occlusion of the target, and are therefore becoming the dominant approach to target detection. Current deep-learning-based algorithms can be divided into single-stage end-to-end algorithms and two-stage region proposal algorithms. However, these deep-learning-based methods yield only a rectangular bounding box of the target rather than its accurate boundary, and therefore cannot be used for precise position estimation and angle calculation.
Target positioning is another important aspect of using an ROV to recover underwater information acquisition equipment. In these operations, cooperative or artificial targets are mostly employed to improve positioning efficiency. Commonly used underwater artificial targets typically have a regular shape and a specific bright color, such as special underwater patterns, active laser modules, and 3D markers. Common positioning methods are geometry-based, curvature-based, and PnP-based.
Disclosure of Invention
The invention solves the following technical problems: the accuracy and speed of target detection and positioning are key factors in accurately recovering underwater information acquisition equipment and thereby improving underwater operation efficiency, and obtaining the target position and attitude information at the same time facilitates underwater operation. The invention designs an artificial underwater target for the underwater information acquisition equipment, accurately detects the underwater target, and calculates its position and orientation so that the equipment can be recovered.
The technical scheme of the invention is as follows: an underwater artificial target detection and positioning method based on vision comprises the following steps:
Step 1: defining the number of target feature points of the artificial underwater cooperative target as n (n is more than or equal to 4), wherein the pattern formed by n target feature points is required to be asymmetric; carrying out underwater data acquisition and data preprocessing on the target so as to carry out subsequent training;
step 2: the method for detecting the underwater target comprises a coarse detection part and a fine detection part, and specifically comprises the following substeps:
Step 2.1: firstly, training an acquired underwater target image, performing coarse detection on the underwater target through a target detection method based on deep learning, and detecting target feature points through outputting bounding boxes (u i,vi,wi,hi) of target feature points, wherein u i,vi respectively represents the left upper corner coordinates of the ith feature point bounding box, and the width and the height of the ith feature point bounding box of w i,hi.
Step 2.2: respectively expanding pixels to four sides of a surrounding frame of each characteristic point obtained by coarse detection, and then cutting; after gray level processing, performing self-adaptive binarization on the cut target image, and then performing morphological operation and circular edge detection on the binarization result; obtaining accurate pixel coordinates of the center of each target feature point in the image by carrying out region selection and roundness screening on the edge profile;
Step 3: obtaining pixel coordinates of each underwater target feature point in the image through the target detection in the step 2, and positioning the underwater target according to the geometric relationship among the feature points, wherein the method comprises the following substeps:
Step 3.1: after obtaining the two-dimensional pixel coordinates of each feature point, firstly ordering the pixel coordinates of the center of the obtained feature point according to the geometric relation information among the feature points, so that each feature point can be in one-to-one correspondence with the coordinates of each feature point;
step 3.2: knowing the two-dimensional pixel coordinates P { P i(Ui,Vi) of the feature points and the three-dimensional coordinates (X Ti,YTi,ZTi) of the feature points in the target coordinate system; the coordinates of the feature point under the camera coordinate system are (X Ci,YCi,ZCi), and the position and the gesture of the target under the camera are respectively represented by an offset matrix T and a rotation matrix R, and then the position and the gesture of the target under the camera are represented by
Wherein R C is a camera internal reference;
since Z_Ci is unknown under a monocular camera, to obtain T and R, the pinhole imaging model shown in FIG. 5 is used to calculate the coordinates of the feature points in the camera coordinate system;
defining O as the camera origin, OA, OB, OC is used to calculate the camera coordinates of the feature points. A, B, C, D are spatial feature points, and a, B, C, D are feature image points. Then, according to the geometric relationship, there is a cosine equation as follows:
Order the Then there are:
Reams the The binary quadratic equation for x and y is obtained as follows:
AB, AC, BC can be calculated from the coordinates of the feature points in the target coordinate system, so v and w can be calculated; a, b, c are the image points detected by the target detection algorithm, so cos<a,b>, cos<a,c>, cos<b,c> can be calculated. Solving the binary quadratic equations gives OA, OB, OC, from which the camera coordinates of A, B, C are calculated. Because the equations have four sets of solutions, the reprojection error is calculated using point D, and the solution with the minimum error is taken as the real pose, giving the position and three attitude angles of the underwater target.
The invention further adopts the technical scheme that: in step 1, when the collected underwater target data is preprocessed, noise addition, image rotation, and image flipping are performed on the data to achieve data augmentation.
The invention further adopts the technical scheme that: the underwater camera in the step 1 is a Chengyou high-definition network underwater camera SW01.
The invention further adopts the technical scheme that: in step 2, when the traditional target detection method is applied, the bounding box of each detected target feature point is expanded outward by 20 pixels on all four sides before cropping, which narrows the detection range of the traditional method and improves detection speed.
The invention further adopts the technical scheme that: when an underwater target positioning experiment is carried out, the camera must be recalibrated underwater to obtain its underwater intrinsic parameters R_C.
Effects of the invention
The invention has the technical effects that:
(1) The traditional target detection method based on color and shape and the target detection method based on deep learning are fused, so that the accurate and rapid detection of the underwater target is realized.
(2) The artificial underwater cooperative target is designed, real-time positioning is realized according to the geometric information of the designed target characteristic points, and the position information and the angle information (yaw angle, pitch angle and roll angle) of the target are calculated.
Drawings
FIG. 1 is a schematic diagram of an algorithm flow according to the present invention
FIG. 2 is a schematic representation of an artificial underwater cooperative target in accordance with the present invention
FIG. 3 is a schematic view of an underwater camera used in the present invention
FIG. 4 is a schematic flow chart of an artificial underwater target detection algorithm according to the present invention
FIG. 5 is a pinhole imaging model
Detailed Description
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Referring to fig. 1 to 5, the technical scheme of the invention is as follows:
An underwater artificial target detection and positioning method based on vision comprises the following steps:
step 1: according to the application requirement, designing the artificial underwater cooperative target shown in the figure 2, wherein the number of target characteristic points is n (n is more than or equal to 4), and patterns formed by n target characteristic points are required to be asymmetric so as to judge the positive and negative values of the calculated attitude angles. And carrying out underwater data acquisition and data preprocessing on the target so as to carry out subsequent training. The adopted underwater camera is shown in figure 3, and the model is Chengyou high-definition network underwater camera SW01.
Step 2: and detecting the underwater target, wherein the detection comprises a coarse detection part and a fine detection part. Firstly, performing coarse detection on underwater target feature points by adopting a target detection algorithm based on deep learning to obtain two-dimensional bounding boxes of all the feature points in an image, and then performing fine detection on the feature point bounding boxes obtained by the coarse detection by combining a traditional target detection method to obtain pixel coordinates of the centers of all the target feature points in the image.
Step 2.1: firstly, training an acquired underwater target image, roughly detecting the underwater target through a target detection method based on deep learning, and detecting target characteristic points through a surrounding frame (u i,vi,wi,hi) of output target characteristic points in a similar process to Yolov, wherein u i,vi respectively represents the left upper corner coordinates of an ith characteristic point surrounding frame, and the width and the height of the ith characteristic point surrounding frame of w i,hi.
Step 2.2: in order to improve the subsequent positioning accuracy, a traditional detection method is adopted to carry out fine detection operation on the detection result obtained by deep learning. Firstly, expanding the bounding boxes of the characteristic points obtained through coarse detection to four sides by 20 pixels respectively, and then cutting the bounding boxes so as to avoid the situation that the bounding boxes do not completely enclose the characteristic points; after gray level processing, performing self-adaptive binarization on the cut target image, and then performing morphological operation and circular edge detection on the binarization result; and obtaining accurate pixel coordinates of the center of each target characteristic point in the image by carrying out region selection and roundness screening on the edge profile.
Step 3: and (3) obtaining pixel coordinates of each underwater target characteristic point in the image through target detection in the step (2), and positioning the underwater target according to the geometric relationship among the characteristic points.
Step 3.1: after the two-dimensional pixel coordinates of each feature point are obtained, firstly, the obtained pixel coordinates of the centers of the feature points are ordered according to the geometric relation information among the feature points, so that each feature point can be in one-to-one correspondence with the coordinates of each feature point.
Step 3.2: the two-dimensional pixel coordinates P { P i(Ui,Vi) of 5 feature points and the three-dimensional coordinates (X Ti,YTi,ZTi) of the feature points in the target coordinate system are known. The coordinates of the feature point under the camera coordinate system are (X Ci,YCi,ZCi), and the position and the gesture of the target under the camera are respectively represented by an offset matrix T and a rotation matrix R, and then the position and the gesture of the target under the camera are represented by
Wherein R C is a camera internal reference.
Since Z_Ci is unknown under a monocular camera, to obtain T and R, the coordinates of the feature points in the camera coordinate system are first calculated using the pinhole imaging model shown in fig. 5.
In fig. 5, O is the camera origin; OA, OB, OC denote the distances from O to the spatial feature points and are used to calculate their camera coordinates. A, B, C, D are spatial feature points, and a, b, c, d are the corresponding image points. From the geometric relationship, the law of cosines gives:

OA² + OB² − 2·OA·OB·cos<a,b> = AB²
OA² + OC² − 2·OA·OC·cos<a,c> = AC²
OB² + OC² − 2·OB·OC·cos<b,c> = BC²

Let x = OB/OA and y = OC/OA; then:

1 + x² − 2x·cos<a,b> = AB²/OA²
1 + y² − 2y·cos<a,c> = AC²/OA²
x² + y² − 2xy·cos<b,c> = BC²/OA²

Let v = BC²/AB² and w = AC²/AB²; eliminating OA² gives the binary quadratic equations in x and y:

(1 − v)x² + y² − 2xy·cos<b,c> + 2vx·cos<a,b> − v = 0
−wx² + y² + 2wx·cos<a,b> − 2y·cos<a,c> + 1 − w = 0
AB, AC, BC can be calculated from the coordinates of the feature points in the target coordinate system, so v and w can be calculated; a, b, c are the image points detected by the target detection algorithm, so cos<a,b>, cos<a,c>, cos<b,c> can be calculated. Solving the binary quadratic equations gives OA, OB, OC, from which the camera coordinates of A, B, C are calculated. Because the equations have four sets of solutions, the reprojection error is calculated using point D, and the solution with the minimum error is taken as the real pose, giving the position and three attitude angles of the underwater target.
In step 1, when the collected underwater target data is preprocessed, operations such as noise addition, image rotation, and image flipping are performed on the data to achieve data augmentation.
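The augmentation described above can be sketched in a few lines. This numpy-only sketch is illustrative; the noise standard deviation (10 gray levels) and the particular rotation/flip choices are assumptions.

```python
import numpy as np

def augment(image, rng=None):
    """Return noisy, rotated, and flipped variants of an HxWx3 uint8 image."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Additive Gaussian noise, clipped back into the valid 8-bit range.
    noise = rng.normal(0.0, 10.0, image.shape)
    noisy = np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    rotated = np.rot90(image)   # 90-degree rotation
    flipped = image[:, ::-1]    # horizontal flip
    return noisy, rotated, flipped
```

When augmenting detection data, the feature-point bounding boxes must of course be transformed together with the images.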
In step 2, when the traditional target detection method is applied, the bounding box of each detected target feature point is expanded outward by 20 pixels on all four sides before cropping, which narrows the detection range of the traditional method and improves detection speed.
In step 3, when an underwater target positioning experiment is performed, the camera must be recalibrated underwater to obtain its underwater intrinsic parameters R_C.
The invention aims to improve the accuracy and speed of underwater target detection and positioning and improve the underwater operation efficiency, and a flow chart of the invention is shown in figure 1. Referring to fig. 2 to 5, embodiments of the present invention are as follows:
step 1: according to the application requirement, designing an artificial underwater cooperative target as shown in fig. 2, and carrying out underwater data acquisition and data preprocessing on the target for subsequent training. The adopted underwater camera is shown in figure 3, and the model is Chengyou high-definition network underwater camera SW01.
Step 2: and detecting the underwater target, wherein the detection comprises a coarse detection part and a fine detection part. Firstly, performing coarse detection on underwater target feature points by adopting a target detection algorithm based on deep learning to obtain two-dimensional bounding boxes of all the feature points in an image, and then performing fine detection on the feature point bounding boxes obtained by the coarse detection by combining a traditional target detection method to obtain pixel coordinates of the centers of all the target feature points in the image.
Step 2.1: firstly, training an acquired underwater target image, roughly detecting the underwater target by a target detection method based on deep learning, and detecting target feature points by outputting a bounding box (u i,vi,wi,hi) of the target feature points in a similar process to Yolov, wherein u i,vi respectively represents the left upper corner coordinate of an ith feature point bounding box and the width and the height of the ith feature point bounding box of w i,hi.
Step 2.2: in order to improve the subsequent positioning accuracy, a traditional detection method is adopted to carry out fine detection operation on the detection result obtained by deep learning. Firstly, expanding the bounding boxes of the characteristic points obtained through coarse detection to four sides by 20 pixels respectively, and then cutting the bounding boxes so as to avoid the situation that the bounding boxes do not completely enclose the characteristic points; after gray level processing, performing self-adaptive binarization on the cut target image, and then performing morphological operation and circular edge detection on the binarization result; and obtaining accurate pixel coordinates of the center of each target characteristic point in the image by carrying out region selection and roundness screening on the edge profile.
The underwater target data set used for training was captured in a water tank; the images in the data set are 1920×1080 pixels. In the experiment, a metal box was used in place of the underwater information acquisition equipment, with the designed artificial underwater target affixed to its upper surface; all feature points of the underwater target lie in the same plane, and the target material is a blue waterproof sticker. Because the designed underwater artificial target involves few detection categories, only 420 underwater target images were collected for training.
The traditional detection method, deep-learning-based detection, and the detection method of the invention were applied to the same underwater target images and the detection results compared. In the same underwater scene, the traditional detection result is strongly affected by illumination conditions, and false detections or missed targets may occur. Although the deep learning method is more stable and faster than the traditional method, its detection result is not as accurate. The detection algorithm of the invention combines environmental robustness with detection precision without increasing detection time, and is therefore better suited to underwater operation.
Step 3: and (3) obtaining pixel coordinates of each underwater target characteristic point in the image through target detection in the step (2), and positioning the underwater target according to the geometric relationship among the characteristic points.
Step 3.1: after the two-dimensional pixel coordinates of each feature point are obtained, firstly, the obtained pixel coordinates of the centers of the feature points are ordered according to the geometric relation information among the feature points, so that each feature point can be in one-to-one correspondence with the coordinates of each feature point.
Taking the 5 feature points of the artificial underwater cooperative target of the invention as an example, the set of detected center pixel coordinates is P = {p_i(u_i, v_i), i = 1, 2, …, 5}, and the 5 feature point coordinates obtained after ordering are, in sequence, P = {P_i(U_i, V_i), i = A, B, …, E}. The ordering procedure is as follows:
(1) Choosing 3 of the 5 center points p_i gives 10 combinations in total. Among these, the combination whose three points are collinear is denoted lisl; lisl contains {P_C, P_D, P_E};
(2) Choosing 2 of the 5 center points p_i gives 10 combinations in total. Among these, the combination of the two points with the longest distance is denoted lisd; lisd contains {P_A, P_E};
(3) The point in P that belongs to neither lisl nor lisd is P_B;
(4) The common point of lisl and lisd is P_E, and the other point of lisd is P_A;
(5) The set lisl with P_E removed is denoted lisCD and contains the points {P_C, P_D};
(6) The point of lisCD closer to P_E is P_C, and the farther one is P_D, so that the ordered coordinates of the 5 feature points are, in sequence, P = {P_i(U_i, V_i), i = A, B, …, E}.
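Steps (1)-(6) above map directly to code. This is an illustrative numpy sketch; the function name and the collinearity tolerance `col_tol` are assumptions.

```python
import itertools
import numpy as np

def order_feature_points(pts, col_tol=1e-6):
    """Order 5 detected center points into (P_A, ..., P_E) using the
    collinearity / distance rules of steps (1)-(6). `pts`: list of (u, v)."""
    pts = np.asarray(pts, dtype=float)
    idx = range(5)

    def area2(i, j, k):  # twice the triangle area; zero when collinear
        (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[j], pts[k]
        return abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))

    # (1) the collinear triple -> lisl = {C, D, E}
    lisl = next(set(c) for c in itertools.combinations(idx, 3)
                if area2(*c) <= col_tol)
    # (2) the farthest pair -> lisd = {A, E}
    lisd = set(max(itertools.combinations(idx, 2),
                   key=lambda p: np.linalg.norm(pts[p[0]] - pts[p[1]])))
    # (3) B is in neither combination
    B = (set(idx) - lisl - lisd).pop()
    # (4) E is common to both; A is the other point of lisd
    E = (lisl & lisd).pop()
    A = (lisd - {E}).pop()
    # (5)-(6) of the remaining collinear points, C is closer to E than D
    C, D = sorted(lisl - {E}, key=lambda i: np.linalg.norm(pts[i] - pts[E]))
    return pts[[A, B, C, D, E]]
```

For real detections the tolerance should be a few squared pixels rather than near-zero, since the detected centers of nominally collinear points never lie exactly on one line.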
Step 3.2: two-dimensional pixel coordinates P { P i(Ui,Vi), (i=a, B, …, E) } of 5 feature points and three-dimensional coordinates (X Ti,YTi,ZTi), (i=a, B, …, E) of feature points in the target coordinate system are known. The coordinates of the feature points in the camera coordinate system are (X Ci,YCi,ZCi), (i=a, B, …, E), and the position and posture of the object under the camera are represented by an offset matrix T and a rotation matrix R respectively, and then there are
Wherein R C is a camera internal reference.
Since Z_Ci is unknown under a monocular camera, to obtain T and R, the coordinates of the feature points in the camera coordinate system are first calculated using the pinhole imaging model shown in fig. 5.
In fig. 5, O is the camera origin; OA, OB, OC denote the distances from O to the spatial feature points and are used to calculate their camera coordinates. A, B, C, D are spatial feature points, and a, b, c, d are the corresponding image points. From the geometric relationship, the law of cosines gives:

OA² + OB² − 2·OA·OB·cos<a,b> = AB²
OA² + OC² − 2·OA·OC·cos<a,c> = AC²
OB² + OC² − 2·OB·OC·cos<b,c> = BC²

Let x = OB/OA and y = OC/OA; then:

1 + x² − 2x·cos<a,b> = AB²/OA²
1 + y² − 2y·cos<a,c> = AC²/OA²
x² + y² − 2xy·cos<b,c> = BC²/OA²

Let v = BC²/AB² and w = AC²/AB²; eliminating OA² gives the binary quadratic equations in x and y:

(1 − v)x² + y² − 2xy·cos<b,c> + 2vx·cos<a,b> − v = 0
−wx² + y² + 2wx·cos<a,b> − 2y·cos<a,c> + 1 − w = 0
AB, AC, BC can be calculated from the coordinates of the feature points in the target coordinate system, so v and w can be calculated; a, b, c are the image points detected by the target detection algorithm, so cos<a,b>, cos<a,c>, cos<b,c> can be calculated. Solving the binary quadratic equations gives OA, OB, OC, from which the camera coordinates of A, B, C are calculated. Because the equations have four sets of solutions, the reprojection error is calculated using point D, and the solution with the minimum error is taken as the real pose, giving the position and three attitude angles of the underwater target.

Claims (4)

1. The visual-based underwater artificial target detection and positioning method is characterized by comprising the following steps of:
Step 1: defining the number of target feature points of the artificial underwater cooperative target as n, wherein n is more than or equal to 4, and the pattern formed by n target feature points is required to be asymmetric; carrying out underwater data acquisition and data preprocessing on the target so as to carry out subsequent training;
step 2: the method for detecting the underwater target comprises a coarse detection part and a fine detection part, and specifically comprises the following substeps:
Step 2.1: firstly, training an acquired underwater target image, roughly detecting an underwater target through a target detection method based on deep learning, and detecting target feature points through a bounding box (u i,vi,wi,hi) of output target feature points, wherein u i,vi respectively represents the left upper corner coordinates of an ith feature point bounding box, and the width and the height of the ith feature point bounding box of w i,hi;
Step 2.2: respectively expanding pixels to four sides of a surrounding frame of each characteristic point obtained by coarse detection, then cutting, carrying out self-adaptive binarization on a cut target image after gray level processing, and then carrying out morphological operation and circular edge detection on a binarization result; obtaining accurate pixel coordinates of the center of each target feature point in the image by carrying out region selection and roundness screening on the edge profile;
Step 3: obtaining pixel coordinates of each underwater target feature point in the image through the target detection in the step 2, and positioning the underwater target according to the geometric relationship among the feature points, wherein the method comprises the following substeps:
Step 3.1: after obtaining the two-dimensional pixel coordinates of the feature points, first sort the detected center pixel coordinates according to the geometric relationships among the feature points, so that each detected point corresponds one-to-one with its known target-frame coordinates;
Step 3.2: the two-dimensional pixel coordinates P{P_i(U_i, V_i)} of the feature points and their three-dimensional coordinates (X_Ti, Y_Ti, Z_Ti) in the target coordinate system are known; let (X_Ci, Y_Ci, Z_Ci) be the coordinates of a feature point in the camera coordinate system, and let the translation matrix T and rotation matrix R represent the position and attitude of the target relative to the camera; the projection relation is then

Z_Ci · [U_i, V_i, 1]^T = R_C · (R · [X_Ti, Y_Ti, Z_Ti]^T + T)

where R_C is the camera intrinsic matrix;
Since Z_Ci is unknown under a monocular camera, to obtain T and R the pinhole imaging model is used first to calculate the feature-point coordinates in the camera coordinate system;
Define O as the camera origin, A, B, C, D as the spatial feature points with image points a, b, c, d, and OA, OB, OC as the distances from O to A, B, C; by the geometric relationship, the law-of-cosines equations are:

OA^2 + OB^2 - 2·OA·OB·cos&lt;a,b&gt; = AB^2
OA^2 + OC^2 - 2·OA·OC·cos&lt;a,c&gt; = AC^2
OB^2 + OC^2 - 2·OB·OC·cos&lt;b,c&gt; = BC^2
Let x = OB/OA and y = OC/OA; dividing the three equations by OA^2 gives

1 + x^2 - 2x·cos&lt;a,b&gt; = AB^2/OA^2
1 + y^2 - 2y·cos&lt;a,c&gt; = AC^2/OA^2
x^2 + y^2 - 2xy·cos&lt;b,c&gt; = BC^2/OA^2

Let v = BC^2/AB^2 and w = AC^2/AB^2; eliminating OA yields the following system of two quadratic equations in x and y:

x^2 + y^2 - 2xy·cos&lt;b,c&gt; - v·(1 + x^2 - 2x·cos&lt;a,b&gt;) = 0
1 + y^2 - 2y·cos&lt;a,c&gt; - w·(1 + x^2 - 2x·cos&lt;a,b&gt;) = 0
AB, AC and BC are obtained from the coordinates of the feature points in the target coordinate system, so v and w can be calculated; a, b and c are the image points detected by the target detection algorithm, from which cos&lt;a,b&gt;, cos&lt;a,c&gt; and cos&lt;b,c&gt; are obtained; solving the system of two quadratic equations yields OA, OB and OC, and from these the camera-frame coordinates of A, B and C are calculated; because the system has four sets of solutions, the reprojection error at point D is computed for each, and the solution with the minimum error gives the true pose, namely the position and three attitude angles of the underwater target.
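As a sanity check on the law-of-cosines system in step 3.2, the NumPy sketch below (an illustration, not part of the claimed method) verifies that the true distances OA, OB, OC and the target-frame side lengths AB, AC, BC satisfy the three cosine equations for any rigid placement of the target in front of the camera:

```python
import numpy as np

def cosine_residuals(P_cam, P_tgt):
    """Residuals of the P3P law-of-cosines system for points A, B, C.
    P_cam: 3x3 array, rows = camera-frame coordinates of A, B, C.
    P_tgt: 3x3 array, rows = target-frame coordinates (gives AB, AC, BC)."""
    d = np.linalg.norm(P_cam, axis=1)          # distances OA, OB, OC
    rays = P_cam / d[:, None]                  # unit bearing vectors
    res = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        cos_ij = rays[i] @ rays[j]             # cos<a,b>, cos<a,c>, cos<b,c>
        side = np.linalg.norm(P_tgt[i] - P_tgt[j])   # AB, AC, BC
        res.append(d[i]**2 + d[j]**2 - 2 * d[i] * d[j] * cos_ij - side**2)
    return np.array(res)
```

Because a rigid transform preserves inter-point distances, the residuals vanish for the true pose; the positioning step searches for the (OA, OB, OC) that drives them to zero.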
2. The vision-based method for accurately detecting and positioning an underwater artificial target according to claim 1, wherein in step 1, when preprocessing the collected underwater target data, noise addition, image rotation and image flipping are applied to the data, thereby achieving data augmentation.
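The augmentation operations named in claim 2 can be sketched with NumPy alone, as below. The noise standard deviation and the choice of 90-degree rotations are illustrative assumptions, not values from the patent:

```python
import numpy as np

def augment(img, rng):
    """Yield augmented copies of a grayscale image array:
    additive Gaussian noise, 90/180/270-degree rotations, and flips."""
    noisy = img.astype(np.float64) + rng.normal(0.0, 10.0, img.shape)
    yield np.clip(noisy, 0, 255).astype(img.dtype)   # noise addition
    for k in (1, 2, 3):                              # image rotation
        yield np.rot90(img, k)
    yield np.fliplr(img)                             # horizontal flip
    yield np.flipud(img)                             # vertical flip
```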
3. The vision-based method for accurately detecting and positioning an underwater artificial target according to claim 1, wherein in step 2, before the conventional (fine) detection is performed, the bounding box of each detected feature point is first expanded by 20 pixels on each side and then cropped, which narrows the detection range of the conventional method and speeds up detection.
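The expand-and-crop step of claim 3 amounts to a few lines of array slicing; a minimal sketch follows (the clamping of the expanded box to the image bounds is an assumption the patent leaves implicit):

```python
import numpy as np

def expand_and_crop(img, box, margin=20):
    """Expand a (u, v, w, h) top-left bounding box by `margin` pixels
    on each side, clamp it to the image bounds, and crop the patch."""
    H, W = img.shape[:2]
    u, v, w, h = box
    u0, v0 = max(u - margin, 0), max(v - margin, 0)
    u1, v1 = min(u + w + margin, W), min(v + h + margin, H)
    return img[v0:v1, u0:u1]
```

Running the fine-detection stage on these small patches instead of the full frame is what yields the claimed speedup.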
4. The vision-based method for accurately detecting and positioning an underwater artificial target according to claim 1, wherein in step 3, when the underwater target positioning experiment is performed, the camera must be recalibrated underwater to obtain its underwater intrinsic matrix R_C.
CN202110682252.7A 2021-06-20 2021-06-20 Underwater artificial target accurate detection and positioning method based on vision Active CN113313116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110682252.7A CN113313116B (en) 2021-06-20 2021-06-20 Underwater artificial target accurate detection and positioning method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110682252.7A CN113313116B (en) 2021-06-20 2021-06-20 Underwater artificial target accurate detection and positioning method based on vision

Publications (2)

Publication Number Publication Date
CN113313116A CN113313116A (en) 2021-08-27
CN113313116B true CN113313116B (en) 2024-06-21

Family

ID=77379575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110682252.7A Active CN113313116B (en) 2021-06-20 2021-06-20 Underwater artificial target accurate detection and positioning method based on vision

Country Status (1)

Country Link
CN (1) CN113313116B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115031627B (en) * 2022-05-07 2024-04-30 西北工业大学 Method for realizing visual perception among individuals in underwater cluster
CN114998714B (en) * 2022-06-09 2024-05-10 电子科技大学 Underwater node positioning device and method based on deep learning image detection
CN115296738B (en) * 2022-07-28 2024-04-16 吉林大学 Deep learning-based unmanned aerial vehicle visible light camera communication method and system

Citations (1)

Publication number Priority date Publication date Assignee Title
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN104748746B (en) * 2013-12-29 2017-11-03 刘进 Intelligent machine attitude determination and virtual reality loaming method
CN109101897A (en) * 2018-07-20 2018-12-28 中国科学院自动化研究所 Object detection method, system and the relevant device of underwater robot
CN109376785B (en) * 2018-10-31 2021-09-24 东南大学 Navigation method based on iterative extended Kalman filtering fusion inertia and monocular vision
CN110826575A (en) * 2019-12-13 2020-02-21 哈尔滨工程大学 Underwater target identification method based on machine learning
CN111721259B (en) * 2020-06-24 2022-05-03 江苏科技大学 Underwater robot recovery positioning method based on binocular vision
CN111915678B (en) * 2020-07-17 2021-04-27 哈尔滨工程大学 Underwater monocular vision target depth positioning fusion estimation method based on depth learning

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method

Non-Patent Citations (1)

Title
Infrared dim and small target detection algorithm based on weighted fusion features and Otsu segmentation; Liu Kun; Liu Weidong; Computer Engineering; 2017-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN113313116A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN109785291B (en) Lane line self-adaptive detection method
CN105225230B (en) A kind of method and device of identification foreground target object
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN108981672A (en) Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN109143247B (en) Three-eye underwater detection method for acousto-optic imaging
CN109308718B (en) Space personnel positioning device and method based on multiple depth cameras
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN110189375B (en) Image target identification method based on monocular vision measurement
CN106780560B (en) Bionic robot fish visual tracking method based on feature fusion particle filtering
CN110070557A (en) A kind of target identification and localization method based on edge feature detection
CN111784655B (en) Underwater robot recycling and positioning method
CN111721259A (en) Underwater robot recovery positioning method based on binocular vision
CN111612765A (en) Method for identifying and positioning circular transparent lens
CN117036641A (en) Road scene three-dimensional reconstruction and defect detection method based on binocular vision
CN111524233A (en) Three-dimensional reconstruction method for dynamic target of static scene
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN115830018A (en) Carbon block detection method and system based on deep learning and binocular vision
Li et al. Vision-based target detection and positioning approach for underwater robots
CN111080685A (en) Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN111881878A (en) Lane line identification method for look-around multiplexing
CN107330436B (en) Scale criterion-based panoramic image SIFT optimization method
CN115497073A (en) Real-time obstacle camera detection method based on fusion of vehicle-mounted camera and laser radar
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant