CN112977764A - Underwater robot course control system and method based on vision

Underwater robot course control system and method based on vision

Info

Publication number
CN112977764A
CN112977764A (application CN202011608465.7A)
Authority
CN
China
Prior art keywords
robot
course
camera
led lamp
floating
Prior art date
Legal status
Pending
Application number
CN202011608465.7A
Other languages
Chinese (zh)
Inventor
管朝鹏
吴东栋
陈姝
Current Assignee
Research Institute of Nuclear Power Operation
Original Assignee
Research Institute of Nuclear Power Operation
Priority date
Filing date
Publication date
Application filed by Research Institute of Nuclear Power Operation
Priority to CN202011608465.7A
Publication of CN112977764A
Status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B63 - SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C - LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C11/00 - Equipment for dwelling or working underwater; Means for searching for underwater objects
    • B63C11/52 - Tools specially adapted for working underwater, not otherwise provided for
    • B63G - OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00 - Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/001 - Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • B63G8/14 - Control of attitude or depth
    • B63G8/20 - Steering equipment
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Ocean & Marine Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of underwater robots, and specifically relates to a vision-based system and method for controlling the course of an underwater floating robot. The control system comprises a floating robot, a control computer and a camera; the control computer is connected to the robot driving control module through a control network cable, and the robot driving control module is connected to the floating robot through a robot umbilical cable; the control computer is connected to the camera driving decoding module through a video network cable, and the camera driving decoding module is connected to the camera through a video transmission cable. The control method comprises the steps of determining a target course, determining the light spot positions in the image, calculating the current course, comparing the target course with the current course, judging the result, and adjusting the course of the floating robot accordingly. Advantages: the camera images the LED lamps carried on the robot body, so the robot's course information can be extracted quickly and fixed-course control achieved, realizing efficient underwater operation of the floating robot.

Description

Underwater robot course control system and method based on vision
Technical Field
The invention belongs to the technical field of underwater robots, and particularly relates to a system and a method for controlling the course of an underwater floating robot based on vision.
Background
An underwater robot is a device that can move underwater, carries vision and perception systems, operates by remote control or autonomously, and can use a manipulator or other tools to perform underwater tasks in place of, or alongside, a human operator. At present, underwater robots are mainly applied in the marine field, for example in marine scientific research, oil exploitation, salvage operations, inspection of submarine pipelines and cables, and marine aquaculture, and they also see some use in river and lake management, urban waterway management and similar fields. Because marine operating environments are vast, diving depths are large and sea conditions can be rough, marine underwater robots are generally large and heavy, expensive, and costly to maintain when equipment faults occur. The underwater operating environment of a nuclear power station differs greatly from marine operation: the underwater spaces are generally narrow, the internal components are complex, and the geomagnetic field is shielded, so conventional underwater robots cannot be applied directly to operations in the underwater environment of a nuclear power station.
In a nuclear power environment, the stability of the robot's motion determines whether accurate and efficient underwater operation is possible, and course control and depth control are the basis of stable motion in the horizontal and vertical directions respectively. When operating in the ocean or in rivers and lakes, an underwater robot usually holds its course either with GPS signals or with inertial dead reckoning. The GPS-based approach has a large positioning error and is only usable for large-range movement at sea; moreover, a nuclear power environment is generally enclosed by thick metal or concrete structures that shield or attenuate GPS signals, so GPS positioning in the nuclear power underwater environment is essentially impossible. Dead reckoning integrates acceleration and angular rate, so its error grows over time and cannot be corrected by external information along the way, which also makes it difficult to apply in the nuclear power underwater environment. A course control system and method suited to the underwater floating robot is therefore needed.
Disclosure of Invention
The invention aims to provide a vision-based course control system and method for an underwater floating robot, addressing the technical problem that, when a floating underwater robot operates in the underwater environments of a nuclear power station such as the reactor pressure vessel and the spent fuel pool, the robot's attitude sensor cannot provide accurate course information, both because steel structures shield and weaken the geomagnetic field and because of the sensor's own characteristic defects.
The technical scheme of the invention is as follows:
a vision-based course control system of an underwater floating robot comprises a floating robot, a control computer and a video acquisition device; the top of the floating robot is provided with a light source, and the video acquisition device is arranged on the edge of the water pool; the control computer is connected with the robot driving control module through a control network cable, and the robot driving control module is connected with the floating robot through a robot umbilical cable; the control computer is connected with the camera driving decoding module through a video network cable, and the camera driving decoding module is connected with the video acquisition device through a video transmission cable.
Furthermore, one or more thrusters are arranged in each of the horizontal and vertical directions of the floating robot, enabling the floating robot to translate horizontally, float and sink, turn its bow, hold depth and hold course.
Furthermore, the floating robot as a whole is adjusted to a suspended (neutrally buoyant) or positively buoyant state by means of counterweights.
Further, the floating robot control instructions sent by the control computer are converted by the robot driving control module into the power output of each thruster; the video images collected by the video acquisition device are decoded by the camera driving decoding module and output to the control computer.
Furthermore, the light source comprises a first LED lamp and a second LED lamp, each packaged in an independent watertight structure and each containing a driving circuit that controls the on-off (lighting and extinguishing) frequency of its LED lamp.
Further, the video acquisition device is a camera.
Further, the lighting and extinguishing time interval of the first LED lamp is smaller than the exposure time period of the camera, and the lighting and extinguishing time interval of the second LED lamp is larger than the exposure time period of the camera.
Further, the installation spacing L between the first LED lamp and the second LED lamp satisfies L > 2·L_max, where L_max is the maximum displacement of a single LED lamp between adjacent image frames, calculated as L_max = V_Rmax · T_c, with V_Rmax the maximum underwater movement speed of the floating robot and T_c the time interval at which the camera captures successive image frames.
The invention also provides a course control method using the above vision-based underwater floating robot course control system, which comprises the following steps in sequence:
S1, a course controller in the control computer receives target course information for the floating robot;
S2, the camera captures images and sends them to the camera driving decoding module, which extracts the light spot positions corresponding to the LED lamps in the picture;
S3, the control computer calculates the current course of the floating robot from the installation position of the camera, the light spot positions obtained in S2 and the structural dimensions of the water pool;
S4, the course controller compares the target course information received in S1 with the current course calculated in S3 and judges whether the current course equals the target course;
S5, the floating robot selects an execution path according to the judgment result of S4: when the target course equals the current course, the floating robot has completed course positioning control; when the target course does not equal the current course, the course controller calculates the horizontal thrust allocation of the floating robot's thrusters and drives them to adjust the course of the floating robot;
S6, steps S2 to S5 are executed repeatedly until the floating robot completes course positioning control.
Further, in S2 the camera captures images and the light spot positions in the picture are extracted by the following steps:
S2.1, the camera positioned above the water pool captures video images of the floating robot and the first and second LED lamps carried on it;
S2.2, the video images captured by the camera are transmitted to the camera driving decoding module through the video transmission cable, and the camera driving decoding module extracts the light spot positions in the video images with a built-in light spot identification and extraction algorithm;
Assume that, after conversion to two-dimensional coordinates, the light spots of the first and second LED lamps captured by the camera in the current image frame are at P1(x1, y1) and P2(x2, y2), and that in the next frame the positions of the first and second LED lamps become P3(x3, y3) and P4(x4, y4). The distances from P3 and P4 to P1 and P2 are calculated separately:
L_{P_1P_3} = \sqrt{(x_3 - x_1)^2 + (y_3 - y_1)^2}
L_{P_1P_4} = \sqrt{(x_4 - x_1)^2 + (y_4 - y_1)^2}
L_{P_2P_3} = \sqrt{(x_3 - x_2)^2 + (y_3 - y_2)^2}
L_{P_2P_4} = \sqrt{(x_4 - x_2)^2 + (y_4 - y_2)^2}
Thus, of L_{P_1P_3} and L_{P_1P_4}, the distance smaller than L_max identifies the position of the point P1 in the adjacent image frame; similarly, of L_{P_2P_3} and L_{P_2P_4}, the distance smaller than L_max identifies the position of the point P2 in the adjacent image frame;
S2.3, the camera driving decoding module sends the light spot position information obtained in S2.2 to the control computer through the video network cable.
The invention has the beneficial effects that:
according to the invention, the camera is used for shooting the LED lamp carried on the robot body, so that the course information of the robot can be rapidly extracted, the course-fixed control of the robot is realized, and the efficient underwater operation of the floating robot is realized. The invention can realize the accurate course positioning control of the underwater floating robot in the nuclear power special environment, improve the motion control performance of the underwater robot and greatly improve the underwater operation efficiency.
Drawings
FIG. 1 is a schematic structural diagram of a course control system of an underwater floating robot based on vision of the invention;
FIG. 2 is a schematic diagram of LED light spot detection in the course control system of the vision-based underwater floating robot of the present invention;
FIG. 3 is a schematic diagram of course control in the vision-based underwater floating robot course control system of the present invention;
FIG. 4 is a schematic flow chart of a course control method of the vision-based underwater floating robot.
In the figure: the system comprises 1-a first LED lamp, 2-a second LED lamp, 3-a floating robot, 4-a camera, 5-a robot driving control module, 6-a camera driving decoding module, 7-a control computer, 8-a water pool, 9-a robot umbilical cable, 10-a video transmission cable, 11-a control network cable and 12-a video network cable.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
This embodiment provides a vision-based underwater floating robot course control system, the overall structure of which is shown in FIG. 1. It comprises a first LED lamp 1 and a second LED lamp 2, a floating robot 3, a camera 4, a robot driving control module 5, a camera driving decoding module 6, a control computer 7, a water pool 8, a robot umbilical cable 9, a video transmission cable 10, a control network cable 11 and a video network cable 12. Two LED lamps serving as light sources are mounted on top of the floating robot 3: the first LED lamp 1 and the second LED lamp 2 are fixed on the top of the floating robot 3 at a spacing L, and the camera 4 is mounted on the upper edge of the water pool 8. The control computer 7 is connected to the robot driving control module 5 through the control network cable 11, and the robot driving control module 5 is connected to the floating robot 3 through the robot umbilical cable 9; the control computer 7 is connected to the camera driving decoding module 6 through the video network cable 12, and the camera driving decoding module 6 is connected to the camera 4 through the video transmission cable 10.
The first LED lamp 1 and the second LED lamp 2 are each packaged in an independent watertight structure, and their internal driving circuits distinguish the two lamps by controlling the on-off frequency of each LED lamp.
Four thrusters are arranged in the horizontal direction of the floating robot 3 and two in the vertical direction; the robot as a whole is adjusted to a slightly positive buoyancy state with counterweights. Under the combined action of the six thrusters, the robot can translate horizontally, float and sink, turn its bow, hold depth and hold course. Because the density of the water in which the floating robot 3 actually operates varies (fresh water, sea water, boric acid water, etc.), the floating robot 3 is generally adjusted to a suspended state by adding counterweights before each launch; slightly positive buoyancy is then obtained by removing part of the counterweight.
The floating robot control instructions sent by the control computer 7 are converted by the robot driving control module 5 into the power output of each thruster, realizing the robot's various motion functions. The video images collected by the camera 4 are decoded by the camera driving decoding module 6 and output to the control computer 7; a light spot identification and extraction algorithm then yields the positions of the two LED lamps in each image frame, and the course change of the floating robot can be calculated from the change in the LED light spot positions between frames.
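The patent does not disclose the internals of the built-in light spot identification and extraction algorithm. As a minimal illustrative sketch (Python with OpenCV; the function name and threshold values below are our own assumptions, not part of the disclosure), bright spot centroids can be obtained by thresholding followed by connected-component analysis:

import cv2

def extract_spot_centroids(frame_bgr, thresh=220, min_area=5):
    """Return (x, y) pixel centroids of bright blobs in one video frame.
    Sketch only: assumes the LED spots are near-saturated blobs far
    brighter than the water background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only near-saturated pixels; the LED spots dominate the image.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; drop tiny components (noise, reflections).
    return [tuple(centroids[i]) for i in range(1, n_labels)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]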
The difficulty of course positioning lies in extracting and identifying the positions of the LED lamps. The on-off interval of the first LED lamp 1 is set smaller than the exposure period of the camera 4, and the on-off interval of the second LED lamp 2 is set larger than the exposure period of the camera 4. As a result, in the imaging field of view of the camera 4 the first LED lamp 1 always presents a bright light spot, while with a suitably chosen strobe period the imaging light spots of the second LED lamp 2 appear at periodic intervals.
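One way this strobe scheme can be exploited in software is sketched below (illustrative only, not taken from the patent): spots are chained frame to frame, the track present in every frame is taken as the first LED lamp 1 and the intermittent track as the second LED lamp 2. The matching threshold is the L_max bound introduced below; all names are ours.

import math

def classify_leds(per_frame_spots, l_max_px):
    """per_frame_spots: one list of (x, y) spots per consecutive frame.
    Returns the latest positions of (always-on LED 1, strobed LED 2)."""
    tracks = []  # each track: {"last": (x, y), "hits": frames seen}
    for spots in per_frame_spots:
        for (x, y) in spots:
            for t in tracks:
                if math.hypot(x - t["last"][0], y - t["last"][1]) < l_max_px:
                    t["last"], t["hits"] = (x, y), t["hits"] + 1
                    break
            else:
                tracks.append({"last": (x, y), "hits": 1})
    # LED 1 integrates into a spot in every frame, LED 2 only in some.
    # NOTE: if LED 2 was dark for k frames, a stricter implementation would
    # relax its re-detection threshold to roughly k * l_max_px.
    tracks.sort(key=lambda t: t["hits"], reverse=True)
    return tracks[0]["last"], tracks[1]["last"]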
As shown in FIG. 2, assume that, after conversion to two-dimensional coordinates, the LED light spots captured by the camera 4 in the current image frame are at P1(x1, y1) and P2(x2, y2), and that in the next frame the LED positions become P3(x3, y3) and P4(x4, y4). The distances from P3 and P4 to P1 and P2 are calculated separately:
L_{P_1P_3} = \sqrt{(x_3 - x_1)^2 + (y_3 - y_1)^2}
L_{P_1P_4} = \sqrt{(x_4 - x_1)^2 + (y_4 - y_1)^2}
L_{P_2P_3} = \sqrt{(x_3 - x_2)^2 + (y_3 - y_2)^2}
L_{P_2P_4} = \sqrt{(x_4 - x_2)^2 + (y_4 - y_2)^2}
From the maximum underwater movement speed V_Rmax of the floating robot and the time interval T_c between image frames captured by the camera, the maximum displacement of a single LED lamp between adjacent image frames is L_max = V_Rmax · T_c. To ensure that the light spot of the first LED lamp 1 never appears inside the imaging region of the second LED lamp 2 in adjacent image frames, the two lamps must be installed with a spacing L greater than 2·L_max. Thus, of L_{P_1P_3} and L_{P_1P_4}, the distance smaller than L_max identifies the position of the point P1 in the adjacent image frame; similarly, of L_{P_2P_3} and L_{P_2P_4}, the distance smaller than L_max identifies the position of the point P2 in the adjacent image frame.
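For concreteness, with illustrative numbers not taken from the patent (say V_Rmax = 0.5 m/s and T_c = 40 ms for a 25 fps camera), L_max = 0.02 m, so the two lamps would need to be mounted more than 4 cm apart. The adjacent-frame association rule itself can be sketched as follows (a minimal sketch; the function and parameter names are ours):

import math

def match_adjacent_frames(p1, p2, p3, p4, l_max):
    """Associate next-frame spots (p3, p4) with current-frame spots (p1, p2).
    Because a lamp moves less than l_max between adjacent frames while the
    lamps sit more than 2*l_max apart, at most one candidate distance per
    lamp can fall below l_max."""
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    if dist(p1, p3) < l_max:
        return p3, p4   # p3 is the new P1, p4 the new P2
    if dist(p1, p4) < l_max:
        return p4, p3   # p4 is the new P1, p3 the new P2
    raise ValueError("no next-frame spot within l_max of P1")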
For example, if the computed correspondence between the LED light spot positions in the two image frames is that P3 corresponds to P1 and P4 corresponds to P2, the current course can be calculated as:
\theta_{heading} = \arctan\left(\frac{y_4 - y_3}{x_4 - x_3}\right)
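In code, the same angle is conveniently computed with atan2, which covers the full circle instead of the (-90°, 90°) range of a plain arctangent (a sketch assuming the image axes have already been aligned with the pool's reference frame; the function name is ours):

import math

def current_course(p_led1, p_led2):
    """Course angle of the lamp axis in the image plane, radians in (-pi, pi]."""
    return math.atan2(p_led2[1] - p_led1[1], p_led2[0] - p_led1[0])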
Based on the course control schematic shown in FIG. 3, the course angle θ_heading obtained by image recognition is fed back as an input to the course PID controller in the control computer 7. By comparing the target input course angle θ_in with the current course angle θ_heading, the controller outputs signals that control the thrust allocation of the horizontal thrusters, adjusting the course of the floating robot until the current course angle θ_heading equals the target course angle θ_in.
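A minimal sketch of such a course PID loop (the gains are placeholders, since the patent gives no tuning values, and the angle-wrapping step is our addition so the robot always turns the short way round):

import math

class CoursePID:
    """Sketch of the course PID controller; gains are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, theta_in, theta_heading, dt):
        # Wrap the course error into (-pi, pi] so the robot turns the
        # short way round (our addition; not stated in the patent).
        err = math.atan2(math.sin(theta_in - theta_heading),
                         math.cos(theta_in - theta_heading))
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        # The output is a yaw command to be mapped onto the horizontal thrusters.
        return self.kp * err + self.ki * self.integral + self.kd * deriv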
This embodiment also provides a course control method using the above vision-based underwater floating robot course control system. Its flow is shown in FIG. 4 and comprises the following steps in sequence:
S1, the course controller in the control computer 7 receives the target course information of the floating robot 3, namely the target input course angle θ_in;
S2, the camera 4 captures images, and the light spot positions in the picture are extracted;
S2.1, the camera 4 positioned above the water pool 8 captures video images of the floating robot 3 and the first LED lamp 1 and second LED lamp 2 carried on it;
S2.2, the video images captured by the camera 4 are transmitted to the camera driving decoding module 6 through the video transmission cable 10, and the camera driving decoding module 6 extracts the light spot positions in the video images with a built-in light spot identification and extraction algorithm;
Assume that, after conversion to two-dimensional coordinates, the light spots of the first LED lamp 1 and the second LED lamp 2 captured by the camera 4 in the current image frame are at P1(x1, y1) and P2(x2, y2), and that in the next frame the positions of the first LED lamp 1 and the second LED lamp 2 become P3(x3, y3) and P4(x4, y4). The distances from P3 and P4 to P1 and P2 are calculated separately:
L_{P_1P_3} = \sqrt{(x_3 - x_1)^2 + (y_3 - y_1)^2}
L_{P_1P_4} = \sqrt{(x_4 - x_1)^2 + (y_4 - y_1)^2}
L_{P_2P_3} = \sqrt{(x_3 - x_2)^2 + (y_3 - y_2)^2}
L_{P_2P_4} = \sqrt{(x_4 - x_2)^2 + (y_4 - y_2)^2}
Thus, of L_{P_1P_3} and L_{P_1P_4}, the distance smaller than L_max identifies the position of the point P1 in the adjacent image frame; similarly, of L_{P_2P_3} and L_{P_2P_4}, the distance smaller than L_max identifies the position of the point P2 in the adjacent image frame.
S2.3, the camera driving decoding module 6 sends the light spot position information obtained in S2.2 to the control computer 7 through the video network cable 12;
S3, the control computer 7 calculates the current course information of the floating robot 3 from the installation position of the camera 4, the light spot positions obtained in S2.3 and the structural dimensions of the water pool 8;
for example, the light spot positions of the first LED lamp 1 and the second LED lamp 2 in the two frames of images are calculated to obtain a corresponding relation P3Corresponds to P1,P4Corresponds to P2Then, the current heading angle can be calculated as:
\theta_{heading} = \arctan\left(\frac{y_4 - y_3}{x_4 - x_3}\right)
S4, the course controller compares the target course received in S1 with the current course calculated in S3 and judges whether the current course equals the target course;
According to the course control schematic shown in FIG. 3, the course angle θ_heading obtained by image recognition in S3 is fed back to the course PID controller in the control computer 7; by comparing the target input course angle θ_in received in S1 with the current course angle θ_heading, the controller outputs signals that control the thrust allocation of the horizontal thrusters of the floating robot 3, adjusting the course of the floating robot 3 until the current course angle θ_heading equals the target course angle θ_in.
S5, the floating robot 3 selects an execution path according to the judgment result;
S5.1, if the target course equals the current course, the floating robot 3 has completed course positioning control;
S5.2, if the target course does not equal the current course, the course controller calculates the horizontal thrust allocation of the thrusters and drives them to adjust the course of the floating robot 3;
S6, steps S2 to S5 are executed repeatedly until the floating robot 3 completes course positioning control.
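Taken together, steps S1 to S6 amount to the loop sketched below, reusing the illustrative helpers from the earlier sketches; camera and thrusters are hypothetical interfaces (a frame source and a yaw-command sink), not objects defined by the patent:

import math

def course_hold(camera, thrusters, theta_in, l_max, dt, tol=0.02):
    """S1-S6 as one loop: track the two LED spots, compute the course,
    and drive the horizontal thrusters until the course error is small.
    The initial (p1, p2) identities are assumed to come from the strobe
    classification sketched earlier."""
    pid = CoursePID(kp=2.0, ki=0.1, kd=0.5)               # placeholder gains
    p1, p2 = extract_spot_centroids(camera.read())[:2]    # initial frame (S2)
    while True:
        p3, p4 = extract_spot_centroids(camera.read())[:2]    # S2.1-S2.2
        p1, p2 = match_adjacent_frames(p1, p2, p3, p4, l_max)
        theta = current_course(p1, p2)                         # S3
        err = math.atan2(math.sin(theta_in - theta),
                         math.cos(theta_in - theta))           # S4
        if abs(err) < tol:
            return                                             # S5.1: course held
        thrusters.apply_yaw(pid.step(theta_in, theta, dt))     # S5.2
        # S6: the loop repeats until the course error drops below tol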
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A vision-based underwater floating robot course control system, characterized in that it comprises a floating robot (3), a control computer (7) and a video acquisition device; a light source is mounted on top of the floating robot (3), and the video acquisition device is mounted at the edge of the water pool (8); the control computer (7) is connected to the robot driving control module (5) through a control network cable (11), and the robot driving control module (5) is connected to the floating robot (3) through a robot umbilical cable (9); the control computer (7) is connected to the camera driving decoding module (6) through a video network cable (12), and the camera driving decoding module (6) is connected to the video acquisition device through a video transmission cable (10).
2. The vision-based underwater floating robot course control system of claim 1, wherein: one or more thrusters are arranged in each of the horizontal and vertical directions of the floating robot (3), enabling the floating robot (3) to translate horizontally, float and sink, turn its bow, hold depth and hold course.
3. The vision-based underwater floating robot course control system of claim 2, wherein: the floating robot (3) as a whole is adjusted to a suspended or positively buoyant state by means of counterweights.
4. The vision-based underwater floating robot course control system of claim 2, wherein: the floating robot control instructions sent by the control computer (7) are converted by the robot driving control module (5) into the power output of each thruster; the video images collected by the video acquisition device are decoded by the camera driving decoding module (6) and output to the control computer (7).
5. The vision-based underwater floating robot course control system of claim 1, 2, 3 or 4, wherein: the light source comprises a first LED lamp (1) and a second LED lamp (2), each packaged in an independent watertight structure and each containing a driving circuit that controls the on-off frequency of its LED lamp.
6. The vision-based underwater floating robot course control system of claim 5, wherein: the video acquisition device is a camera (4).
7. The vision-based underwater floating robot course control system of claim 6, wherein: the on-off interval of the first LED lamp (1) is smaller than the exposure period of the camera (4), and the on-off interval of the second LED lamp (2) is larger than the exposure period of the camera (4).
8. The vision-based underwater floating robot course control system of claim 7, wherein: the installation spacing L between the first LED lamp (1) and the second LED lamp (2) satisfies L > 2·L_max, where L_max is the maximum displacement of a single LED lamp between adjacent image frames, calculated as L_max = V_Rmax · T_c, with V_Rmax the maximum underwater movement speed of the floating robot (3) and T_c the time interval at which the camera captures successive image frames.
9. A course control method using the vision-based underwater floating robot course control system of claim 8, characterized by comprising the following steps in sequence:
S1, a course controller in the control computer (7) receives target course information for the floating robot (3);
S2, the camera (4) captures images and sends them to the camera driving decoding module (6), which extracts the light spot positions corresponding to the LED lamps in the picture;
S3, the control computer (7) calculates the current course of the floating robot (3) from the installation position of the camera (4), the light spot positions obtained in S2 and the structural dimensions of the water pool (8);
S4, the course controller compares the target course information received in S1 with the current course calculated in S3 and judges whether the current course equals the target course;
S5, the floating robot (3) selects an execution path according to the judgment result of S4: when the target course equals the current course, the floating robot (3) has completed course positioning control; when the target course does not equal the current course, the course controller calculates the horizontal thrust allocation of the thrusters of the floating robot (3) and drives them to adjust the course of the floating robot (3);
S6, steps S2 to S5 are executed repeatedly until the floating robot (3) completes course positioning control.
10. The course control method of claim 9, wherein in S2 the camera (4) captures images and the light spot positions in the picture are extracted by the following steps:
S2.1, the camera (4) positioned above the water pool (8) captures video images of the floating robot (3) and the first LED lamp (1) and second LED lamp (2) carried on it;
S2.2, the video images captured by the camera (4) are transmitted to the camera driving decoding module (6) through the video transmission cable (10), and the camera driving decoding module (6) extracts the light spot positions in the video images with a built-in light spot identification and extraction algorithm;
Assume that, after conversion to two-dimensional coordinates, the light spots of the first LED lamp (1) and the second LED lamp (2) captured by the camera (4) in the current image frame are at P1(x1, y1) and P2(x2, y2), and that in the next frame the positions of the first LED lamp (1) and the second LED lamp (2) become P3(x3, y3) and P4(x4, y4). The distances from P3 and P4 to P1 and P2 are calculated separately:
L_{P_1P_3} = \sqrt{(x_3 - x_1)^2 + (y_3 - y_1)^2}
L_{P_1P_4} = \sqrt{(x_4 - x_1)^2 + (y_4 - y_1)^2}
L_{P_2P_3} = \sqrt{(x_3 - x_2)^2 + (y_3 - y_2)^2}
L_{P_2P_4} = \sqrt{(x_4 - x_2)^2 + (y_4 - y_2)^2}
Thus, of L_{P_1P_3} and L_{P_1P_4}, the distance smaller than L_max identifies the position of the point P1 in the adjacent image frame; similarly, of L_{P_2P_3} and L_{P_2P_4}, the distance smaller than L_max identifies the position of the point P2 in the adjacent image frame;
S2.3, the camera driving decoding module (6) transmits the light spot position information obtained in S2.2 to the control computer (7) through the video network cable (12).
CN202011608465.7A 2020-12-30 2020-12-30 Underwater robot course control system and method based on vision Pending CN112977764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011608465.7A CN112977764A (en) 2020-12-30 2020-12-30 Underwater robot course control system and method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011608465.7A CN112977764A (en) 2020-12-30 2020-12-30 Underwater robot course control system and method based on vision

Publications (1)

Publication Number Publication Date
CN112977764A (en) 2021-06-18

Family

ID=76345168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011608465.7A Pending CN112977764A (en) 2020-12-30 2020-12-30 Underwater robot course control system and method based on vision

Country Status (1)

Country Link
CN (1) CN112977764A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Binocular vision navigation system and method based on power robot
CN105404303A (en) * 2015-12-28 2016-03-16 河海大学常州校区 Motion control method of ROV underwater robot
CN106094819A (en) * 2016-06-17 2016-11-09 江苏科技大学 Underwater robot control system and course heading control method based on sonar image target recognition
CN107953350A (en) * 2016-10-17 2018-04-24 江苏舾普泰克自动化科技有限公司 Underwater robot control system for inspection and operation
CN106741761A (en) * 2016-11-25 2017-05-31 浙江大学 Cabled remotely operated underwater robot with cameras
CN207835648U (en) * 2017-11-09 2018-09-07 美钻石油钻采***(上海)有限公司 A kind of underwater robot light vision monitoring apparatus
CN215205321U (en) * 2020-12-30 2021-12-17 核动力运行研究所 Underwater robot course control system based on vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Wei; SUI Xiang; ZENG Qingjun; WANG Biao; WANG Zhidong; DOU Jing: "Design and experiments of the control system of a cabled remotely operated underwater vehicle with multifunctional mode switching", Journal of Jiangsu University of Science and Technology (Natural Science Edition), no. 05, 15 October 2014 (2014-10-15) *

Similar Documents

Publication Publication Date Title
US7496226B2 (en) Multi-camera inspection of underwater structures
Myint et al. Dual-eyes vision-based docking system for autonomous underwater vehicle: an approach and experiments
Wang et al. Visual navigation and docking for a planar type AUV docking and charging system
CN103963947B Light-guided automatic docking method and device for an underwater vehicle and a base station
CN108438178B (en) Sliding type manned device for maintenance and detection of deepwater face dam
CN109859202B Deep learning detection method for USV water surface optical target tracking
CN110610134B (en) Unmanned ship autonomous docking method
Ji-Yong et al. Design and vision based autonomous capture of sea organism with absorptive type remotely operated vehicle
CN116255908B (en) Underwater robot-oriented marine organism positioning measurement device and method
CN215205321U (en) Underwater robot course control system based on vision
WO2018186750A1 (en) Camera assisted control system for an underwater vehicle
CN111452939A (en) Autonomous line-inspection underwater helicopter for diversion tunnel detection
CN104766312A Autonomous docking method for intelligent underwater robot based on binocular vision guidance
KR102018089B1 (en) Mission execution system capable of detachment between drones in underwater
Okamoto et al. Development of hovering-type AUV “HOBALIN” for exploring seafloor hydrothermal deposits
CN112977764A (en) Underwater robot course control system and method based on vision
Negahdaripour et al. A vision system for real-time positioning, navigation, and video mosaicing of sea floor imagery in the application of ROVs/AUVs
Liang et al. Experiment of robofish aided underwater archaeology
CN113822297B (en) Marine ship target recognition device and method
CN113031632B (en) Control system and control method suitable for water surface recovery of underwater vehicle
Neto et al. Autonomous underwater vehicle to inspect hydroelectric dams
Batlle et al. ROV-aided dam inspection: Practical results
Masuzaki et al. Position control of Remotely Operated Vehicle Using Template matching
Liu et al. Development of AUV mechatronics integration for underwater intervention tasks
Park et al. Multi-legged ROV Crabster and an acoustic camera for survey of underwater cultural heritages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination