CN113050665A - Energy-saving underwater robot detection method and system based on SLAM framework
- Publication number
- CN113050665A (application CN202110313293.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/04—Control of altitude or depth
- G05D1/06—Rate of change of altitude or depth
- G05D1/0692—Rate of change of altitude or depth specially adapted for under-water vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D21/00—Measuring or testing not otherwise provided for
- G01D21/02—Measuring two or more variables by means not covered by a single other subclass
Abstract
The invention discloses an energy-saving underwater robot detection method and system based on a SLAM framework. The method scans a three-dimensional scene using a time-varying direction field and segments and reconstructs the target scene; for the more important recognition targets, a mechanical arm interacts with the object to gather further information; guided by an object-recognition value function and a loop-closure detection module, the underwater robot keeps its route planning reasonable while performing object-guided scanning; finally, long-range cruising is achieved by collecting solar energy and converting it into electric energy. By replacing the vector field with a time-varying direction field, the method effectively reduces interference from singular points along the navigation boundary line, so the vehicle adjusts its attitude more smoothly; the object-recognition value function and the loop-closure detection module collect navigation data and periodically correct the navigation track during object-guided scanning, greatly improving the accuracy of object recognition.
Description
Technical Field
The invention belongs to the fields of artificial intelligence and path planning, and particularly relates to an energy-saving underwater robot detection method and system based on a SLAM framework.
Background
The development and utilization of the oceans have greatly increased the demand for underwater robots. As detection requirements grow, these robots commonly suffer from problems such as insufficiently clear image processing, immature navigation technology, accumulated navigation errors that cannot be well eliminated, and underwater endurance that cannot meet task requirements.
There are many automatic navigation schemes for robots based on vector fields, but their shortcomings are prominent in the current underwater navigation field: (1) because a vector field carries an intrinsic orientation, the robot's route becomes ambiguous and many singular points arise; too many singular points prevent the scanned scene from forming a topological structure, making global navigation difficult to realize; (2) a vector field is not second-order continuous, so the path obtained after computation is not smooth enough, the robot changes direction abruptly while moving underwater, and large errors are introduced into navigation and positioning.
In robot localization and mapping, the mathematical model most often adopted is still the Kalman filter: the robot is treated as a sequence of poses and the map as a set of landmarks. The linear-filtering assumption behind Kalman-filter modeling requires the full covariance matrix to be stored, otherwise the filter cannot converge, which places a heavy resource burden on the processor; moreover, the underwater environment is dynamic and some landmarks change over time, so the model becomes increasingly unsuitable. In addition, as technology develops, how to fit emerging sensors well into the relatively mature SLAM architecture, such as how they are mounted and how images are transmitted in real time, is a problem to be solved urgently.
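The covariance bookkeeping described above can be made concrete with a short sketch (an illustration of the scaling argument, not code from the patent): EKF-SLAM keeps a joint state of the robot pose (here 3 values: x, y, heading) plus 2 values per landmark, and must store and update the dense covariance of that state, which grows quadratically with the number of landmarks.

```python
# Why a Kalman-filter SLAM back end gets expensive: the joint covariance
# matrix is (pose_dim + 2 * n_landmarks)^2 entries, all touched per update.

def ekf_state_dim(n_landmarks, pose_dim=3, landmark_dim=2):
    """Dimension of the joint EKF-SLAM state vector."""
    return pose_dim + landmark_dim * n_landmarks

def covariance_bytes(n_landmarks, bytes_per_entry=8):
    """Memory for the dense covariance matrix (float64 entries)."""
    d = ekf_state_dim(n_landmarks)
    return d * d * bytes_per_entry

# 100 landmarks -> a 203x203 covariance; 10,000 landmarks -> over 3 GB,
# and every update touches every entry, hence the processor cost noted above.
for n in (100, 1000, 10000):
    print(n, ekf_state_dim(n), covariance_bytes(n))
```

The quadratic blow-up is what motivates the sparse methods the detailed description returns to later.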
To realize an active object-recognition function, most existing methods rely on dictionary training, which requires a great deal of time to train a classifier and requires the robot to memorize a large number of objects; geographical limitations and the difficulty of selecting an optimal observation point in the special underwater environment make the recognition process increasingly difficult. To address these problems, the key information in each image frame can be acquired in a targeted manner, and the object with the most usable information in each frame is selected for recognition, which effectively reduces the processor workload and thus the chip's power consumption.
Image noise during underwater navigation is also prominent: noise accumulates in the inertial measurement equipment and through the frame-by-frame matching in the navigation equipment, and once the inter-frame error accumulates past a certain value, the original navigation loses practical meaning. In addition, the underwater high-pressure, low-temperature environment has complex terrain, dim light, and blurred images dominated by blue-green hues, and various underwater organisms, floating objects, and scattered light interfere with imaging. These are hard problems for the robot's own imaging quality.
The underwater operating time of a robot is precious. A traditional robot must return to a base station to replenish energy, yet building energy base stations at sea is expensive, and part of the robot's energy is consumed on the round trip, shortening its underwater working time; such robots are also poorly suited to long-duration work far from shore. To reduce the energy spent in transit, a photovoltaic panel can be mounted on top of the robot, thereby prolonging its working time.
In recent years, attention to protecting the ecological environment has grown, raising the requirements for energy-saving marine detectors; it is hoped that energy conservation and emission reduction can be achieved while the ocean's carbon-sink capacity is improved as much as possible.
Disclosure of Invention
Purpose of the invention: to solve the problems that a robot's endurance is short and its cruise curve is not smooth enough, the invention provides an energy-saving underwater robot system and detection method based on a SLAM framework, achieving object-guided scanning, energy conservation and emission reduction, and protection of the surface and underwater environment.
Technical scheme: the invention provides an energy-saving underwater robot detection method based on a SLAM framework, comprising the following steps:
(1) the underwater robot obtains its own position using an attitude sensor while collecting three-dimensional data in the scene and constructing the scene;
(2) the underwater robot performs image processing on the collected underwater pictures and calls a pre-imported object database model to actively recognize the objects in the scene;
(3) integrating the environmental and time factors of object recognition, and based on a value function and a loop-closure detection algorithm, the underwater robot computes a path that consumes relatively little energy and time, causes little damage to the machine, and gathers more information, thereby realizing object-guided scanning;
(4) when the underwater robot detects that its energy is insufficient, it immediately saves its working data, rises to the water surface, opens the photovoltaic panel, converts the absorbed solar energy into electric energy, and recharges itself.
Further, the step (1) includes the steps of:
(11) the underwater robot's vision sensor is called to detect the environment, the sensed real-time pictures are transmitted to the central processing unit, and the processor uses the imported real-time data to preliminarily construct the three-dimensional scene;
(12) the constructed three-dimensional scene is projected, and with the projection boundary as a constraint, the segmentation entropy and reconstruction entropy of the scene's direction field are computed to further refine the constructed three-dimensional scene.
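As a loose illustration of how a boundary-constrained direction field can guide motion (the patent gives no formulas, so the nearest-tangent construction below is an assumption, not the claimed algorithm): each vertex of the projected scene boundary is assigned its unit tangent, and a query point adopts the tangent of its nearest boundary vertex, so the robot travels parallel to the boundary rather than into it.

```python
# Assumed illustration: direction field from the tangents of a projected
# boundary polygon, sampled at corners and edge midpoints.
import numpy as np

def boundary_tangents(poly):
    """Unit tangents of a closed 2D polygon, one per vertex (central difference)."""
    poly = np.asarray(poly, dtype=float)
    t = np.roll(poly, -1, axis=0) - np.roll(poly, 1, axis=0)
    return t / np.linalg.norm(t, axis=1, keepdims=True)

def field_direction(point, poly):
    """Direction assigned to `point` by the tangent of its nearest boundary vertex."""
    poly = np.asarray(poly, dtype=float)
    i = int(np.argmin(np.linalg.norm(poly - point, axis=1)))
    return boundary_tangents(poly)[i]

# unit square boundary, sampled at corners and edge midpoints
square = [(0, 0), (0.5, 0), (1, 0), (1, 0.5),
          (1, 1), (0.5, 1), (0, 1), (0, 0.5)]
print(field_direction(np.array([0.5, -0.2]), square))  # points along +x, parallel to the bottom edge
```

A smoother (second-order continuous) field, as the detailed description requires, would interpolate these tangents rather than snap to the nearest one; the snap version only shows the boundary-tangent constraint.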
Further, the step (2) comprises the steps of:
(21) in the point cloud describing the three-dimensional scene's direction field, for dense or blurred regions, the robot arm is called to interactively touch the area to determine the surface condition of the detected object;
(22) the central processing unit classifies the collected objects; the collected objects comprise road signs, obstacles, benthos, and targets;
(23) the underwater robot actively observes a target object, classifies it, and determines external characteristics such as its name and size, so the underwater scene is displayed more intuitively, and the object information is archived in a database.
Further, the step (3) includes the steps of:
(31) the processing of each image frame is optimized by exploiting the sparsity of the landmark distribution in the moving images during scene scanning; the detection camera constantly compares the difference between consecutive frames, the displacement of the underwater robot is recorded, and the body pose and the external environment pose are estimated with a proprioceptive sensor and an environment-perception sensor; the central processing unit compares images between consecutive key frames in real time, iterates the closest points, extracts and matches image features, and finally removes errors from the matched images;
(32) the scanning order of the objects in the scene is determined by the value of the object-recognition value function, i.e. the matching degree between a single point cloud in the point-cloud map and an independent object and the saliency of the object in the robot's field of view, so as to preliminarily determine a navigation route;
(33) based on a loop-closure detection algorithm, the central processing unit compares the scanned scene with earlier scenes while the underwater robot cruises along the route; if the current frame is found to coincide closely with an earlier frame, the recorded route is checked to judge whether this is a point the navigation has already passed, and if so, scanning restarts from that point; meanwhile, local loops are added between landmarks so that constraints are added between adjacent frames.
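The scan-ordering rule in step (32) can be sketched as follows. The patent does not give an explicit formula, so taking the value as the product of the two named scores (point-cloud matching degree and in-view saliency, both assumed to lie in [0, 1]) is purely illustrative.

```python
# Assumed value function: value = match_degree * saliency; higher value scanned first.

def object_value(match_degree, saliency):
    return match_degree * saliency

def scan_order(objects):
    """objects: list of (name, match_degree, saliency) -> names, best first."""
    return [name for name, m, s in
            sorted(objects, key=lambda o: object_value(o[1], o[2]), reverse=True)]

candidates = [("wreck", 0.9, 0.8), ("rock", 0.6, 0.3), ("kelp", 0.7, 0.9)]
print(scan_order(candidates))  # ['wreck', 'kelp', 'rock']
```

Any monotone combination of the two scores would serve; the product simply penalizes objects weak on either axis.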
Further, the step (4) comprises the steps of:
(41) when the battery-level detector shows insufficient charge, it actively sends an early-warning signal, and the underwater robot enters an energy-saving mode and quickly transfers its working data to the memory;
(42) the underwater robot starts its lifting mechanism, floats to the water surface, opens the safety air cushion and the photovoltaic panel, absorbs solar energy while floating on the surface, converts it into electric energy, and replenishes energy for its subsequent voyage.
Further, the photovoltaic panel in step (4) is sealed in a box made of anticorrosive material.
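The low-power procedure of steps (41)-(42) amounts to a small state machine. The sketch below is illustrative only: the threshold, method names, and action labels are assumptions, not taken from the patent.

```python
# Assumed sketch of the energy-management sequence: warn, save state,
# enter power-save mode, surface, deploy air cushion and panel, charge.

class EnergyManager:
    def __init__(self, low_battery_pct=15.0):
        self.low_battery_pct = low_battery_pct
        self.log = []

    def check(self, battery_pct):
        if battery_pct >= self.low_battery_pct:
            return "continue_mission"
        # (41) early warning, flush working data, energy-saving mode
        self.log += ["warn", "save_state", "power_save"]
        # (42) ascend, deploy air cushion and photovoltaic panel, charge
        self.log += ["ascend", "deploy_air_cushion", "open_pv_panel", "charge"]
        return "charging"

mgr = EnergyManager()
print(mgr.check(80.0))  # continue_mission
print(mgr.check(10.0))  # charging
print(mgr.log)
```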
The invention also provides an energy-saving underwater robot detection system based on a SLAM framework, comprising a sensor module, a central-processor module, a power-supply module, a memory module, a server module, and external equipment. The sensor module collects data on every aspect of the underwater robot's environment and transmits it to the central-processor module in real time. The power-supply module comprises a battery-level detector, a battery, and a photovoltaic panel; it stores electric energy, monitors the charge level, and sends an early-warning signal in time when the charge is insufficient, reminding the underwater robot to float to the surface and charge with the photovoltaic panel. The memory module comprises an internal memory and an external memory: the internal memory connects the external memory with the central processing unit and exchanges data with it directly, while the external memory carries the object database models and other important information and stores data promptly when the charge is low so that no data is lost. The server module establishes a connection with the operators, sends and receives signals, and helps the operators observe online during object recognition. The external-equipment module comprises a mechanical arm, a depth detector, a lifting mechanism, and an air cushion. The central processing unit connects to the other modules, performs data operations, and transmits the results and instructions to the internal memory.
Beneficial effects: compared with the prior art, the invention has the following beneficial effects: 1. a concept based on photoelectric energy conversion is proposed, optimizing the robot's energy system and making it friendlier to the water and to underwater organisms; 2. an autonomous navigation mode based on a direction field is proposed, effectively reducing the influence of singular points, smoothing the robot's travel curve, and improving the quality of the collected images; 3. a value function evaluates the objects to be recognized, improving the robot's efficiency and allowing finer and more effective scanning and mapping within a limited time; 4. a photovoltaic panel on top of the robot collects solar radiation and converts light into electric energy, reducing the frequency of returns to the base station, prolonging the robot's working time, and reducing carbon emissions; 5. the robot's movement imperceptibly mixes deep and shallow seawater between the surface and the seabed, improving the seawater's carbon-sink capacity.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a system framework diagram of a robot;
FIG. 3 is a specification diagram of object classification for an underwater scene;
FIG. 4 is a flow chart of underwater object identification;
FIG. 5 is a flow chart of robot path planning.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention provides an energy-saving underwater robot detection method based on a SLAM framework for localization and mapping in the special underwater environment: objects are actively recognized and detected, then a reasonable cruising route is computed and the robot supplies its own energy. As shown in FIG. 1, the method comprises the following steps:
step 1: the underwater robot acquires self positioning by using the attitude sensor, collects three-dimensional data in a scene and builds the scene.
The underwater robot's vision sensor is called to detect the environment, the sensed real-time pictures are transmitted to the central processing unit, and the processor uses the imported real-time data to preliminarily construct the three-dimensional scene. The constructed scene is then projected, and with the projection boundary as a constraint, the segmentation entropy and reconstruction entropy of the scene's direction field are computed to further refine the constructed three-dimensional scene.
The photovoltaic panel on top of the robot is packaged and protected; after confirming that all components of the robot system (as shown in FIG. 2) work normally, the robot submerges while the depth detector records the diving depth. After reaching the working area, the robot begins collecting sample data around it, continuously adjusting its posture during scanning and recording real-time data. The obtained three-dimensional data are used to preliminarily construct the scene, compute the scene boundary line, and build an approximate frame of the scene.
The scene-scanning algorithm is developed on a time-varying direction field: the robot scans the scene while localizing, reconstructs the three-dimensional scene from the collected data and projects it, then computes a direction field with the tangential direction of the projection boundary as a constraint and moves along that field. Because the direction field is computed with the tangent of the obstacle boundary as a constraint, the robot neither collides with obstacles nor changes attitude abruptly while advancing; and because the field is second-order continuous, the robot's motion track is smooth and continuous, enabling high-quality scanning.
Step 2: and the underwater robot carries out image processing on the collected underwater pictures and calls a pre-imported object database model to carry out active identification on the objects in the scene.
The collected three-dimensional data are transmitted to the central processing unit and preprocessed; the images are then denoised and filtered, and further operations such as saturation and contrast adjustment are performed so that the images are displayed more intuitively.
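The denoising and contrast steps above can be sketched minimally with numpy only (a real pipeline would use a library such as OpenCV; the filter choice and stretch range here are assumptions for illustration):

```python
# Assumed sketch: 3x3 box blur for noise reduction, then a linear
# contrast stretch mapping the image range onto [0, 255].
import numpy as np

def mean_filter3(img):
    """3x3 box blur with edge replication, same output shape as input."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linearly map the image's value range onto [lo, hi]."""
    mn, mx = img.min(), img.max()
    return lo + (img - mn) * (hi - lo) / (mx - mn) if mx > mn else img

noisy = np.array([[10, 10, 200], [10, 10, 10], [10, 10, 10]], dtype=float)
print(stretch_contrast(mean_filter3(noisy)))
```

A median filter is often preferred underwater for impulse noise; the box blur only stands in for the "noise reduction and filtering" step named in the text.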
In the point cloud describing the three-dimensional scene's direction field, for dense or blurred regions the robot arm is called to interactively touch the area to determine the surface condition of the detected object. The central processor classifies the collected objects, such as road signs, obstacles, benthos, and targets. The classification specification shown in FIG. 3 is used to classify and label the objects in the scene so that each object's information can later be stored and archived. Among the classified targets, objects that are more prominent in the scene and more influential on the navigation route, and objects that contain more information and have greater usable value, are selected for preferential recognition. The underwater robot actively observes a target object, classifies it, and determines external characteristics such as its name and size, so the underwater scene is displayed more intuitively, and the object information is archived in a database.
Before sailing, a prepared object database model is imported into the robot and the robot is trained in a targeted manner. As shown in FIG. 4, during the classification of recognized objects in underwater navigation, when the classification certainty for a region is low, another optimal observation region and angle can be computed; the robot adjusts its posture, travels to the optimal observation point, and recognizes the object further. For an object whose name cannot be determined, the robot arm can be called to touch it interactively, the data uploaded to the server, and manual online observation used to assist, guaranteeing the accuracy of object recognition in the scene.
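The classify-and-archive step can be sketched as follows. The four categories follow the text (road sign, obstacle, benthos, target); the record fields and the plain-dict "database" are assumptions standing in for the patent's object database.

```python
# Assumed sketch: typed detection records serialized for an object database.
from dataclasses import dataclass, asdict
from enum import Enum

class Category(Enum):
    ROAD_SIGN = "road sign"
    OBSTACLE = "obstacle"
    BENTHOS = "benthos"
    TARGET = "target"

@dataclass
class DetectedObject:
    name: str
    category: Category
    size_m: float  # rough characteristic size, an assumed field

def archive(objects):
    """Serialize detections as plain dicts for storage."""
    return [dict(asdict(o), category=o.category.value) for o in objects]

records = archive([DetectedObject("buoy anchor", Category.ROAD_SIGN, 0.4),
                   DetectedObject("shipwreck", Category.TARGET, 12.0)])
print(records)
```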
Step 3: as shown in FIG. 5, by integrating environmental and time factors such as the robot's condition, the water quality, the seawater and sea-wind flow velocities, fish-flow conditions, the solar radiation amount, the underwater operating time, and the scene boundary line, and based on the value function and the loop-closure detection algorithm, the underwater robot computes a path that consumes relatively little energy and time, causes little damage to the robot, and gathers more information, realizing object-guided scanning.
While the robot works, the detection camera constantly compares the difference between consecutive frames and records the robot's displacement; the proprioceptive sensor and the environment-perception sensor estimate the body pose and the external environment pose, and multi-feature matching is used to reduce the positioning error.
During the robot's SLAM process, thousands of image frames are produced, each with hundreds of key points; eliminating errors by summing squared residuals creates millions of optimization variables and consumes a great deal of computation. A sparse-algebra method can therefore be adopted: the processing of each frame is optimized by exploiting the sparsity of the landmark distribution in the moving images during scene scanning, reducing the central processor's power consumption. While the robot sails, the central processing unit compares images between consecutive key frames in real time, iterates the closest points, extracts and matches image features, and finally removes errors from the matched images to check the reasonableness of the sailing route.
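The "iterate the closest points" step between key frames can be sketched as one iteration of classic point-to-point ICP (this is a standard technique the text alludes to, not the patent's exact procedure; a real front end would add feature descriptors and outlier rejection): match each point in the new frame to its nearest point in the previous frame, then solve for the best-fit rigid transform with the Kabsch method.

```python
# One ICP iteration in 2D: nearest-neighbour matching + Kabsch alignment.
import numpy as np

def icp_step(src, dst):
    """Align src toward its nearest points in dst; returns the moved src."""
    # nearest-neighbour correspondences
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]
    # best-fit rotation + translation (Kabsch / Procrustes)
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return src @ R.T + t

prev = np.array([[0., 0.], [1., 0.], [0., 1.]])
curr = prev + np.array([0.3, -0.2])  # pure translation for the demo
print(icp_step(curr, prev))          # recovers `prev`
```

Iterating this step until the correspondences stop changing yields the inter-frame pose used to accumulate the trajectory.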
The value of each object is computed with the value function and the targets are sorted by value; the scanning order of the objects in the scene is determined by the value of the object-recognition value function, i.e. the matching degree between a single point cloud in the point-cloud map and an independent object and the saliency of the object in the robot's field of view, so as to preliminarily determine a navigation route. Then, considering factors such as the robot's remaining standby energy, the acidity or alkalinity of the water, underwater biological activity, and the day's solar radiation, whether the robot is suited to underwater operation that day and the appropriate operating time are determined, and the sailing route is corrected accordingly.
The central processing unit performs pattern recognition on each frame to detect whether the new image has appeared in the earlier image sequence; if it has, the reference point can be relocated, the positioning error reduced, and the navigation route corrected. Based on the loop-closure detection algorithm, the central processing unit compares the scanned scene with earlier scenes while the underwater robot cruises along the route; if the current frame is found to coincide closely with an earlier frame, the recorded route is checked to judge whether this is a point the navigation has already passed, and if so, scanning restarts from that point; meanwhile, local loops are added between landmarks so that constraints are added between adjacent frames.
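The loop-detection idea can be sketched as follows: each frame is reduced to a descriptor (here a normalized intensity histogram stands in for a bag-of-words vector), and a new frame whose descriptor overlaps an earlier one above a threshold is flagged as a revisited place. The descriptor choice and the 0.9 threshold are assumptions for illustration.

```python
# Assumed toy loop detector: histogram descriptors + overlap similarity.
import numpy as np

def descriptor(frame, bins=8):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def detect_loop(new_frame, keyframes, threshold=0.9):
    """Return the index of the matching earlier keyframe, or None."""
    d = descriptor(new_frame)
    for i, kf in enumerate(keyframes):
        sim = 1.0 - 0.5 * np.abs(descriptor(kf) - d).sum()  # histogram overlap
        if sim >= threshold:
            return i
    return None

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (16, 16))   # a "place" seen before
b = rng.integers(0, 32, (16, 16))    # a clearly different dark scene
print(detect_loop(a.copy(), [b, a])) # matches frame 1, the revisited view
```

A detected match then triggers the route check and the added inter-frame constraint described above.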
As the time-varying direction field is built, the singular points in the field are connected to form a topological structure, realizing global navigation of the underwater robot. Meanwhile, two adjacent singular points can be eliminated in pairs, or singular points can be hidden near obstacles, optimizing the direction field and further correcting the robot's navigation route.
After the objects have been observed, the robot starts object-guided scene scanning, recognizing objects while exploring the scene. During image segmentation, the part of the point cloud that best represents each object is sought, and the values computed with the value function determine the next object to scan. As the three-dimensional information is continuously enriched during scanning, the optimal scanning position for the next object can be determined at any time, and finally the three-dimensional reconstruction of the whole scene and the semantic recognition of the objects in it are completed. After navigation ends, the robot returns to the water surface and opens the photovoltaic panel to charge, completing one task cycle.
Step 4: when the underwater robot detects that its energy is insufficient, it immediately saves its working data, rises to the water surface, opens the photovoltaic panel, converts the absorbed solar energy into electric energy, and recharges itself.
When the electric quantity detector issues a low-power warning during navigation, the robot immediately enters energy-saving mode, stops exploration, and quickly transfers the working data it has generated to the memory. The robot then starts its lifting mechanism; after rising to the water surface it opens the safety air cushion, and once floating stably it unfolds the photovoltaic panel, absorbing solar energy and converting it into electric energy for its subsequent navigation. The photovoltaic panel is packaged in a sealed box made of anticorrosive material; an anticorrosive coating, such as a silicone resin coating, can also be applied to the panel surface for protection. The robot uses the photovoltaic panel to convert solar radiation directly into electric energy: photons absorbed at the panel surface transfer energy to silicon atoms, whose excited electrons accumulate on the two sides of the P-N junction and form a potential difference. Wires connect the junction to the two terminals of the robot battery, and the resulting voltage drives electrons through the circuit, charging the battery.
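The power-management behavior above (low-charge warning, save data, ascend, deploy panel, recharge, resume) can be sketched as a small state machine. The mode names, thresholds, and `step` function are illustrative assumptions, not part of the patent:

```python
from enum import Enum, auto

class Mode(Enum):
    EXPLORING = auto()
    SAVING = auto()     # work data flushed to memory, ascent started
    CHARGING = auto()   # at surface, air cushion and panel deployed

def step(mode, battery_pct, low=20.0, full=95.0):
    """Advance the simplified power-management state machine one tick."""
    if mode is Mode.EXPLORING and battery_pct <= low:
        return Mode.SAVING        # warning fired: store data, ascend
    if mode is Mode.SAVING:
        return Mode.CHARGING      # surfaced: open cushion, unfold panel
    if mode is Mode.CHARGING and battery_pct >= full:
        return Mode.EXPLORING     # recharged: resume the survey
    return mode
```

The `low` and `full` percentages stand in for whatever thresholds the electric quantity detector actually uses.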
While the robot is at the sea surface generating electricity from solar energy, it can drive an upwelling current that carries the nutrient-rich bottom water of the culture sea area into the upper water layers, supplying the nutrient salts the cultured seaweed needs for photosynthesis; at the same time, the high-concentration bottom nutrients are released slowly, avoiding ecological disasters such as red tides that sudden disturbances like storm surges can trigger. When the robot returns underwater to continue working, it can carry oxygen-rich surface water down to supplement the deep water, relieving hypoxia among bottom-dwelling marine organisms. By shuttling between the sea surface and the seabed, the robot links water at different depths in series, correcting the mismatch between supply and demand of nutrient salts, inorganic carbon, and dissolved oxygen, and increasing the carbon-sink capacity of the seawater.
The invention also provides an energy-saving underwater robot detection system based on the SLAM framework, comprising a sensor module, a central processor module, a power supply module, a memory module, a server module, and external equipment, as shown in Figure 2. The sensor module collects data on all aspects of the underwater robot's environment and transmits them to the central processor module in real time; it comprises a three-dimensional sensor, an environment perception sensor, a vision sensor, and a proprioception sensor. The power supply module stores electric energy and monitors the charge level, issuing a timely warning signal when the charge is low to prompt the underwater robot to float to the surface and recharge with the photovoltaic panel; it comprises an electric quantity detector, a battery, and a photovoltaic panel. The memory module comprises an internal memory and an external memory: the internal memory links the external memory with the central processor and exchanges data with it directly, while the external memory carries the object database models and other important parameter information and stores data promptly when the charge is low, so that no data are lost. The server module establishes the connection with operators, sends and receives signals, and lets operators observe the object identification process online. The external equipment module comprises a mechanical arm, a depth detector, a lifting mechanism, and an air cushion.
The central processor establishes connections with the other modules, performs data operations, and transmits operation results and instructions to the internal memory; it comprises a three-dimensional data processing module, an image processing module, and other computing modules.
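The module decomposition above can be expressed as a simple composition sketch; the `DetectionSystem` class, its field names, and the warning threshold are hypothetical illustrations of the wiring, not an interface defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DetectionSystem:
    """Illustrative wiring of the modules named in the description."""
    sensors: dict = field(default_factory=lambda: {
        "three_dimensional": None, "environment": None,
        "vision": None, "proprioceptive": None})
    power: dict = field(default_factory=lambda: {
        "battery_pct": 100.0, "warn_below": 20.0})
    memory: dict = field(default_factory=lambda: {
        "internal": {}, "external": {"object_models": []}})

    def low_power_warning(self) -> bool:
        # power supply module: warn when charge falls below the threshold
        return self.power["battery_pct"] < self.power["warn_below"]
```

The central processor's role — linking the modules and routing results to the internal memory — would sit on top of a structure like this.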
In conclusion, the energy-saving underwater robot detection method based on the SLAM framework can effectively smooth the robot's navigation track and thereby improve the quality of the acquired images; the loop-closure detection module periodically eliminates the accumulated error of inertial navigation; and the concept of a cost function reduces the workload of the processor chip and improves navigation efficiency. Mounting a photovoltaic panel on top of the robot to convert light energy into electric energy for charging effectively extends the robot's underwater operating time and serves energy conservation and emission reduction. Finally, by shuttling between the sea surface and the seabed, the robot lets shallow and deep seawater complement each other, effectively increasing the carbon-sink capacity of the sea and protecting the ecological environment on which we depend.
Claims (7)
1. An energy-saving underwater robot detection method based on a SLAM framework, characterized by comprising the following steps:
(1) the underwater robot acquires its own position using the attitude sensor, while collecting three-dimensional data in the scene and constructing the scene;
(2) the underwater robot performs image processing on the collected underwater pictures and calls a pre-imported object database model to actively identify objects in the scene;
(3) taking the environment and time factors of object identification into account, and based on a value function and a loop-closure detection algorithm, the underwater robot computes a path that consumes relatively less energy and time, causes less damage to the machine, and harvests more information, thereby realizing object-guided scanning;
(4) when the underwater robot detects that its energy is insufficient, it immediately stores its working data, ascends to the water surface, unfolds the photovoltaic panel, and converts absorbed solar energy into electric energy to recharge itself.
2. The SLAM-framework-based energy-saving underwater robot detection method according to claim 1, wherein step (1) comprises the steps of:
(11) calling the vision sensor of the underwater robot to detect the environment and transmit the sensed real-time pictures to the central processor, which uses the imported real-time data to build a preliminary three-dimensional scene;
(12) projecting the constructed three-dimensional scene, and, with the projection boundary as a constraint, calculating the segmentation entropy and reconstruction entropy of the scene's direction field to further refine the constructed three-dimensional scene.
3. The SLAM-framework-based energy-saving underwater robot detection method according to claim 1, wherein step (2) comprises the steps of:
(21) in the point cloud describing the direction field of the three-dimensional scene, for dense or blurred regions, calling the robot arm to touch the region interactively to determine the specific condition of the detected object's surface;
(22) the central processor classifies the collected objects, which comprise landmarks, obstacles, benthos, and targets;
(23) the underwater robot actively observes target objects, classifies them, and determines external characteristics such as name and size, so that the underwater scene is displayed more intuitively, and archives the object information in a database.
4. The SLAM-framework-based energy-saving underwater robot detection method according to claim 1, wherein step (3) comprises the steps of:
(31) the processing of each frame is optimized by exploiting the sparsity of the landmarks distributed across the moving images during scene scanning; the camera is monitored at all times to compare the difference between consecutive frames and record the displacement of the underwater robot, and the proprioception sensor and environment perception sensor are used to estimate the body pose and the external environment pose; by comparing the images between successive key frames in real time, the central processor runs iterative closest point, extracts and matches image features, and finally removes erroneous matches;
(32) the scanning order of the objects in the scene is determined by the magnitude of the object identification value function, namely the matching degree between individual point clouds in the point cloud map and independent objects, and the saliency of each object in the robot's field of view, so as to preliminarily determine a navigation route;
(33) based on a loop-closure detection algorithm, the central processor compares each scanned scene with previously recorded scenes while the underwater robot cruises along its route; if a frame is found to closely match an earlier frame, the recorded route is consulted to judge whether the scene is a navigation waypoint, and if it is, scanning resumes with that point as the starting point; meanwhile, local loops are added between landmarks, adding constraints between adjacent frames.
5. The SLAM-framework-based energy-saving underwater robot detection method according to claim 1, wherein step (4) comprises the steps of:
(41) when the electric quantity detector indicates that the charge is insufficient, it actively issues a warning signal; the underwater robot enters energy-saving mode and quickly transfers its working data to the memory;
(42) the underwater robot starts the lifting mechanism, floats to the water surface, opens the safety air cushion and the photovoltaic panel, absorbs solar energy while floating on the surface, converts it into electric energy, and replenishes energy for its subsequent navigation.
6. The SLAM-framework-based energy-saving underwater robot detection method according to claim 1, wherein the photovoltaic panel of step (4) is packaged in a sealed box made of anticorrosive material.
7. An energy-saving underwater robot detection system based on a SLAM framework, adopting the method of any one of claims 1 to 6, characterized by comprising a sensor module, a central processor module, a power supply module, a memory module, a server module, and external equipment; the sensor module collects data on all aspects of the underwater robot's environment and transmits them to the central processor module in real time; the power supply module comprises an electric quantity detector, a battery, and a photovoltaic panel, stores electric energy and monitors the charge level, and issues a timely warning signal when the charge is low to prompt the underwater robot to float to the surface and recharge with the photovoltaic panel; the memory module comprises an internal memory and an external memory, the internal memory linking the external memory with the central processor and exchanging data with it directly, and the external memory carrying the object database models and other important information and storing data promptly when the charge is low so that no data are lost; the server module establishes the connection with operators, sends and receives signals, and lets operators observe the object identification process online; the external equipment module comprises a mechanical arm, a depth detector, a lifting mechanism, and an air cushion; the central processor establishes connections with the other modules, performs data operations, and transmits operation results and instructions to the internal memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110313293.9A CN113050665B (en) | 2021-03-24 | 2021-03-24 | Energy-saving underwater robot detection method and system based on SLAM framework |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113050665A true CN113050665A (en) | 2021-06-29 |
CN113050665B CN113050665B (en) | 2022-04-19 |
Family
ID=76514847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110313293.9A Active CN113050665B (en) | 2021-03-24 | 2021-03-24 | Energy-saving underwater robot detection method and system based on SLAM framework |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113050665B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114228959A (en) * | 2021-12-29 | 2022-03-25 | 中国科学院沈阳自动化研究所 | Underwater robot polar region under-ice recovery method based on acoustic road sign and optical road sign combined auxiliary navigation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101797968A (en) * | 2010-03-29 | 2010-08-11 | 哈尔滨工程大学 | Open-shelf underwater detecting robot mechanism |
CN106843242A (en) * | 2017-03-21 | 2017-06-13 | 天津海运职业学院 | A kind of multi-robots system of under-water body cleaning |
CN108693547A (en) * | 2018-06-05 | 2018-10-23 | 河海大学 | A kind of navigation system for underwater bathyscaph and accurate three-point positioning method |
CN109367738A (en) * | 2018-10-10 | 2019-02-22 | 西北工业大学 | A kind of underwater AUTONOMOUS TASK robot and its operational method |
CN208614792U (en) * | 2018-06-25 | 2019-03-19 | 武汉交通职业学院 | A kind of Intelligent Underwater Robot control system |
CN110706248A (en) * | 2019-08-20 | 2020-01-17 | 广东工业大学 | Visual perception mapping algorithm based on SLAM and mobile robot |
US10788836B2 (en) * | 2016-02-29 | 2020-09-29 | AI Incorporated | Obstacle recognition method for autonomous robots |
CN111897349A (en) * | 2020-07-08 | 2020-11-06 | 南京工程学院 | Underwater robot autonomous obstacle avoidance method based on binocular vision |
Non-Patent Citations (4)
Title |
---|
HAOQIAN HUANG,等: "High accuracy navigation information estimation for inertial system using the multi-model EKF fusing adams explicit formula applied to underwater gliders", 《ISATRANSACTIONS》 * |
TENG MA,等: "Robust bathymetric SLAM algorithm considering invalid loop closures", 《APPLIED OCEAN RESEARCH》 * |
吴俊君,等: "仿人机器人视觉导航中的实时性运动模糊探测器设计", 《自动化学报》 * |
张阳,等: "基于ORB-SLAM2算法的水下机器人实时定位研究", 《测绘通报》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lu et al. | CONet: A cognitive ocean network | |
CN107542073A (en) | A kind of mixed dynamic water surface cleaning of intelligence based on Raspberry Pi and water monitoring device and method | |
CN107622231B (en) | A kind of intelligent floating material collection system of water day one and its collection method | |
KR101313546B1 (en) | Camera robot of buoy type | |
CN107117268B (en) | A kind of the ocean rubbish recovering method and system of heterogeneous system | |
CN110297498A (en) | A kind of rail polling method and system based on wireless charging unmanned plane | |
CN105184816A (en) | Visual inspection and water surface target tracking system based on USV and detection tracking method thereof | |
CN109725310A (en) | A kind of ship's fix supervisory systems based on YOLO algorithm and land-based radar system | |
CN112644646A (en) | Underwater robot intelligent system for large-water-area fish resource investigation and working method | |
KR101313306B1 (en) | Buoy type robot for monitoring conditions | |
KR20130025490A (en) | Self power generating robot of buoy type | |
CN113050665B (en) | Energy-saving underwater robot detection method and system based on SLAM framework | |
CN116255908B (en) | Underwater robot-oriented marine organism positioning measurement device and method | |
CN111880192B (en) | Ocean monitoring buoy device and system based on water surface and underwater target early warning | |
CN112215131A (en) | Automatic garbage picking system and manual operation and automatic picking method thereof | |
CN107730539B (en) | Autonomous underwater robot control system and sonar target tracking method | |
CN114046777A (en) | Underwater optical imaging system and method suitable for large-range shallow sea coral reef drawing | |
Güney et al. | Autonomous control of shore robotic charging systems based on computer vision | |
Xu et al. | UAV-ODS: A real-time outfall detection system based on UAV remote sensing and edge computing | |
CN116258251A (en) | Cold source disaster-causing object alarm early warning intelligent system of coastal nuclear power station | |
CN115027627A (en) | Intelligent unmanned ship system for inspection and rescue facing to drainage basin safety | |
CN204937448U (en) | A kind of wind light mutual complementing water surface robot | |
KR20230164518A (en) | Ocean information prediction system and vessel comprising the same | |
Chen et al. | Knowledge-driven semantic segmentation for waterway scene perception | |
CN110645981B (en) | Unmanned ship navigation system and method for cleaning pile foundation type waterborne photovoltaic module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||