CN116486252A - Intelligent unmanned search and rescue system and search and rescue method based on improved PV-RCNN target detection algorithm - Google Patents
- Publication number: CN116486252A
- Application number: CN202310196909.8A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/10 — Scenes; scene-specific elements: terrestrial scenes
- G06V10/20 — Image preprocessing
- G06V10/25 — Determination of region of interest [ROI] or volume of interest [VOI]
- G06V10/764 — Recognition using pattern recognition or machine learning: classification, e.g. of video objects
- G06V10/766 — Recognition using pattern recognition or machine learning: regression, e.g. by projecting features on hyperplanes
- G06V10/80 — Fusion: combining data from various sources at the sensor, preprocessing, feature-extraction or classification level
- G06V10/82 — Recognition using neural networks
- G06V2201/07 — Indexing scheme: target detection
- Y02A90/30 — Technologies for adaptation to climate change: assessment of water resources
Abstract
The invention belongs to the field of maritime search and rescue, and particularly relates to an intelligent unmanned search and rescue system and method based on an improved PV-RCNN target detection algorithm. To address the problem that the sparse convolution schemes in the traditional PV-RCNN algorithm cannot balance receptive-field size against convolution speed, the invention provides an improved PV-RCNN target detection algorithm. The method adopts a sparse convolution scheme that adaptively changes the receptive-field size, and classifies the point cloud data of the detected target so that the detection boat carrying the PV-RCNN algorithm can adaptively adjust its heading on the water surface to obtain richer hull semantic information, improving target detection precision. By combining a lidar with the improved PV-RCNN target detection algorithm, the disclosed intelligent search and rescue system improves target search capability, can autonomously perceive, analyze and detect near the search and rescue site, detects and recognizes targets faster, and thus speeds up rescue.
Description
Technical Field
The invention belongs to the field of maritime search and rescue, and particularly relates to an intelligent unmanned search and rescue system and method based on an improved PV-RCNN target detection algorithm.
Background
In recent years, unmanned surface vessel technology and its applications in China have drawn growing attention and developed rapidly. For an unmanned boat that has broken down or run aground on the water surface, traditional rescue consumes considerable manpower: the navigation route is planned manually, the rescue cost is high, and the rescue work lacks systematic organization. An intelligent system is therefore urgently needed to carry out emergency rescue tasks for unmanned boats in daily tests or related events. Such a rescue system must give the rescue boat, on a given route, navigation control with short-range target recognition in all weather and in low-visibility scenes. Existing three-dimensional target detection algorithms include the PV-RCNN algorithm, but its sparse convolution scheme is problematic. Common sparse convolutions include submanifold sparse convolution and regular sparse convolution: the former can only extract features within a receptive field of a fixed, limited size and cannot process disconnected features, while the latter has a sufficiently large fixed receptive field but incurs a heavy computational load and high GPU memory consumption.
Disclosure of Invention
Aiming at the problems and the shortcomings in the prior art, the invention aims to provide an intelligent unmanned search and rescue system and method based on an improved PV-RCNN target detection algorithm.
In order to achieve the aim of the invention, the technical scheme adopted by the invention is as follows:
the first aspect of the invention provides a method for detecting a surface unmanned ship based on an improved PV-RCNN algorithm, which comprises the following steps:
(1) Scanning the water surface scene to be detected with a lidar to obtain an original point cloud; voxelizing the point cloud into a number of voxels, averaging the features of the original points within each voxel to obtain that voxel's feature, and then extracting the side-view feature of the point cloud from the voxel features by a Multi-View method to obtain a side-view feature map;
(2) Performing adaptive sparse convolution on the voxel features obtained in step (1) to obtain information-communication voxel features; downsampling these features to obtain multi-layer features at different scales;
(3) Compressing, along the height dimension, the features at the highest downsampling rate among the multi-layer features of step (2) to obtain a top-view feature map;
(4) Downsampling and upsampling the top-view and side-view feature maps to obtain top-view and side-view feature maps at different scales, splicing the top-view and side-view maps of the same scale into spliced feature maps at multiple scales, and then splicing the multi-scale spliced feature maps into a final spliced feature map;
(5) Generating proposal boxes on the final spliced feature map by an anchor-box method;
(6) Performing a set abstraction operation on the highest-downsampling-rate features among the multi-layer features of step (2), increasing the weight of the foreground points therein, and finally performing farthest point sampling on the foreground points to obtain key points;
(7) Using a RoI grid pooling module, aggregating the key points of step (6) into a RoI grid with multiple receptive fields to classify and regress the features within the proposal boxes, thereby completing the target detection task.
According to the method, preferably, the adaptive sparse convolution applied in step (2) to the voxel features of step (1) operates as follows: the number of points in a voxel feature is compared with a set threshold, and when it exceeds the threshold the voxel feature is used as an information-communication voxel feature. The threshold is determined by the sparseness of the point cloud: the sparser the point cloud, the smaller the threshold.
According to the method, preferably, the original point cloud in step (1) is obtained by scanning the water surface scene with the lidar as follows: the bow-to-stern central axis of the unmanned boat is taken as the standard line, and the line connecting the lidar and the centre of the unmanned boat is taken as the detection line; when the included angle between the detection line and the standard line is 20-160 degrees, the point cloud detected by the lidar is defined as the side point cloud of the unmanned boat. Point clouds in other angle ranges are bow or stern point clouds; when the lidar detects the stern or bow of the unmanned boat, the detection boat adaptively moves to a suitable position to collect side data.
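A minimal sketch of this angle-based view classification; the function name and coordinate conventions (2-D positions, heading in degrees) are illustrative assumptions, not from the patent:

```python
import math

def classify_view(lidar_xy, boat_center_xy, boat_heading_deg):
    """Classify a lidar observation as 'side' vs 'bow/stern' point cloud.
    The standard line is the boat's bow-to-stern central axis; the
    detection line joins the lidar and the boat centre. Per the method,
    an included angle of 20-160 degrees marks a side point cloud."""
    dx = lidar_xy[0] - boat_center_xy[0]
    dy = lidar_xy[1] - boat_center_xy[1]
    detection_deg = math.degrees(math.atan2(dy, dx))
    # Wrap the difference into [0, 180] to get the included angle.
    angle = abs((detection_deg - boat_heading_deg + 180) % 360 - 180)
    return "side" if 20 <= angle <= 160 else "bow/stern"
```

With a boat heading along +x, a lidar directly abeam (angle 90 degrees) would be classified as a side view, while one dead ahead or astern would be bow/stern, triggering the adaptive repositioning described above.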
According to the method, preferably, the weight of the foreground points in step (6) is increased using the following formula:

w_p = A(f_p^(i))

where w_p is the foreground point weight, A(·) is a three-layer multilayer perceptron, and f_p^(i) is the feature of the i-th layer at foreground point position p.
According to the method, preferably, the downsampling in step (2) consists of 1x, 2x, 4x and 8x downsampling of the voxel features obtained in step (1), yielding multi-layer features at different scales.
According to the method, preferably, in step (4) the multi-scale spliced feature maps are spliced by concatenating them along the channel dimension.
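The multi-scale resampling and channel-wise concatenation of step (4) can be sketched as follows; nearest-neighbour resizing, the two-scale setting and the function names are illustrative assumptions standing in for the network's learned down/up-sampling layers:

```python
import numpy as np

def nearest_resize(fmap, scale):
    """Nearest-neighbour up/down-sampling of a (C, H, W) feature map."""
    if scale >= 1:
        return fmap.repeat(int(scale), axis=1).repeat(int(scale), axis=2)
    step = int(round(1 / scale))
    return fmap[:, ::step, ::step]

def fuse_views(top, side, scales=(1, 0.5)):
    """Resample the top-view and side-view maps to each scale, concatenate
    same-scale pairs along channels, bring every pair back to the base
    resolution, and concatenate the multi-scale results channel-wise."""
    fused = []
    for s in scales:
        pair = np.concatenate(
            [nearest_resize(top, s), nearest_resize(side, s)], axis=0)
        fused.append(nearest_resize(pair, 1 / s) if s < 1 else pair)
    return np.concatenate(fused, axis=0)
```

Two (2, 4, 4) input maps at scales 1x and 0.5x thus produce one (8, 4, 4) final spliced feature map, mirroring the channel-wise splicing described in the text.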
the invention provides an intelligent unmanned search and rescue system based on a laser radar and a PV-RCNN detection method, which comprises a main control console and a search and rescue boat body boat, wherein a communication module, a navigation control module, a sensing module, a rescue module and an industrial control module are arranged on the search and rescue boat body boat; the main control console is used for receiving the search and rescue task, acquiring the size and GPS information of the searched and rescue boat, and communicating the size and GPS information of the searched and rescue boat to the search and rescue boat so as to update the position information of the searched and rescue boat in real time; the navigation control module is used for navigating the search and rescue boat to autonomously travel to an accident site; the sensing module is used for confirming the identity of the searched and rescuing boat and detecting the position of the searched and rescuing boat; the sensing module comprises a laser radar, wherein the laser radar acquires point cloud data by emitting laser beams, and processes the point cloud data by adopting the method of the first aspect to finish the task of detecting the position of the search and rescue boat;
the rescue module comprises a control cabinet and an electromagnet, the rescue boat is pulled out by the electromagnet and then safely returns to the rescue boat in parallel, and the industrial control console is used for coordinating task allocation and information transmission of the plurality of the rescue boats and the rescue boat.
According to the intelligent unmanned search and rescue system, preferably, the navigation control module comprises a GPS, an IMU and an electronic map, and adopts a composite positioning method combining GPS positioning, inertial positioning and electronic-map matching positioning.
The third aspect of the present invention provides a search and rescue method of the intelligent unmanned search and rescue system according to the second aspect, comprising the following steps:
s1: the main control station sends a command to the search and rescue boat through the communication module;
s2: the search and rescue boat receives the task information offshore;
s3: the search and rescue boat goes to the target position through the navigation control module, and the specific position of the search and rescue boat is detected through the laser radar and the detection method based on the PV-RCNN algorithm in the first aspect;
s4: the search and rescue boat is connected with the searched and rescue boat through the rescue module;
s5: the search and rescue boat and the searched and rescue boat return to the navigation through the route planning module, and obstacle avoidance is performed through a laser radar during the course;
s6: the search and rescue boat is in shore with the searched and rescue boat.
According to the search and rescue method, preferably, the specific process of step S1 is as follows: when an unmanned boat has an accident on the water surface, the boat to be rescued sends an SOS message mainly containing its GPS information and a basic appearance model; after the main console receives the SOS, it sends a command to the search and rescue boat through the communication module, supplies the GPS information of the boat to be rescued, and updates it in real time at a certain frequency.
According to the search and rescue method, preferably, the specific process of step S2 is that the search and rescue boat receives the real-time-updated information of the boat to be rescued, senses the surrounding environment with its lidar, and leaves the shore.
According to the search and rescue method, preferably, the specific process of step S3 is as follows:
3.1 After the search and rescue boat leaves the shore, the navigation control module steers it autonomously to the vicinity of the target GPS position, using the GPS position of the boat to be rescued combined with the lidar obstacle-avoidance system;
3.2 If the GPS signal of the boat to be rescued is good, the search and rescue boat approaches directly using the acquired GPS information and detects the specific position of the boat to be rescued with the lidar and the PV-RCNN-based detection method of the first aspect; if the GPS signal of the boat to be rescued is lost, the search and rescue boat approaches according to its last known GPS position, then senses it with the lidar and performs target detection with the PV-RCNN-based method of the first aspect to determine its specific position.
According to the search and rescue method, preferably, in step 3.1 the fused GPS/IMU positioning after the search and rescue boat leaves the shore is realized with the callable extended Kalman filter provided by the ROS platform.
Compared with the prior art, the invention has the following beneficial effects:
(1) To address the problem that the sparse convolution schemes in the traditional PV-RCNN algorithm cannot balance receptive-field size against convolution speed, the invention provides an improved PV-RCNN target detection algorithm. The method adopts a sparse convolution scheme that adaptively changes the receptive-field size, satisfying both the information-exchange and speed requirements between adjacent point clouds. Meanwhile, the invention classifies the point cloud data of the detected target, so that the detection boat carrying the PV-RCNN detection algorithm can adaptively adjust its heading on the water surface to obtain richer hull semantic information, improving target detection precision.
(2) The invention also provides a system and method for searching and rescuing unmanned boats that have broken down or run aground on the water surface; the system is highly flexible and responsive when executing surface search and rescue tasks. By combining the advantages of the lidar and the improved PV-RCNN target detection algorithm, it improves target search capability, can autonomously sense, analyze and detect near the search and rescue site, detects and recognizes targets faster, and thus speeds up rescue.
Drawings
FIG. 1 is a schematic diagram of an intelligent unmanned ship search and rescue system of the present invention;
FIG. 2 is a diagram of a GPS/IMU fusion framework of the present invention;
FIG. 3 is a point cloud view of the target as the search and rescue boat approaches the boat to be rescued;
FIG. 4 is a point cloud image of the search and rescue boat returning side by side with the boat to be rescued;
FIG. 5 is a projection of the shore point cloud swept by the lidar when the search and rescue boat docks.
Detailed Description
In order to enable those skilled in the art to more clearly understand the technical scheme of the present invention, the technical scheme of the present invention will be described in detail with reference to specific embodiments.
Example 1
A method for detecting a surface unmanned ship based on an improved PV-RCNN algorithm, comprising the steps of:
(1) Scanning the water surface scene to be detected with a lidar to obtain an original point cloud; voxelizing the point cloud into a number of voxels, averaging the features of the original points within each voxel to obtain that voxel's feature, and then extracting the side-view feature of the point cloud from the voxel features by a Multi-View method to obtain a side-view feature map;
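The voxelization-and-averaging part of step (1) can be sketched as follows; the function name, voxel size and use of raw xyz coordinates as the per-point feature are illustrative assumptions:

```python
import numpy as np

def voxelize_mean(points, voxel_size=(0.2, 0.2, 0.2)):
    """Group raw lidar points into voxels and average the point features
    inside each voxel to form that voxel's feature.
    `points` is an (N, 3) array of x, y, z coordinates."""
    idx = np.floor(points / np.asarray(voxel_size)).astype(np.int64)
    # Map each unique voxel index to the mean of its member points.
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    feats = np.zeros((len(keys), points.shape[1]))
    np.add.at(feats, inverse, points)
    feats /= np.bincount(inverse)[:, None]
    return keys, feats  # voxel grid indices and per-voxel mean features
```

Each returned row of `feats` is the voxel feature used downstream by the sparse convolution and Multi-View extraction.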
the process for obtaining the original point cloud comprises the following steps: the central axis from the stern of the unmanned ship to the bow is used as a standard line, the connecting line of the laser radar and the center of the ship is used as a detection line, the included angle between the detection line and the standard line is defined to be 90-180 degrees when the laser radar detects the point cloud on the bow or the bow plus side, and the included angle between the detection line and the standard line is defined to be 0-90 degrees when the laser radar detects the point cloud on the stern or the stern plus side. Therefore, the corresponding included angle is 20-160 degrees when the laser radar detects the side point cloud.
The results of collecting the water surface point cloud data in this embodiment are shown in table 1:
table 1 water surface point cloud data
As can be seen from Table 1, when only a bow or stern point cloud is available, the lidar's perception of the detected boat carries poor semantic information, whereas the side point cloud is semantically rich. Based on the attitude the detected boat presents to the detection boat, and since the detection boat navigates relatively freely on the water surface, the detection boat can adaptively adjust its bow angle so as to capture more side information of the detected boat, preserving rich semantic information even at greater distances and thus achieving higher recognition accuracy.
For example, when the point cloud detected by the detection boat is identified as bow or stern as in Table 1, the detection boat changes its track to change its position relative to the detected boat. Suppose the detection boat is directly ahead of the detected boat: it should move forward and to one side to a suitable position from which it can detect the side of the detected boat, gaining richer semantic information and more robust detection.
(2) Performing adaptive sparse convolution on the voxel features obtained in step (1). The specific operation is: the number of points in a voxel feature is compared with a set threshold, and when it exceeds the threshold the voxel feature is used as an information-communication voxel feature. For example, for a 3×3×3 convolution kernel covering 27 neighbouring cells, determine whether each cell contains points, and compare the point count against a threshold to decide whether the voxel is computed as an information-communication voxel. The threshold is determined by the sparseness of the point cloud: the sparser the point cloud, the smaller the threshold. In this way the receptive field can be selectively expanded and information exchanged adaptively.
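The 27-cell threshold test can be sketched as follows on a dense occupancy grid; the dense representation and function name are illustrative simplifications of the sparse implementation:

```python
import numpy as np

def info_voxels(occupancy, threshold):
    """For each occupied voxel, count the points in its 3x3x3
    neighbourhood (27 cells) and keep it as an 'information
    communication' voxel only if the count exceeds `threshold`.
    `occupancy` holds per-voxel point counts; sparser clouds
    call for a smaller threshold."""
    padded = np.pad(occupancy, 1)  # zero border so edges have 27 cells
    out = np.zeros_like(occupancy, dtype=bool)
    for x, y, z in np.ndindex(occupancy.shape):
        if occupancy[x, y, z] == 0:
            continue  # empty voxels are skipped entirely
        window = padded[x:x + 3, y:y + 3, z:z + 3]
        out[x, y, z] = window.sum() > threshold
    return out
```

Lowering `threshold` admits more voxels into the convolution (larger effective receptive field); raising it keeps the computation closer to submanifold sparse convolution.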
The information-communication voxel features are then downsampled to obtain multi-layer features at different scales;
(3) Compressing, along the height dimension, the features at the highest downsampling rate among the multi-layer features of step (2) to obtain a top-view feature map;
This addresses the case where only the bow or stern of the target is detected: the side view and top view of the unmanned boat cover larger areas than the front and rear views, so their semantic information is more complete.
(4) Downsampling and upsampling the top-view feature map of step (3) and the side-view feature map of step (1) to obtain top-view and side-view feature maps at different scales, splicing the maps of the same scale into spliced feature maps at multiple scales, and concatenating the multi-scale spliced maps along the channel dimension to obtain the final spliced feature map;
(5) Generating proposal boxes on the final spliced feature map by an anchor-box method;
(6) Performing a set abstraction operation on the highest-downsampling-rate features among the multi-layer features of step (2), then increasing the weight of the foreground points according to the formula w_p = A(f_p^(i)), where w_p is the foreground point weight, A(·) is a three-layer multilayer perceptron, and f_p^(i) is the feature of the i-th layer at foreground point position p. Finally, farthest point sampling is performed on the foreground points to obtain the key points;
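A minimal sketch of the three-layer perceptron A(·) producing a foreground weight; the sigmoid output squashing, ReLU hidden activations and externally supplied weight matrices are assumptions for illustration, not parameters from the patent:

```python
import numpy as np

def foreground_weight(f, weights):
    """Hypothetical sketch of w_p = A(f_p): a three-layer MLP
    (two ReLU hidden layers, one linear output) mapped through a
    sigmoid so the foreground weight lies in (0, 1).
    `weights` is a list of three weight matrices."""
    h = f
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)      # ReLU hidden layers
    z = h @ weights[-1]
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid -> weight in (0, 1)
```

In the method, such a weight would scale each point's feature before farthest point sampling, so key points are drawn preferentially from the foreground (the hull) rather than water clutter.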
(7) Using a RoI grid pooling module, aggregating the key points of step (6) into a RoI grid with multiple receptive fields to classify and regress the features within the proposal boxes, thereby completing the target detection task.
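A toy sketch of RoI grid pooling for one proposal box: grid points are laid out inside the box and each aggregates the features of nearby key points. The ball-query-and-mean aggregation, axis-aligned box and function name are simplifying assumptions; the actual module uses multiple receptive-field radii and learned aggregation:

```python
import numpy as np

def roi_grid_pool(keypoints, feats, box_center, box_size, grid=2, radius=1.0):
    """Place a grid x grid x grid lattice of points inside a proposal box
    and average the features of key points within `radius` of each grid
    point. Returns a (grid**3, C) pooled feature for the box head."""
    lin = (np.arange(grid) + 0.5) / grid - 0.5   # normalized grid coords
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    grid_pts = np.stack([gx, gy, gz], -1).reshape(-1, 3) * box_size + box_center
    pooled = np.zeros((len(grid_pts), feats.shape[1]))
    for i, g in enumerate(grid_pts):
        mask = np.linalg.norm(keypoints - g, axis=1) < radius
        if mask.any():
            pooled[i] = feats[mask].mean(axis=0)  # ball-query average
    return pooled
```

The flattened pooled features would then feed the classification and regression heads that refine each proposal box.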
Example 2
This embodiment provides an intelligent unmanned search and rescue system: when an unmanned boat breaks down or runs aground on the water surface at a test site, the main console receives the rescue task and dispatches a search and rescue boat to the target water area to search for and rescue the stricken unmanned boat.
As shown in FIG. 1, the intelligent unmanned ship search and rescue system provided by the invention comprises a main console and a search and rescue boat. The boat hull carries a communication module, a navigation control module, a sensing module, a rescue module and an industrial control module. The main console receives the search and rescue task, acquires the size and GPS information of the boat to be rescued, and relays the GPS information to the search and rescue boat so that the position of the boat to be rescued is updated in real time. The navigation control module navigates the search and rescue boat autonomously to the target GPS position (the accident site). The sensing module confirms the identity of the boat to be rescued and detects its specific position. The rescue module tows the boat to be rescued and returns it safely alongside. The industrial control module coordinates task allocation and information transmission between multiple boats to be rescued and the search and rescue boats.
The navigation control module comprises a GPS, an IMU and an electronic map, and adopts a composite positioning method of GPS positioning, inertial positioning and electronic map matching positioning.
The sensing module comprises a lidar, which scans by emitting laser beams to acquire the size, shape and distance of the boat to be rescued and build a real-time 3D environment; the point cloud data is then processed with the improved PV-RCNN algorithm of embodiment 1 to confirm the position of the boat to be rescued.
Example 3
A search and rescue method of an intelligent unmanned ship search and rescue system comprises the following steps:
step 1: the main control station sends a command to the search and rescue boat through the communication module;
the specific process is as follows: when there is unmanned ship to take place the accident in the surface of water, send the SOS information by the search and rescue ship, this information mainly includes GPS information and basic appearance model information by the search and rescue ship, and after the main control station received the SOS information by the search and rescue ship, send the order for the search and rescue ship through communication module, give the GPS information by the search and rescue ship and with certain frequency real-time update.
Step 2: the search and rescue boat receives the task information and leaves the shore;
the specific process is as follows: the search and rescue boat receives the search and rescue boat information updated in real time, and the search and rescue boat with the laser radar senses the surrounding environment and is offshore.
Step 3: the search and rescue boat travels to the vicinity of the target position via the navigation control module and detects the specific position of the boat to be rescued with the improved PV-RCNN algorithm of embodiment 1;
the specific process is as follows:
3.1 After the search and rescue boat leaves the shore, the navigation control module drives it autonomously to the vicinity of the target GPS position according to the GPS position of the boat to be rescued, combined with the laser radar obstacle avoidance system;
Extreme conditions may be encountered during this process, such as narrow channels and sections passing under bridges. There the GPS signal of the search and rescue boat may be lost, or may be unstable with a low refresh rate (generally 10 Hz), risking a collision with the bank. The GPS longitude and latitude are therefore fed into the IMU as input signals; by measuring acceleration and angular velocity the IMU allows the boat to traverse a narrow channel, and a map of the surroundings built by laser radar SLAM, combined with the IMU, lets the boat pass through narrow, winding waterways.
The GPS/IMU fusion process is shown in FIG. 2:
An IMU (Inertial Measurement Unit) typically consists of 3 accelerometers and 3 gyroscopes and measures acceleration and angular velocity along three axes, denoted a_x, a_y, a_z and w_x, w_y, w_z respectively. A GPS (Global Positioning System) receiver carried on the unmanned boat provides positioning information: longitude, latitude and height.
Because the GPS update rate (below 10 Hz) does not meet the real-time requirement while the IMU can run at 50 Hz or more, the GPS and IMU deliver data at different times. When the GPS has not yet produced a fix and only IMU data is available, the IMU data is passed through the motion model to obtain the predicted state X̌_k, which is simultaneously fed back into the motion model to continue the next prediction. In the other case, when GPS data arrives, the predicted state from the previous time step is fused with the received GPS position information to obtain the corrected state X̂_k, which is then fed back into the motion model for subsequent prediction.
The state vector of the motion model consists of 3 parts: position, velocity and orientation. These are updated as follows:
Position p_k: p_k = p_{k-1} + Δt·v_{k-1} + (Δt²/2)(C_ns·f_{k-1} − g), where f_{k-1} is the IMU measurement and C_ns performs the coordinate transformation of the IMU measurement into the navigation frame;
Assuming uniformly accelerated linear motion, velocity v_k: v_k = v_{k-1} + Δt(C_ns·f_{k-1} − g);
Orientation q_k: q_k = Ω(q(w_{k-1}Δt))·q_{k-1}, where q_{k-1} is represented as a quaternion;
The basic principle is shown in FIG. 2; GPS/IMU fusion positioning is implemented with the callable extended Kalman filter provided by the ROS platform.
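The predict/correct loop described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (position/velocity state only, orientation supplied externally as a rotation matrix `C_ns`, hand-picked noise covariances); the class and parameter names are hypothetical, and a real system would use the ROS extended Kalman filter as stated:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # g expressed in the navigation frame

class GpsImuEkf:
    """Minimal GPS/IMU predict/correct loop (position + velocity only)."""

    def __init__(self, p0, v0):
        self.x = np.hstack([p0, v0])   # state: [p(3), v(3)]
        self.P = np.eye(6) * 1.0       # state covariance
        self.Q = np.eye(6) * 0.01      # process noise
        self.R = np.eye(3) * 4.0       # GPS measurement noise

    def predict(self, f_imu, C_ns, dt):
        """IMU step: p_k = p_{k-1} + dt*v + dt^2/2*(C_ns f - g), v_k = v + dt*(C_ns f - g)."""
        a = C_ns @ f_imu + GRAVITY     # acceleration in the navigation frame
        p, v = self.x[:3], self.x[3:]
        self.x = np.hstack([p + dt * v + 0.5 * dt**2 * a, v + dt * a])
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)     # state-transition Jacobian
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, gps_pos):
        """GPS step: fuse a position fix into the predicted state."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # GPS observes position only
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ (gps_pos - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P
```

Between GPS fixes only `predict` runs at the IMU rate; whenever a fix arrives, `correct` pulls the predicted position toward the measurement, matching the X̌_k / X̂_k loop of FIG. 2.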
3.2 If the GPS signal of the boat to be rescued is good, the search and rescue boat approaches directly using the received GPS information and senses the boat's specific position with the laser radar;
If the GPS signal is lost near the shore, the boat to be rescued is likely aground or close to the bank with a limited range of movement, so the search and rescue boat approaches according to the last received GPS fix of the boat to be rescued and then senses the surroundings with the laser radar. FIG. 3 shows the image perceived by the laser radar while docking with the boat to be rescued; the improved PV-RCNN algorithm is applied to this point cloud for target detection, determining the specific position of the boat to be rescued.
Once the proposal box of the boat to be rescued is detected, its relative distance d and bearing α can be calculated, and its GPS position G2 (long2, lat2) is computed from the search and rescue boat's own GPS fix G1 (long1, lat1).
The calculation method comprises the following steps:
long2=long1+d*sinα/[ARC*cos(lat1)*2π/360]
lat2=lat1+d*cosα/(ARC*2π/360)
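The two offset formulas above translate directly into code. A minimal sketch — here `ARC` is taken as the mean Earth radius in metres, an assumption, since the patent does not state its value, and `alpha` is the bearing in degrees clockwise from north:

```python
import math

ARC = 6371.393 * 1000  # assumed: mean Earth radius in metres

def offset_position(long1, lat1, d, alpha):
    """Compute the target's fix G2(long2, lat2) from our fix G1(long1, lat1),
    the lidar range d (metres) and the bearing alpha (degrees), using:
        long2 = long1 + d*sin(a) / (ARC*cos(lat1)*2*pi/360)
        lat2  = lat1  + d*cos(a) / (ARC*2*pi/360)
    """
    a = math.radians(alpha)
    metres_per_deg = ARC * 2 * math.pi / 360          # metres per degree of latitude
    long2 = long1 + d * math.sin(a) / (metres_per_deg * math.cos(math.radians(lat1)))
    lat2 = lat1 + d * math.cos(a) / metres_per_deg
    return long2, lat2
```

For example, an offset of one latitude-degree's worth of metres due north (α = 0) raises the latitude by exactly one degree.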
Step 4: the search and rescue boat docks with the boat to be rescued through the rescue module;
The specific process is as follows: the search and rescue boat carries a control cabinet and an electromagnet mounted on its side; the electromagnet attracts and holds the boat to be rescued.
Step 5: the search and rescue boat and the boat to be rescued return along the route planned by the navigation control module, avoiding obstacles with the laser radar en route. The return process in step 5 is similar to step 3; a point cloud image of the search and rescue boat returning side by side with the boat to be rescued is shown in FIG. 4;
Step 6: the search and rescue boat comes ashore with the boat to be rescued.
The specific process is as follows: FIG. 5 shows the projected shore point cloud drawn by the laser radar as the search and rescue boat docks. The shore boundary line is extracted from the point cloud data and obtained through projection transformation; the route is then planned against this boundary so that the boat keeps a fixed distance from the shore and docks parallel to it. While docking, the water current and boat speed cause different return times for different laser emission points, distorting the point cloud, so the real-time position of the search and rescue boat is corrected by combining the velocity and acceleration information of the IMU with the position information of the GPS sensor.
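The docking step extracts a shore boundary line from the projected point cloud and keeps the boat parallel to it. A minimal sketch under stated assumptions — the patent does not specify the extraction method, so nearest-return-per-bin boundary picking and a least-squares line fit are used here as illustrative choices:

```python
import numpy as np

def fit_shore_line(points_xy, n_bins=20):
    """Fit the shore boundary line y = m*x + c in the horizontal projection:
    bin the returns along x, keep the nearest (minimum-y) return per bin as the
    water/shore boundary, then least-squares fit those boundary points."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bx, by = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        if mask.any():
            i = np.argmin(y[mask])          # closest return in this bin
            bx.append(x[mask][i])
            by.append(y[mask][i])
    m, c = np.polyfit(bx, by, 1)            # slope and intercept of the shore line
    return m, c

def parallel_heading_error(boat_heading, m):
    """Angle (rad) between the boat's heading and the fitted shore line;
    driving this to zero brings the boat alongside in parallel."""
    return boat_heading - np.arctan(m)
```

The heading error, together with the fitted line's offset, would feed the route planner that holds the fixed shore distance.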
The embodiments described above are specific embodiments of the present invention, but the embodiments of the present invention are not limited to the embodiments described above, and any other combinations, changes, modifications, substitutions, and simplifications that do not exceed the design concept of the present invention fall within the scope of the present invention.
Claims (10)
1. A method for detecting a surface unmanned ship based on an improved PV-RCNN algorithm, comprising the steps of:
(1) Scanning a water surface scene to be detected with a laser radar to obtain an original point cloud; voxelizing the obtained original point cloud to form a plurality of voxels, averaging the original point cloud features within each voxel as that voxel's feature, and then extracting the side-view feature of the point cloud from the voxel features by a Multi-View method to obtain a side-view feature map;
(2) Performing adaptive sparse convolution processing on the voxel characteristics obtained in the step (1) to obtain information communication voxel characteristics; performing downsampling operation on the information communication voxel characteristics to obtain multi-layer characteristics with different scales;
(3) Compressing in height the feature obtained at the highest sampling multiple among the multi-layer features of step (2), to obtain a top-view feature map;
(4) Downsampling and upsampling the top-view feature map and the side-view feature map to obtain top-view and side-view feature maps at different scales; splicing the top-view and side-view feature maps of the same scale to obtain spliced feature maps at multiple scales; then splicing the multi-scale spliced feature maps to obtain the final spliced feature map;
(5) Generating proposal boxes for the final spliced feature map using an anchor-box method;
(6) Performing a set abstraction operation on the feature obtained at the highest sampling multiple among the multi-layer features of step (2), increasing the weight of the foreground points therein, and finally performing farthest point sampling on the foreground points to obtain key points;
(7) Aggregating the key points of step (6) into RoI grids with multiple receptive fields using a RoI grid pooling module, and classifying and regressing the features within the proposal boxes to complete the target detection task.
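The farthest point sampling of step (6) can be sketched as follows — a minimal NumPy version of the standard greedy algorithm; the function name and the O(nk) formulation are illustrative, not the patent's implementation:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Pick k keypoints that are maximally spread out: repeatedly take the
    point farthest from everything chosen so far."""
    chosen = [0]                                   # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # farthest from the chosen set
        chosen.append(nxt)
        # each point's distance to the chosen set shrinks (or stays) after adding nxt
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]
```

Run on the (foreground-weighted) points, this yields the small, well-spread keypoint set that the RoI grid pooling of step (7) consumes.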
2. The method according to claim 1, wherein the adaptive sparse convolution operation performed in step (2) on the voxel features obtained in step (1) is: comparing the number of points in each voxel feature with a set threshold, and when the number of points in the voxel feature is greater than the set threshold, using that voxel feature as an information communication voxel feature.
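Claim 2's gating rule amounts to a mask over per-voxel point counts. A minimal sketch — the threshold value here is an arbitrary placeholder, not a value from the patent:

```python
import numpy as np

def gate_voxels(voxel_features, points_per_voxel, threshold=5):
    """Adaptive gating: a voxel's feature is passed on to the sparse
    convolution as an 'information communication' voxel only when it holds
    more points than the threshold (sparse/noisy voxels are dropped)."""
    keep = points_per_voxel > threshold
    return voxel_features[keep], keep
```

Only the kept voxels then enter the downsampling pyramid of step (2), which is what makes the convolution adaptively sparse.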
3. The method according to claim 2, wherein scanning the water surface scene to be detected with the laser radar in step (1) includes: taking the central axis from the bow to the stern of the unmanned ship as the standard line and the line connecting the laser radar and the centre of the unmanned ship as the detection line; when the included angle between the detection line and the standard line is 20°~160°, the point cloud information detected by the laser radar is the side point cloud of the unmanned ship.
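The 20°~160° side-view filter of claim 3 can be sketched per lidar return, under the simplifying assumption that the target boat's centre is at the origin and its bow-to-stern axis (the standard line) lies along +x; the claim states the geometry for the lidar-to-centre detection line, so this per-point form is an illustration only:

```python
import numpy as np

def side_point_mask(points_xy, deg_min=20.0, deg_max=160.0):
    """Keep returns whose included angle with the bow-to-stern axis (+x)
    lies in [deg_min, deg_max]; |y| makes the angle unsigned, as an
    included angle is."""
    angles = np.degrees(np.arctan2(np.abs(points_xy[:, 1]), points_xy[:, 0]))
    return (angles >= deg_min) & (angles <= deg_max)
```

Returns near the bow (small angle) or stern (near 180°) are excluded, leaving the side point cloud.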
4. A method according to claim 3, wherein the foreground points are weighted in step (6) using the formula:
w_p = MLP(f_p^i), where w_p is the foreground point weight, MLP(·) is a three-layer multilayer perceptron, and f_p^i is the feature of foreground point p at the i-th layer.
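The foreground weighting of claim 4 can be sketched as a three-layer perceptron scoring each point feature. The layer sizes, the random weights, and the sigmoid squashing into (0, 1) are illustrative assumptions — the claim fixes only the depth at three layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer MLP producing a scalar foreground weight per point feature.
# Weights are random placeholders; a real model would learn them.
W1 = rng.normal(size=(64, 32)) * 0.05
W2 = rng.normal(size=(32, 16)) * 0.05
W3 = rng.normal(size=(16, 1)) * 0.05

def foreground_weight(f_p):
    """w_p = sigmoid(MLP(f_p)): a score in (0, 1) per point feature."""
    h = np.maximum(0, f_p @ W1)   # ReLU hidden layer 1
    h = np.maximum(0, h @ W2)     # ReLU hidden layer 2
    return sigmoid(h @ W3).squeeze(-1)

def reweight(features, weights):
    """Scale each point's feature by its foreground weight (step (6))."""
    return features * weights[:, None]
```

The reweighted features emphasize likely-foreground points before farthest point sampling selects the keypoints.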
5. The method of claim 4, wherein the downsampling operation is a 1x, 2x, 4x, 8x downsampling operation of the voxel features obtained in step (1) to obtain multi-layer features of different scales.
6. The method of claim 5, wherein the step (4) of stitching the stitched feature map of multiple dimensions is performed by stitching by channels.
7. An intelligent unmanned search and rescue system based on a laser radar and the PV-RCNN detection method, characterized by comprising a main control console and a search and rescue boat, wherein the search and rescue boat is provided with a communication module, a navigation control module, a sensing module, a rescue module and an industrial control module;
the main control console is used for receiving the search and rescue task, acquiring the size and GPS information of the boat to be rescued, and communicating them to the search and rescue boat so as to update the position information of the boat to be rescued in real time;
the navigation control module is used for navigating the search and rescue boat to autonomously travel to an accident site;
the sensing module is used for confirming the identity of the boat to be rescued and detecting its position; the sensing module comprises a laser radar, which acquires point cloud data by emitting laser beams and processes the point cloud data by the method of claim 1 to complete the task of detecting the position of the boat to be rescued;
the rescue module is used for holding the boat to be rescued with an electromagnet and then returning with it safely side by side;
the industrial control module is used for coordinating task allocation and information transmission among a plurality of boats to be rescued and search and rescue boats.
8. The search and rescue method of the intelligent unmanned search and rescue system as claimed in claim 7, comprising the steps of:
S1: the main control console sends a command to the search and rescue boat through the communication module;
S2: the search and rescue boat receives the task information and leaves the shore;
S3: the search and rescue boat travels to the target position via the navigation control module, and the specific position of the boat to be rescued is detected by the method of claim 1;
S4: the search and rescue boat docks with the boat to be rescued through the rescue module;
S5: the search and rescue boat and the boat to be rescued return along the route planned by the navigation control module, the laser radar sensing the environment to avoid obstacles en route;
S6: the search and rescue boat comes ashore with the boat to be rescued.
9. The search and rescue method as claimed in claim 8, wherein the specific process of step S3 is as follows:
3.1 After the search and rescue boat leaves the shore, the navigation control module drives it autonomously to the vicinity of the target GPS position according to the GPS position of the boat to be rescued, combined with the laser radar obstacle avoidance system;
3.2 If the GPS signal of the boat to be rescued is good, the search and rescue boat approaches directly using the received GPS information and detects the boat's specific position by the detection method of claim 1; if the GPS signal is lost near the shore, the search and rescue boat approaches according to the last GPS information of the boat to be rescued, then senses the surroundings with the laser radar and performs target detection by the detection method of claim 1, thereby determining the specific position of the boat to be rescued.
10. The search and rescue method of claim 9, wherein after the search and rescue boat leaves the shore in step 3.1, fusion positioning of the GPS and the IMU is achieved with the callable extended Kalman filter provided by the ROS platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310196909.8A CN116486252A (en) | 2023-03-03 | 2023-03-03 | Intelligent unmanned search and rescue system and search and rescue method based on improved PV-RCNN target detection algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116486252A true CN116486252A (en) | 2023-07-25 |
Family
ID=87223923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310196909.8A Pending CN116486252A (en) | 2023-03-03 | 2023-03-03 | Intelligent unmanned search and rescue system and search and rescue method based on improved PV-RCNN target detection algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116486252A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117115704A (en) * | 2023-08-03 | 2023-11-24 | 武汉理工大学 | Marine search and rescue system and method based on multi-sensor fusion
CN117115704B (en) * | 2023-08-03 | 2024-04-02 | 武汉理工大学 | Marine search and rescue system and method based on multi-sensor fusion
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||