CN113108780A - Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm - Google Patents

Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm

Info

Publication number
CN113108780A
CN113108780A (application CN202110340449.2A)
Authority
CN
China
Prior art keywords
image
visual
information
inertial navigation
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110340449.2A
Other languages
Chinese (zh)
Inventor
沈奥 (Shen Ao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110340449.2A
Publication of CN113108780A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned ship autonomous navigation method based on a visual inertial navigation SLAM algorithm, which fuses visual information and inertial navigation information to perceive the navigation environment and localize the unmanned ship, thereby providing the key information for autonomous navigation. The invention adopts a visual odometer front-end design based on an HSV color-region segmentation algorithm, which excludes dynamic water-surface regions from feature point detection and thereby improves the accuracy of the feature points used in the system's pose estimation. Line features are added at the front end of the odometer and combined with point features; the complementary advantages of point and line feature information improve the stability and robustness of the whole system, enrich the types of feature information extracted by the visual odometer front end, and make the feature representation more stable. A visual dictionary of combined point-line features is established to obtain an accurate, globally consistent trajectory map, so that the localization trajectory is closer to the real navigation trajectory of the unmanned ship.

Description

Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm
Technical Field
The invention relates to the technical field of unmanned ships, in particular to an unmanned ship autonomous navigation method based on a visual inertial navigation SLAM algorithm.
Background
Countries around the world pay increasing attention to ocean resources. Building a strong maritime nation conforms to the world development trend and serves major national strategic goals, and a country's maritime strength directly determines and influences its political, economic, security, and cultural development. Technological innovation is an important measure and the essential way to realize maritime power.
In the current period of rapid artificial-intelligence development, the unmanned ship, as a novel offshore intelligent platform, is characterized by autonomous motion control, strong adaptability to complex environments, and rapid response to emergencies. Like other intelligent platforms, the unmanned ship can autonomously plan paths and sail, and can complete tasks such as environment perception, target identification and detection, and target tracking in complex water-surface environments, either with manual intervention or through autonomous information acquisition.
As novel intelligent equipment, unmanned ships have a very wide range of applications. Examples include scientific research (depth-measurement research, multi-ship cooperation and control-strategy research), environmental studies (marine environment detection, sampling and evaluation, typhoon early warning at sea), military applications (port investigation and patrol, search and rescue, anti-terrorism protection), and ocean resource exploration (submarine detection, offshore oil and gas exploration, sea-surface platform construction and maintenance). In a complex water environment, whether the unmanned ship can navigate and localize accurately is the key to its safe operation. Simultaneous Localization and Mapping (SLAM) technology is the key to solving this problem, and visual SLAM provides a feasible scheme for autonomous navigation and environment detection of the unmanned ship in unknown marine environments.
Disclosure of Invention
The invention aims to overcome the following defects of the prior art: the light in the marine environment changes greatly, and image information is easily affected by light reflected from the water surface and other environmental factors; water-surface targets are viewed at long range, placing strict requirements on the vision sensor; the marine environment is complex, harsh, and changeable; and, compared with conventional ships, the unmanned ship is small in size and weight, so its sailing stability on the open ocean is poor and the quality of the acquired water-surface images is low. To this end, the invention provides an unmanned ship autonomous navigation method based on a visual inertial navigation SLAM algorithm.
The invention is realized by the following technical scheme:
an unmanned ship autonomous navigation method based on a visual inertial navigation SLAM algorithm comprises a visual odometer front-end module, a system initialization module, a front-end optimization module and a closed-loop detection module;
visual odometer front-end module: a visual odometer front end based on an HSV color-region segmentation algorithm screens the pixel points of the image information acquired in real time by the shipborne high-definition camera and selects the corner features of fixed-reference-frame regions in the image;
a system initialization module: adding line features at the front end of the visual odometer, and extracting the line features in the image information according to a line segment detector algorithm;
a front-end optimization module: first, the image information acquired by the shipborne high-definition camera and the inertial navigation information acquired by the inertial measurement unit are preprocessed; the image is distortion-corrected according to the intrinsic matrix and distortion coefficients from the camera calibration to obtain visual information; the preprocessed image information is then handled by two parallel threads, in which feature points of dynamic regions in the image are screened out based on HSV point features and line features are extracted with the line segment detector algorithm; point-feature and line-feature matches between adjacent frames are screened by a random sample consensus algorithm to eliminate mismatched point and line features; the preprocessed visual and inertial information are processed in parallel threads, fused, and used to estimate the real-time pose information of the unmanned ship;
a closed-loop detection module: an offline visual dictionary of combined point-line features for the water-surface environment is established to obtain a globally consistent map.
The specific working steps of the visual odometer front-end module are as follows:
A1. The front end of the visual odometer corrects the image information acquired in real time using the calibrated camera distortion parameters and preprocesses the image; after preprocessing, the pipeline splits into two parallel, simultaneous threads: one processing point features and one extracting the fixed-reference-frame regions;
A2. Harris corner detection is first performed on the image to obtain the pixel coordinates of all point features;
A3. The image information is converted from the RGB model to the HSV model, the converted HSV image is split into the three channels H, S, V, and the value ranges of the required color regions in HSV are determined; the conversion formulas are as follows:
$$V = \max(R, G, B)$$

$$S = \begin{cases} \dfrac{V - \min(R, G, B)}{V}, & V \neq 0 \\ 0, & V = 0 \end{cases}$$

$$H = \begin{cases} \dfrac{60(G - B)}{V - \min(R, G, B)}, & V = R \\ 120 + \dfrac{60(B - R)}{V - \min(R, G, B)}, & V = G \\ 240 + \dfrac{60(R - G)}{V - \min(R, G, B)}, & V = B \end{cases}$$

(if $H < 0$, $H$ is increased by 360)
By setting the parameter values Hmax, Hmin, Smax, Smin, Vmax, Vmin in the program, all pixel points in the three channels are screened;
A4. The detected Harris corners are further screened, and the corner features of the fixed-reference-frame regions in the image are selected.
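For illustration only, the following is a minimal Python/OpenCV sketch of steps A2-A4 under stated assumptions: the function name and the HSV bounds are hypothetical, and cv2.goodFeaturesToTrack with useHarrisDetector=True stands in for the Harris corner detection described above.

```python
import cv2
import numpy as np

def screen_corners_by_hsv(bgr_image, h_range, s_range, v_range, max_corners=500):
    """Detect Harris corners, then keep only those inside the HSV color mask."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # A2: Harris corner detection over the whole image.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)

    # A3: RGB -> HSV conversion and per-channel thresholding (Hmin..Hmax, etc.).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([h_range[0], s_range[0], v_range[0]], dtype=np.uint8)
    upper = np.array([h_range[1], s_range[1], v_range[1]], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)  # 255 inside the selected color region

    # A4: keep only the corners that fall inside the fixed-reference-frame region.
    kept = [c.ravel() for c in corners if mask[int(c[0][1]), int(c[0][0])] > 0]
    return np.array(kept, dtype=np.float32)
```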
The line segment detector algorithm comprises the following specific working steps:
B1. Image scaling: the image is sampled along the x and y axes at a ratio of 0.8, giving an area scaling ratio E = 0.64; this suppresses aliasing (jagged edges) in the image.
B2. Gradient computation: the gradient of each pixel is computed from the 2×2 block of pixels to its lower right, reducing the dependence between pixel points.
B3. Gradient ordering: pixels are ordered by gradient magnitude; the larger the magnitude, the more salient the pixel.
B4. Gradient-magnitude thresholding: a pixel whose gradient magnitude is below the threshold is regarded as noise and is not considered further.
B5. Line-support region growing: for the pixels around a seed pixel, the gradient magnitude and the direction error between gradient directions are compared with a threshold; a pixel $j$ is merged into the line-support region when its direction tolerance value $\tau_j = |\theta - \theta_j|$ is below the tolerance threshold $\tau$, where $\theta$ is the direction of the region, $\theta_j$ is the gradient (level-line) direction of pixel $j$, and $\tau_j$ denotes the direction error of the $j$-th pixel point.
B6. Rectangle estimation: a rectangular approximation of the line-support region is computed, with its center taken as the gradient-weighted centroid:

$$l_x = \frac{\sum_{j \in Re} G(j)\, x(j)}{\sum_{j \in Re} G(j)}, \qquad l_y = \frac{\sum_{j \in Re} G(j)\, y(j)}{\sum_{j \in Re} G(j)}$$

where $(l_x, l_y)$ are the coordinates of the center point of the rectangle, $G(j)$ is the gradient magnitude of pixel point $j$, and the sums $\sum_{j \in Re} G(j)$ traverse all pixel points in the line-support region $Re$.
B7. False-alarm count computation: the number of false alarms (NFA) is computed to validate each candidate line segment.
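As a hedged illustration of steps B1-B7 (not the patent's own implementation), the sketch below relies on OpenCV's built-in LSD detector, which internally performs the scaling, gradient computation and ordering, region growing, rectangle estimation, and NFA validation described above; note that some OpenCV builds omit this detector for licensing reasons.

```python
import cv2

def detect_lsd_lines(bgr_image):
    """Extract line segments with OpenCV's Line Segment Detector (LSD)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # scale=0.8 matches the sampling ratio of step B1.
    lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_STD, scale=0.8)
    lines, widths, precisions, nfas = lsd.detect(gray)
    return lines  # N x 1 x 4 array of (x1, y1, x2, y2) endpoints, or None
```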
The front-end optimization module comprises the following specific steps:
C1. First, the image is distortion-corrected according to the intrinsic matrix and distortion coefficients from the camera calibration to obtain better visual information; the inertial navigation information of the inertial measurement unit is accumulated by IMU pre-integration and converted into the required coordinate system;
C2. After distortion correction, the image information is processed further in two parallel threads: first, the dynamic-region feature points in the image are effectively screened out based on HSV point features; second, line features in the image are extracted and the LBD descriptors are binarized to improve matching efficiency and hence the real-time performance of the system;
C3. Point-feature and line-feature matches between adjacent frames are screened with random sample consensus (RANSAC) to eliminate mismatched point and line features;
C4. The acquired visual information and inertial navigation information are processed in parallel threads to improve information-processing efficiency, and the processed visual and inertial information are then fused to estimate the real-time pose information of the unmanned ship.
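As an illustrative sketch of the RANSAC screening in step C3 (an assumption, not the patent's exact procedure), point-feature mismatches can be rejected by fitting a fundamental matrix between adjacent frames; pts_prev and pts_curr are hypothetical N x 2 arrays of matched point coordinates.

```python
import cv2
import numpy as np

def ransac_filter_matches(pts_prev, pts_curr, threshold_px=1.0):
    """Keep only the correspondences that are RANSAC inliers of an epipolar model."""
    pts_prev = np.asarray(pts_prev, dtype=np.float32)
    pts_curr = np.asarray(pts_curr, dtype=np.float32)
    F, inlier_mask = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC,
                                            ransacReprojThreshold=threshold_px,
                                            confidence=0.99)
    if inlier_mask is None:  # too few or degenerate matches
        return pts_prev[:0], pts_curr[:0]
    keep = inlier_mask.ravel().astype(bool)
    return pts_prev[keep], pts_curr[keep]
```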
The closed loop detection module comprises the following specific steps:
D1. First, the image acquired by the unmanned ship is preprocessed: image distortion correction is applied to the acquired image, correcting the distorted regions at the image edges. Second, key frames are selected from the images: similarity is judged from the numbers of point features and line features in the images of two adjacent key frames; when the similarity of two adjacent key frames exceeds a threshold, their distinguishability is low and they are not used for closed-loop detection, and when the similarity does not exceed the threshold, the currently acquired image is used for closed-loop detection. Then, closed-loop response detection is performed with the created visual dictionary of combined point-line features: the point and line features extracted from the current image frame are matched to visual words in the dictionary, and the matched visual words are looked up quickly in the historical image database to retrieve a historical image similar to the current one, generating a closed-loop response for the unmanned ship's navigation trajectory map. Finally, the closed loops are screened to obtain a globally consistent trajectory map;
D2. Closed-loop detection: the generated visual dictionary tree is first stored in a BoW database, with a storage scheme including a forward index and a reverse index, used to retrieve from the database the historical images most similar to the current image. Image similarity is then calculated: images are matched by computing the similarity between all the features in the two images, with the similarity between two images expressed by the Minkowski distance. The image matching strategy is divided into complete matching and similarity matching: complete matching means all features in the two images are identical, while similarity matching means the similarity over all features of the two images exceeds a set threshold. In visual SLAM closed-loop detection, image similarity matching is adopted, and the similarity between images is computed from the distance between the features of the current image obtained by the unmanned ship and the image features in the historical image library. The calculation is as follows: for two visual-word vectors $P = [p_1, p_2, \ldots, p_n]^T$ and $Q = [q_1, q_2, \ldots, q_n]^T$, the Minkowski distance is
$$D(P, Q) = \left( \sum_{i=1}^{n} |p_i - q_i|^r \right)^{1/r}$$
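A minimal numeric sketch of this distance (assuming simple visual-word frequency vectors; the function name is illustrative):

```python
import numpy as np

def minkowski_distance(P, Q, r=2):
    """Minkowski distance between visual-word vectors; r=1 is Manhattan, r=2 Euclidean."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    return np.power(np.sum(np.abs(P - Q) ** r), 1.0 / r)

# Example: comparing two tiny word-histogram vectors.
d = minkowski_distance([0.2, 0.5, 0.3], [0.1, 0.6, 0.3], r=2)
```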
The invention has the following advantages: it adopts a visual odometer front-end design based on the HSV color-region segmentation algorithm, which excludes the dynamic water-surface regions from feature point detection and thereby improves the accuracy of the feature points used in the system's pose estimation;
line features are added at the front end of the odometer and integrated with point features; the complementary advantages of point and line feature information improve the stability and robustness of the whole system, enrich the types of feature information extracted by the visual odometer front end, and make the feature representation more stable;
by establishing an offline visual dictionary of combined point-line features for the water-surface environment, the accuracy of closed-loop detection is improved and an accurate, globally consistent map is obtained.
Drawings
FIG. 1 is a flow chart of the system of the present invention;
FIG. 2 is an HSV color region segmentation algorithm;
FIG. 3 is a modified visual odometer design;
fig. 4 is a flow chart of visual SLAM closed loop self-test.
Detailed Description
As shown in fig. 1 and 4, the unmanned ship autonomous navigation method based on the visual inertial navigation SLAM algorithm includes 4 modules, namely a visual odometer front-end module, a system initialization module, a front-end optimization module and a closed-loop detection module;
As shown in fig. 3, the specific steps of the visual odometer front-end module are as follows:
and the front end of the visual odometer corrects the image information acquired by the shipborne high-definition camera in real time through the calibrated camera distortion parameter, and preprocesses the image. After image preprocessing, the method is divided into a thread for processing the point characteristics and a thread for extracting the fixed reference system area. The two threads are parallel threads and are processed simultaneously.
First, Harris corner detection is performed on the preprocessed image to obtain the pixel coordinates of all point features in the image.
Second, the image is converted from the RGB color space to the HSV (hue, saturation, value) color space, reducing the influence of illumination changes on the subsequent color-region segmentation. As shown in fig. 2, the converted HSV image is split into the three channels H, S, V, the value ranges corresponding to the required color regions in HSV are determined, and the parameter values Hmax, Hmin, Smax, Smin, Vmax, Vmin are set in the program to screen all pixel points in the three channels. (Hmin and Hmax are the minimum and maximum hue, Smin and Smax the minimum and maximum saturation, and Vmin and Vmax the minimum and maximum brightness.)
The Harris corners detected in the image are then further screened in combination with the fixed-reference-frame regions, and the corner features of those regions in the image are selected.
In the system initialization module, the Line Segment Detector (LSD) is a line-segment detection algorithm that obtains detection results with sub-pixel accuracy in linear time; it yields high-accuracy straight-line detection in a short time, and its core idea is to merge pixel points with similar gradients. The algorithm can be divided into 7 steps:
image scaling;
gradient computation;
gradient ordering;
gradient-magnitude thresholding;
line-support region growing;
rectangle estimation;
false-alarm count computation.
The front-end optimization module comprises the following specific steps:
firstly, preprocessing image information acquired by a high-definition camera and inertial navigation information acquired by an inertial navigation measurement unit. And carrying out distortion correction on the image according to the internal reference matrix and the distortion coefficient of the camera calibration result of the image information acquired by the high-definition camera to acquire better visual information. The inertial navigation information acquired by the inertial navigation measurement unit is converted into a coordinate system of the inertial navigation information by an IMU pre-integration method, so that repeated integration of the inertial navigation information by the system is avoided, and the overall real-time performance of the system is improved.
After the image information is distortion-corrected, the system processes the image further in two parallel threads. First, the dynamic-region feature points in the image are effectively screened out based on HSV point features, improving the pose-estimation accuracy of the visual SLAM algorithm. Second, line features in the image are extracted with the LSD (Line Segment Detector) algorithm, the line features are described as vectors with the LBD (Line Band Descriptor), and the LBD descriptor is binarized to improve the matching efficiency of line features during feature matching and the real-time performance of the whole system.
Point-feature and line-feature matches between adjacent frames are screened by the Random Sample Consensus (RANSAC) algorithm to eliminate mismatched point and line features.
The visual information and inertial navigation information acquired in real time are processed in parallel threads; the processed visual and inertial information are then fused to estimate the real-time pose information of the unmanned ship.
The closed loop detection module comprises the following specific steps:
closed-loop detection methods based on the bag of Words (BoW) basically adopt point features in images to construct a visual dictionary.
After acquiring a new image, the unmanned ship first preprocesses it: image distortion correction is applied, the distorted regions at the image edges are corrected, and the quality of the acquired image is improved. Second, key frames are selected from the images to reduce the overall computation of the system: similarity is judged from the numbers of point features and line features in the images of two adjacent frames; when the similarity of two adjacent frames exceeds a threshold, their distinguishability is low and they are not used as key frames for closed-loop detection, and when the similarity does not exceed the threshold, the currently acquired image is used as a key frame for closed-loop detection. Further, closed-loop response detection is performed with the created visual dictionary of combined point-line features: the point and line features extracted from the current image frame are matched to visual words, and the matched visual words are looked up quickly in the historical image database to retrieve a historical image similar to the current one, generating a closed-loop response for the unmanned ship's navigation trajectory map. Finally, the closed loops are screened to obtain a globally consistent trajectory map: both local and global closed loops are generated in the closed-loop response; the local closed loops are removed, and only the global closed loops are used to update the unmanned ship's navigation trajectory map.
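A minimal sketch of the key-frame test described above (the similarity measure below is an assumption; the patent only states that similarity is judged from the point- and line-feature counts against a threshold):

```python
def is_new_keyframe(prev_counts, curr_counts, similarity_threshold=0.9):
    """prev_counts / curr_counts: (num_point_features, num_line_features) per frame."""
    # Ratio of the smaller to the larger count per feature type, averaged.
    sims = [min(p, c) / max(p, c, 1) for p, c in zip(prev_counts, curr_counts)]
    similarity = sum(sims) / len(sims)
    # High similarity means low distinguishability: skip the frame for loop closure.
    return similarity <= similarity_threshold
```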
Closed-loop detection: the generated visual dictionary tree is first stored in a BoW database, with a storage scheme including a forward index and a reverse index, used to retrieve from the database the historical images most similar to the current image. Image similarity is then calculated: images are matched by computing the similarity between all the features in the two images. The image matching strategy is divided into complete matching and similarity matching. Complete matching means all features in the two images are identical. Similarity matching means the similarity over all feature quantities of the two images exceeds a set threshold. The visual SLAM closed-loop detection module mainly uses similarity matching between images, and the similarity is computed from the distance between the features of the current image obtained by the unmanned ship and the image features in the historical image library.

Claims (5)

1. An unmanned ship autonomous navigation method based on a visual inertial navigation SLAM algorithm is characterized by comprising the following steps: the system comprises a visual odometer front-end module, a system initialization module, a front-end optimization module and a closed-loop detection module;
visual odometer front-end module: adopting a visual odometer front end based on an HSV color-region segmentation algorithm to screen the pixel points of the image information acquired in real time by the shipborne high-definition camera, and selecting the corner features of fixed-reference-frame regions in the image;
a system initialization module: adding line features at the front end of the visual odometer, and extracting the line features in the image information according to a line segment detector algorithm;
a front-end optimization module: firstly, preprocessing the image information acquired by the shipborne high-definition camera and the inertial navigation information acquired by the inertial measurement unit; the image is distortion-corrected according to the intrinsic matrix and distortion coefficients from the camera calibration to obtain visual information; the preprocessed image information is processed by two parallel threads, wherein feature points of dynamic regions in the image are screened out based on HSV point features, and line features in the image information are extracted with the line segment detector algorithm; point-feature and line-feature matches between adjacent frames are screened by a random sample consensus algorithm to eliminate mismatched point and line features; the preprocessed visual and inertial information are processed in parallel threads, fused, and used to estimate the real-time pose information of the unmanned ship;
a closed-loop detection module: establishing an offline visual dictionary of combined point-line features for the water-surface environment to obtain a globally consistent map.
2. The unmanned ship autonomous navigation method based on the visual inertial navigation SLAM algorithm, according to claim 1, is characterized in that:
the visual mileage front-end module comprises the following specific working steps:
A1. The front end of the visual odometer corrects the image information acquired in real time by the shipborne high-definition camera using the calibrated camera distortion parameters and preprocesses the image; after preprocessing, the pipeline is divided into a thread that processes point features and a thread that extracts the fixed-reference-frame regions, the two threads running in parallel and simultaneously;
A2. Harris corner detection is first performed on the preprocessed image to obtain the pixel coordinates of all point features in the image information;
A3. The image information is converted from the RGB color space to the HSV color space, the converted HSV image is split into the three channels H, S, V, the value ranges corresponding to the required color regions in HSV are determined, and the parameter values Hmax, Hmin, Smax, Smin, Vmax, Vmin are set in the program to screen all pixel points in the three channels;
A4. Combined with the fixed-reference-frame regions, the detected Harris corners are further screened, and the corner features of the fixed-reference-frame regions in the image are selected.
3. The unmanned ship autonomous navigation method based on the visual inertial navigation SLAM algorithm, according to claim 1, is characterized in that: the specific working steps of the line segment detector algorithm are as follows:
B1. Image scaling: the sampling ratio along the x and y axes is set to 0.8, giving an area scaling ratio E = 0.64;
B2. Gradient computation: the gradient of each pixel is computed from the 2×2 block of pixels to its lower right, reducing the dependence between pixel points;
B3. Gradient ordering: pixels are ordered by gradient magnitude; the larger the magnitude, the more salient the pixel;
B4. Gradient-magnitude thresholding: a pixel whose gradient magnitude is below the threshold is regarded as noise and is not considered further;
B5. Line-support region growing: the gradient magnitudes and gradient-direction differences of the pixel points around isolated pixel points are computed, and pixels within the direction tolerance are merged into the line-support region;
B6. Rectangle estimation: a rectangular approximation of the line-support region is estimated;
B7. False-alarm count computation.
4. The unmanned ship autonomous navigation method based on the visual inertial navigation SLAM algorithm, according to claim 1, is characterized in that: the front-end optimization module comprises the following specific working steps:
C1. Firstly, the image information acquired by the shipborne high-definition camera and the inertial navigation information acquired by the inertial measurement unit are preprocessed; the image information is distortion-corrected according to the intrinsic matrix and distortion coefficients from the camera calibration to obtain visual information; the inertial navigation information acquired by the inertial measurement unit is accumulated by IMU pre-integration and converted into the required coordinate system;
C2. After distortion correction, the image information is processed further in two parallel threads: first, the dynamic-region feature points in the image are effectively screened out based on HSV point features; second, line features in the image information are extracted with the line segment detector algorithm, the line features are described as vectors with the LBD descriptor, and the LBD descriptor is binarized;
C3. Point-feature and line-feature matches between adjacent frames are screened by a random sample consensus algorithm to eliminate mismatched point and line features;
C4. The visual information and inertial navigation information acquired in real time are processed in parallel threads, and the processed visual and inertial information are fused to estimate the real-time pose information of the unmanned ship.
5. The unmanned ship autonomous navigation method based on the visual inertial navigation SLAM algorithm, according to claim 1, is characterized in that: the closed loop detection module comprises the following specific working steps:
D1. After acquiring new image information, the unmanned ship first preprocesses the image: image distortion correction is applied to the acquired image, and the distorted regions at the image edges are corrected; secondly, key frames are selected from the images, similarity being judged from the numbers of point features and line features in the images of two adjacent frames; when the similarity of two adjacent frames exceeds a threshold, their distinguishability is low and they are not used as key frames for closed-loop detection, and when the similarity does not exceed the threshold, the currently acquired image is used as a key frame for closed-loop detection; then, closed-loop response detection is performed with the created visual dictionary of combined point-line features: the point and line features extracted from the current image frame are matched to visual words in the dictionary, and the matched visual words are looked up quickly in the historical image database to retrieve a historical image similar to the current one, generating a closed-loop response for the unmanned ship navigation trajectory map; finally, the closed loops are screened to obtain a globally consistent trajectory map;
D2. Closed-loop detection: the generated visual dictionary tree is first stored in a BoW database, with a storage scheme including a forward index and a reverse index, used to retrieve from the database the historical images most similar to the current image; image similarity is then calculated by computing the similarity between all the features in the two images; the image matching strategy is divided into complete matching and similarity matching, complete matching meaning all features in the two images are identical and similarity matching meaning the similarity over all feature quantities of the two images exceeds a set threshold; in visual SLAM closed-loop detection, similarity matching between images is adopted, and the similarity between images is computed from the distance between the features of the current image obtained by the unmanned ship and the image features in the historical image library.
CN202110340449.2A 2021-03-30 2021-03-30 Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm Pending CN113108780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110340449.2A CN113108780A (en) 2021-03-30 2021-03-30 Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110340449.2A CN113108780A (en) 2021-03-30 2021-03-30 Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm

Publications (1)

Publication Number Publication Date
CN113108780A true CN113108780A (en) 2021-07-13

Family

ID=76712761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110340449.2A Pending CN113108780A (en) 2021-03-30 2021-03-30 Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm

Country Status (1)

Country Link
CN (1) CN113108780A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101701828A (en) * 2009-11-23 2010-05-05 常州达奇信息科技有限公司 Blind autonomous navigation method based on stereoscopic vision and information fusion
CN102789233A (en) * 2012-06-12 2012-11-21 湖北三江航天红峰控制有限公司 Vision-based combined navigation robot and navigation method
US20140316698A1 (en) * 2013-02-21 2014-10-23 Regents Of The University Of Minnesota Observability-constrained vision-aided inertial navigation
CN105574521A (en) * 2016-02-25 2016-05-11 民政部国家减灾中心 House contour extraction method and apparatus thereof
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN108846867A (en) * 2018-08-29 2018-11-20 安徽云能天智能科技有限责任公司 A kind of SLAM system based on more mesh panorama inertial navigations
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109579863A (en) * 2018-12-13 2019-04-05 北京航空航天大学 Unknown topographical navigation system and method based on image procossing
CN111210477A (en) * 2019-12-26 2020-05-29 深圳大学 Method and system for positioning moving target
CN111561923A (en) * 2020-05-19 2020-08-21 北京数字绿土科技有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN112396595A (en) * 2020-11-27 2021-02-23 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on point-line characteristics in dynamic environment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101701828A (en) * 2009-11-23 2010-05-05 常州达奇信息科技有限公司 Blind autonomous navigation method based on stereoscopic vision and information fusion
CN102789233A (en) * 2012-06-12 2012-11-21 湖北三江航天红峰控制有限公司 Vision-based combined navigation robot and navigation method
US20140316698A1 (en) * 2013-02-21 2014-10-23 Regents Of The University Of Minnesota Observability-constrained vision-aided inertial navigation
CN105574521A (en) * 2016-02-25 2016-05-11 民政部国家减灾中心 House contour extraction method and apparatus thereof
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN108846867A (en) * 2018-08-29 2018-11-20 安徽云能天智能科技有限责任公司 A kind of SLAM system based on more mesh panorama inertial navigations
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109579863A (en) * 2018-12-13 2019-04-05 北京航空航天大学 Unknown topographical navigation system and method based on image procossing
CN111210477A (en) * 2019-12-26 2020-05-29 深圳大学 Method and system for positioning moving target
CN111561923A (en) * 2020-05-19 2020-08-21 北京数字绿土科技有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN112396595A (en) * 2020-11-27 2021-02-23 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on point-line characteristics in dynamic environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姚康博 (Yao Kangbo): "基于视觉SLAM的无人船自主导航研究" [Research on autonomous navigation of unmanned ships based on visual SLAM], no. 1, pages 17 *

Similar Documents

Publication Publication Date Title
Shao et al. Saliency-aware convolution neural network for ship detection in surveillance video
CN111968128B (en) Unmanned aerial vehicle visual attitude and position resolving method based on image markers
CN100538723C (en) The inner river ship automatic identification system that multiple vision sensor information merges
CN113627473B (en) Multi-mode sensor-based water surface unmanned ship environment information fusion sensing method
US20220024549A1 (en) System and method for measuring the distance to an object in water
CN104535066A (en) Marine target and electronic chart superposition method and system in on-board infrared video image
Zhang et al. Research on unmanned surface vehicles environment perception based on the fusion of vision and lidar
CN110766721B (en) Carrier landing cooperative target detection method based on airborne vision
US20170316573A1 (en) Position measuring equipment
CN114415168A (en) Unmanned surface vessel track fusion method and device
Zhang et al. A object detection and tracking method for security in intelligence of unmanned surface vehicles
CN115546741A (en) Binocular vision and laser radar unmanned ship marine environment obstacle identification method
CN114581675A (en) Marine ship detection method based on machine vision and multi-source data fusion
CN113486819A (en) Ship target detection method based on YOLOv4 algorithm
CN116994135A (en) Ship target detection method based on vision and radar fusion
Li et al. Vision-based target detection and positioning approach for underwater robots
Song et al. Research on unmanned vessel surface object detection based on fusion of ssd and faster-rcnn
CN113933828A (en) Unmanned ship environment self-adaptive multi-scale target detection method and system
CN110120073A (en) A method of based on the guidance unmanned boat recycling of beacon light visual signal
Petković et al. An overview on horizon detection methods in maritime video surveillance
CN113837924A (en) Water bank line detection method based on unmanned ship sensing system
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
CN117456346A (en) Underwater synthetic aperture sonar image target detection method and system
CN113108780A (en) Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm
CN115346133A (en) Ship detection method and system based on optical satellite image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination