CN117572459A - Unmanned aerial vehicle capable of automatically switching navigation system - Google Patents

Unmanned aerial vehicle capable of automatically switching navigation system

Info

Publication number
CN117572459A
CN117572459A (application CN202210930365.9A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
pose
laser
information
Prior art date
Legal status
Pending
Application number
CN202210930365.9A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Beijing Hydrogen Source Intelligent Technology Co ltd
Original Assignee
Beijing Hydrogen Source Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Hydrogen Source Intelligent Technology Co ltd filed Critical Beijing Hydrogen Source Intelligent Technology Co ltd
Priority to CN202210930365.9A
Publication of CN117572459A
Current legal status: Pending


Classifications

    • G01S 17/933 — Lidar systems specially adapted for anti-collision purposes of aircraft or spacecraft
    • B64C 27/08 — Helicopters with two or more rotors
    • B64D 47/00 — Equipment (for fitting in or to aircraft) not otherwise provided for
    • G01C 21/1652 — Dead reckoning (inertial navigation by integrating acceleration or speed) combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C 22/00 — Measuring distance traversed on the ground by vehicles or other moving solid bodies, e.g. using odometers
    • G01S 17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 19/43 — Determining position using carrier phase measurements, e.g. kinematic positioning, or using long or short baseline interferometry
    • G01S 19/46 — Determining position by combining satellite positioning measurements with a supplementary radio-wave measurement
    • G01S 19/47 — Determining position by combining satellite positioning measurements with a supplementary inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an unmanned aerial vehicle capable of automatically switching navigation systems. The unmanned aerial vehicle comprises an unmanned aerial vehicle body carrying the navigation modules, a laser SLAM system for navigating the unmanned aerial vehicle, a visual SLAM system for assisting navigation, and a controller. When the controller detects that the error between the pose obtained by the laser-inertial odometer in the laser SLAM system and a reference true value is greater than a preset switching threshold, navigation is switched from the laser SLAM system to the visual SLAM system. The unmanned aerial vehicle can therefore still make motion decisions in a complex environment when it encounters severe weather such as heavy rain, dense smoke or dense fog, which improves its accuracy, stability and safety while keeping it convenient to use.

Description

Unmanned aerial vehicle capable of automatically switching navigation system
Technical Field
The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle capable of automatically switching navigation systems.
Background
Because laser radar offers high precision, a wide field of view and a long measuring range, some existing unmanned aerial vehicles currently navigate using laser radar SLAM. However, the applicant found in research that, while laser attenuation is small and the propagation distance long in clear weather, attenuation increases rapidly in bad weather such as heavy rain, dense smoke or dense fog, and the propagation distance of the laser is greatly reduced. Consequently, when an unmanned aerial vehicle that navigates by laser radar encounters heavy rain, dense smoke or dense fog, the data acquired by the laser radar become inaccurate, the unmanned aerial vehicle cannot determine its own position and attitude, and it therefore struggles to understand a complex environment and make motion decisions.
Therefore, how to make a motion decision in a complex environment when an unmanned aerial vehicle encounters severe weather such as heavy rain, dense smoke or dense fog is a technical problem which needs to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides an unmanned aerial vehicle capable of automatically switching navigation systems; by switching between different navigation systems, the unmanned aerial vehicle can still make motion decisions in a complex environment when it encounters severe weather such as heavy rain, dense smoke or dense fog.
Based on the above, the embodiment of the invention provides an unmanned aerial vehicle with an automatically switching navigation system, which comprises an unmanned aerial vehicle body, a laser SLAM system for navigating the unmanned aerial vehicle, a visual SLAM system for assisting navigation, and a controller. When the controller detects that the error between the pose obtained by the laser-inertial odometer in the laser SLAM system and a reference true value is greater than a preset switching threshold, navigation is switched from the laser SLAM system to the visual SLAM system.
In a possible implementation, the laser SLAM system comprises a laser radar unit and an IMU inertial unit, and uses the laser radar together with the inertial odometer in the IMU unit to calibrate the odometer pose information in real time.
In a possible implementation, the laser SLAM system calculates the pose of the unmanned aerial vehicle with a scan matching algorithm on the laser radar point cloud data and uses that pose as the reference true value for the inertial-odometer pose. When the error between the pose obtained by the inertial odometer and the reference true value is greater than a first set threshold, a calibration operation is performed and the calibrated pose replaces the original inertial-odometer pose; otherwise no calibration is performed.
In the above possible implementation, calculating the pose of the unmanned aerial vehicle with the scan matching algorithm on the laser radar data and using it as the reference true value for the inertial-odometer pose includes: continuously acquiring the environment scans provided by the laser radar, matching every two consecutive frames of point cloud data, calculating the displacement of the unmanned aerial vehicle between the two consecutive laser frames, and obtaining the real-time pose of the unmanned aerial vehicle by accumulating the displacements between all laser frames on top of the initial pose of the unmanned aerial vehicle; this real-time pose serves as the reference true value for the inertial-odometer pose.
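As an illustration of the frame-to-frame accumulation just described, the following minimal sketch (not the patented implementation) chains the relative SE(3) transforms produced by any scan-matching routine, such as ICP, onto the initial pose; the function and variable names are illustrative assumptions:

```python
import numpy as np

def accumulate_lidar_pose(initial_pose: np.ndarray, relative_transforms) -> np.ndarray:
    """Chain frame-to-frame displacements (4x4 SE(3) matrices from a scan matcher)
    onto the initial pose to obtain the real-time pose that serves as the
    reference true value for the inertial odometer."""
    pose = initial_pose.copy()
    for T_rel in relative_transforms:   # one transform per pair of consecutive laser frames
        pose = pose @ T_rel
    return pose

# Usage: identity start pose, two 0.1 m forward displacements -> 0.2 m along x
start = np.eye(4)
step = np.eye(4)
step[0, 3] = 0.1
print(accumulate_lidar_pose(start, [step, step])[:3, 3])
```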
In a possible implementation, the visual SLAM system includes a binocular camera unit located at the upper front of the unmanned aerial vehicle body, and reconstructs the three-dimensional positions of feature points using a multi-keyframe matched-point triangulation method for acquiring feature-point depth.
In the above possible implementation, the multi-keyframe matched-point triangulation method for acquiring feature-point depth includes:
representing the three-dimensional position of an environmental feature point with a six-dimensional vector, which comprises the position of the binocular camera unit of the unmanned aerial vehicle together with the direction and depth of the environmental feature point relative to the unmanned aerial vehicle;
continuously updating as the environment is observed; when the covariance of the feature estimate is smaller than a second set threshold, converting the six-dimensional representation of the environmental feature point into three-dimensional Euclidean coordinates; and, for the several feature points established in the same image frame, reducing their representation to a single binocular-camera pose plus a set of depths, which effectively shortens the state vector of the unmanned aerial vehicle system.
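One common concrete reading of such a six-dimensional representation is an inverse-depth-style parametrization (anchor camera position, two bearing angles, inverse depth); the sketch below shows the conversion to Euclidean coordinates once the estimate has converged. The parametrization, names and threshold value are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def six_d_to_euclidean(anchor_pos, azimuth, elevation, inverse_depth):
    """Convert a 6-D feature (3-D anchor camera position, bearing angles, inverse depth)
    into a 3-D Euclidean point."""
    bearing = np.array([np.cos(elevation) * np.cos(azimuth),
                        np.cos(elevation) * np.sin(azimuth),
                        np.sin(elevation)])
    return np.asarray(anchor_pos, float) + bearing / inverse_depth

def maybe_convert(feature_6d, feature_cov, cov_threshold=1e-3):
    """Switch to the compact Euclidean form only when the feature estimate is
    confident enough (the 'second set threshold' in the text)."""
    anchor, az, el, rho = feature_6d
    if np.trace(np.asarray(feature_cov)) < cov_threshold:
        return six_d_to_euclidean(anchor, az, el, rho)
    return feature_6d
```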
In the above possible implementation manner, the laser radar unit and the IMU inertial unit are integrated into a frame, the laser radar unit is located in front of the unmanned aerial vehicle body, and the binocular camera unit is located above the laser radar.
In a possible implementation, the laser SLAM system further comprises a laser-system map updating module and a global map fusion module; the laser-system map updating module updates the map information in the laser SLAM system according to the pose information of the unmanned aerial vehicle as it is updated in real time, and the global map fusion module fuses the local map information of all local laser SLAM maps to generate a global three-dimensional map.
In a possible implementation, the visual SLAM system further includes a visual-system map updating module, which updates the map information in the visual SLAM system according to the pose information of the unmanned aerial vehicle as it is updated in real time.
In a possible implementation, the unmanned aerial vehicle further comprises a multi-sensor fusion module, which fuses the pose and motion information of the unmanned aerial vehicle obtained by the laser SLAM system, the pose and motion information obtained by the visual SLAM system and the updated global three-dimensional map information to obtain optimized pose and motion information of the unmanned aerial vehicle; when external GPS and/or RTK signals are available, the GPS and/or RTK information is fused as well, yielding pose and motion information of the unmanned aerial vehicle in the earth coordinate system.
In summary, the embodiment of the invention provides an unmanned aerial vehicle capable of automatically switching navigation systems. By combining a visual SLAM system with a three-dimensional laser SLAM system, the vehicle can seamlessly switch to the other system when the navigation and positioning of one system drifts, which improves the environmental adaptability and stability of the unmanned aerial vehicle; at the same time, the laser SLAM system calibrates the inertial odometer in real time, making the positioning and navigation more accurate and allowing the unmanned aerial vehicle to be used in a wider range of scenarios.
The beneficial effects of the invention are as follows:
1. When the unmanned aerial vehicle encounters severe weather such as heavy rain, dense smoke or dense fog, it can still make motion decisions in a complex environment; its accuracy and stability are improved, it is convenient to use, and its safety is improved.
2. Switching between the two systems prevents the unmanned aerial vehicle from ceasing operation entirely when one of the systems fails.
3. The unmanned aerial vehicle can fly indoors or in any environment without satellite GPS reception; it is safe and reliable, is not easily disrupted by wireless-signal interference, and can fly autonomously without a communication link.
4. The unmanned aerial vehicle can hover and fly autonomously with high reliability, can quickly plan routes on its own, and can fly autonomously to a destination.
5. The laser radar of the system is an actively illuminating sensor, so the unmanned aerial vehicle can fly in a completely dark environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle with an automatically switching navigation system according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of an unmanned aerial vehicle with an automatically switching navigation system according to an embodiment of the present invention.
Fig. 3 is a flowchart of the switching applied by the unmanned aerial vehicle with an automatically switching navigation system according to an embodiment of the present invention.
Fig. 4 is a general flowchart of a method for positioning and navigating an unmanned aerial vehicle with vision-fused laser radar SLAM according to an embodiment of the present invention.
FIG. 5 is a flow chart illustrating the operation of the three-dimensional laser SLAM system according to an embodiment of the present invention.
FIG. 6 is a flow chart illustrating operation of the visual SLAM system according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of an unmanned aerial vehicle system module based on vision fusion three-dimensional laser radar SLAM positioning navigation in an embodiment of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The terms first, second, third, etc., or module A, module B, module C, etc., in the description and in the claims are used solely to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order; where permitted, particular orders may be interchanged so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
In the following description, reference numerals indicating steps, such as S110, S120, ..., do not necessarily mean that the steps are performed in that order; where permitted, the order of the steps may be interchanged or steps may be performed simultaneously. The term "comprising" used in the description and claims should not be interpreted as being limited to what is listed thereafter; it does not exclude other elements or steps. It should therefore be interpreted as specifying the presence of the stated features, integers, steps or components, without precluding the presence or addition of one or more other features, integers, steps, components or groups thereof. Thus, the expression "a device comprising means A and B" should not be limited to a device consisting only of components A and B. Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the application. The appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification therefore do not necessarily all refer to the same embodiment, although they may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments, as will be apparent to one of ordinary skill in the art from this disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Referring to fig. 1, fig. 2 and fig. 3, an unmanned aerial vehicle with an automatically switching navigation system according to an embodiment of the present invention includes: an unmanned aerial vehicle body (not numbered), a laser SLAM system 100 for navigating the unmanned aerial vehicle, a visual SLAM system 200 for assisting navigation, and a controller 600. When the controller 600 detects that the error between the pose obtained by the laser-inertial odometer in the laser SLAM system 100 and the reference true value is greater than the preset switching threshold, navigation is switched from the laser SLAM system 100 to the visual SLAM system 200.
In the above possible implementation, the laser SLAM system 100 includes a laser radar unit and an IMU inertial unit, and uses the laser radar together with the inertial odometer in the IMU inertial unit to calibrate the odometer pose information in real time.
In the above possible implementation, the laser SLAM system 100 calculates the pose of the unmanned aerial vehicle with a scan matching algorithm on the laser radar point cloud data and uses that pose as the reference true value for the inertial-odometer pose. When the error between the pose obtained by the inertial odometer and the reference true value is greater than a first set threshold, a calibration operation is performed and the calibrated pose replaces the original inertial-odometer pose; otherwise no calibration is performed.
In the above possible implementation, calculating the pose of the unmanned aerial vehicle with the scan matching algorithm on the laser radar data and using it as the reference true value for the inertial-odometer pose includes: continuously acquiring the environment scans provided by the laser radar, matching every two consecutive frames of point cloud data, calculating the displacement of the unmanned aerial vehicle between the two consecutive laser frames, and obtaining the real-time pose of the unmanned aerial vehicle by accumulating the displacements between all laser frames on top of the initial pose of the unmanned aerial vehicle; this real-time pose serves as the reference true value for the inertial-odometer pose.
In the above possible implementation, the visual SLAM system 200 includes a binocular camera unit located at the upper front of the unmanned aerial vehicle body, and the visual SLAM system 200 reconstructs the three-dimensional positions of feature points using a multi-keyframe matched-point triangulation method for acquiring feature-point depth.
In the above possible implementation, the multi-keyframe matched-point triangulation method for acquiring feature-point depth includes:
representing the three-dimensional position of an environmental feature point with a six-dimensional vector, which comprises the position of the binocular camera unit of the unmanned aerial vehicle together with the direction and depth of the environmental feature point relative to the unmanned aerial vehicle;
continuously updating as the environment is observed; when the covariance of the feature estimate is smaller than a second set threshold, converting the six-dimensional representation of the environmental feature point into three-dimensional Euclidean coordinates; and, for the several feature points established in the same image frame, reducing their representation to a single binocular-camera pose plus a set of depths, which effectively shortens the state vector of the unmanned aerial vehicle system.
In the above possible implementation manner, the laser radar unit and the IMU inertial unit are integrated into a frame, the laser radar unit is located in front of the unmanned aerial vehicle body, and the binocular camera unit is located above the laser radar.
In the above possible implementation, the laser SLAM system 100 further includes a laser-system map updating module and a global map fusion module, wherein the laser-system map updating module updates the map information in the laser SLAM system 100 according to the pose information of the unmanned aerial vehicle as it is updated in real time, and the global map fusion module fuses the local map information of all local laser SLAM maps to generate a global three-dimensional map.
In the above possible implementation, the visual SLAM system 200 further includes a visual-system map updating module, which updates the map information in the visual SLAM system 200 according to the pose information of the unmanned aerial vehicle as it is updated in real time.
In a possible implementation, the system further comprises a multi-sensor fusion module, which fuses the pose and motion information of the unmanned aerial vehicle obtained by the laser SLAM system, the pose and motion information obtained by the visual SLAM system and the updated global three-dimensional map information to obtain optimized pose and motion information of the unmanned aerial vehicle; when external GPS and/or RTK signals are available, the GPS and/or RTK information is fused as well, yielding pose and motion information of the unmanned aerial vehicle in the earth coordinate system.
In the embodiment of the invention, when the unmanned aerial vehicle with the automatically switching navigation system starts to fly, or flies in a normal environment, the three-dimensional laser SLAM system is used for navigation; when the controller detects that the error between the pose obtained by the odometer in the three-dimensional laser SLAM system and the reference true value is greater than the preset switching threshold, the navigation is switched to the visual SLAM system, which guides the unmanned aerial vehicle to continue navigating and flying.
Further details of examples for implementing the present invention are set forth below.
In one aspect, referring to fig. 4-6, an embodiment of the present invention provides a vision-fused laser radar SLAM positioning and navigation method for positioning and navigating an unmanned aerial vehicle, comprising the following steps:
(1) Acquiring first fusion pose and motion information generated by a three-dimensional laser SLAM system;
(2) Acquiring second fusion pose and motion information generated by the visual SLAM system;
(3) Comparing the first fusion pose and the motion information with the second fusion pose and the motion information to obtain a third comparison error, and comparing the third comparison error with a system calibration error;
(4) When the third comparison error is greater than the system calibration error, switching to the visual SLAM system for navigation of the unmanned aerial vehicle.
In the above embodiment, further comprising:
(5) In step (4), further: when the third comparison error is greater than the system calibration error, switching to the visual SLAM system to acquire the third fusion pose and motion information for positioning and navigation of the unmanned aerial vehicle; or, when the third comparison error is less than or equal to the system calibration error, acquiring the second fusion pose and motion information and continuing to use the laser SLAM system for positioning and navigation of the unmanned aerial vehicle;
(6) Acquiring map updating information of the three-dimensional laser SLAM system and/or acquiring map updating information of the visual SLAM system;
(7) Acquiring the position information designated by the user, generating an optimal route according to the user-designated position information, the map update information, and the third fusion pose and motion information or the second fusion pose and motion information, performing real-time positioning and autonomous navigation of the unmanned aerial vehicle, displaying the position of the unmanned aerial vehicle, and controlling the flight of the unmanned aerial vehicle;
In the embodiment of the invention, the system calibration error is a system preset value.
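The comparison-and-switch logic of steps (1)-(4) can be illustrated with the following minimal sketch; the error metric and the function names are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def pose_error(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """A simple error metric between two 4x4 poses: translation distance only."""
    return float(np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3]))

def select_navigation_source(first_fused_pose, second_fused_pose, system_calibration_error):
    """Steps (1)-(4): compare the two fused poses and decide which system navigates."""
    third_comparison_error = pose_error(first_fused_pose, second_fused_pose)  # step (3)
    if third_comparison_error > system_calibration_error:                     # step (4)
        return "visual_slam"
    return "laser_slam"

# Usage with made-up numbers: 0.8 m of disagreement against a 0.5 m calibration error
laser_pose, visual_pose = np.eye(4), np.eye(4)
visual_pose[0, 3] = 0.8
print(select_navigation_source(laser_pose, visual_pose, system_calibration_error=0.5))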
Further, in the embodiment of the present invention, acquiring the first fusion pose and motion information generated by the three-dimensional laser SLAM system in step (1) further includes the following steps (a skeletal per-frame sketch follows the list):
(1-1) acquiring first IMU pose and motion information of a current frame acquired by a first IMU unit, and acquiring point cloud data information of the current frame acquired by a three-dimensional laser radar;
(1-2) filtering and preprocessing the acquired point cloud data;
(1-3) resolving the filtered point cloud data to obtain the pose and motion information of the laser radar of the current frame and a local map of the surrounding environment of the current frame in the laser system, and updating the map in the laser system;
(1-4) fusing all the local maps to generate a global map;
(1-5) combining the pose and the motion information of the laser radar of the previous frame with the pose and the motion information of the laser radar of the current frame to generate a first motion track;
(1-6) combining the first IMU pose and motion information of the previous frame with the first IMU pose and motion information of the current frame to generate a second motion trail;
(1-7) comparing the first motion track with the second motion track to obtain a first comparison error, and comparing the first comparison error with a laser SLAM calibration error, wherein the laser SLAM calibration error is a system preset value;
(1-8) fusing pose and motion information of the first IMU and the laser radar according to a comparison result of the first comparison error and the laser SLAM calibration error, and combining a current updated global map model to obtain first fused pose and motion information of a current frame;
(1-9) outputting the first fusion pose and motion information and global map information.
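A skeletal per-frame driver for steps (1-1) through (1-9) could look as follows. Everything here is a simplified reading of the text: the scan_match argument stands in for any point cloud registration routine, the NaN filter stands in for the real filtering module, and the final fusion is a placeholder rather than the actual fusion performed by the system:

```python
import numpy as np

def laser_slam_frame(imu_pose, cloud, state, scan_match, calib_error=0.05):
    """One iteration over steps (1-1)-(1-9); `state` carries the previous frame's data."""
    cloud = cloud[~np.isnan(cloud).any(axis=1)]                            # (1-2) filter / preprocess
    lidar_pose = state["lidar_pose"] @ scan_match(state["cloud"], cloud)   # (1-3) solve current pose
    state["global_map"].append(cloud)                                      # (1-3)/(1-4) update local map, fuse global map
    lidar_step = np.linalg.norm(lidar_pose[:3, 3] - state["lidar_pose"][:3, 3])  # (1-5) lidar trajectory
    imu_step = np.linalg.norm(imu_pose[:3, 3] - state["imu_pose"][:3, 3])        # (1-6) IMU trajectory
    if abs(lidar_step - imu_step) > calib_error:                           # (1-7) compare against calibration error
        imu_pose = lidar_pose                                              # (1-6-1) replace drifting IMU pose
    fused_pose = lidar_pose                                                # (1-8) placeholder for the IMU/lidar fusion
    state.update(cloud=cloud, lidar_pose=lidar_pose, imu_pose=imu_pose)
    return fused_pose, state                                               # (1-9) output pose and map
```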
Further, in the embodiment of the present invention, the map update information of the three-dimensional laser SLAM system obtained in step (6) is based on the local map information of the surrounding environment of the current frame in the laser system, and the map in the laser SLAM system is updated according to this local map information.
Further, the embodiment of the invention also includes, in step (1-8): fusing all the local map information to generate a global map.
Further, in step (1-7) of the embodiment of the present invention, obtaining the first fusion pose and motion information of the current frame includes outputting the first fusion pose and motion information in combination with the currently updated global map information.
Further, in the embodiment of the present invention, the filtering and preprocessing of the collected point cloud data in step (1-2) further includes the following steps (a small downsampling and outlier-removal sketch follows the list):
(1-2-1) filtering the acquired point cloud data;
(1-2-2) preprocessing the filtered point cloud data;
(1-2-3) updating the temporary local map according to the preprocessed point cloud data.
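The patent does not name specific filters; as one common choice, the sketch below applies a voxel-grid downsample and a crude outlier check in plain NumPy (all parameter values are illustrative):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float = 0.2) -> np.ndarray:
    """Keep one representative point per voxel to reduce redundant point cloud data."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def remove_outliers(points: np.ndarray, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose distance from the cloud centroid is abnormally large
    (a rough stand-in for proper statistical outlier removal)."""
    dist = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[dist < dist.mean() + std_ratio * dist.std()]

def preprocess(points: np.ndarray) -> np.ndarray:
    """Steps (1-2-1)/(1-2-2): filter the raw cloud, then hand it to the temporary local map."""
    return remove_outliers(voxel_downsample(points))
```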
Step (1-6), comparing the first comparison error with the laser SLAM calibration error, further comprises the following steps (a short sketch of this decision follows the list):
(1-6-1), if the first comparison error is greater than the laser SLAM calibration error, replacing the first IMU pose and motion information of the current frame with the laser radar pose and motion information of the current frame;
and (1-6-2), if the first comparison error is smaller than or equal to the laser SLAM calibration error, storing the first IMU pose and motion information of the current frame.
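In code form, the decision in (1-6-1) and (1-6-2) reduces to a guarded replacement; the sketch below is an illustrative reading with hypothetical names:

```python
def calibrate_imu_pose(imu_pose, lidar_pose, first_comparison_error, laser_slam_calib_error):
    """(1-6-1): if the two trajectories disagree too much, trust the lidar pose and
    overwrite the IMU pose; (1-6-2): otherwise keep and store the current IMU pose."""
    if first_comparison_error > laser_slam_calib_error:
        return lidar_pose      # replace the drifting IMU estimate with the lidar estimate
    return imu_pose            # store the current IMU pose unchanged
```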
The operation of the visual SLAM system in step (2) to generate the second fusion pose and motion information further comprises the following steps (a stereo triangulation sketch follows the list):
(2-1) acquiring second IMU pose and motion information of a current frame acquired by a second IMU unit, and acquiring image pixel information of the current frame acquired by a camera; in this embodiment, a binocular camera is provided.
(2-2) performing front-end feature matching tracking on the image pixel information of the current frame;
(2-3) triangulating the characteristic points, calculating to obtain the pose and motion information of the camera of the current frame and the local map of the surrounding environment of the current frame in the vision system, and updating the map in the vision system;
(2-4) combining the pose and the motion information of the camera of the previous frame with the pose and the motion information of the camera of the current frame to generate a third motion track;
(2-5) combining the pose and the motion information of the second IMU of the previous frame with the pose and the motion information of the second IMU of the current frame to generate a fourth motion trail;
(2-6) comparing the third motion track with the fourth motion track to obtain a second comparison error, and comparing the second comparison error with a visual SLAM calibration error, wherein the visual SLAM calibration error is a system preset value;
and (2-7) fusing pose and motion information of the second IMU and the camera according to a comparison result of the second comparison error and the visual SLAM calibration error, and combining the vision system map information to obtain second fused pose and motion information of the current frame in the vision system map.
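For the triangulation in step (2-3), a rectified binocular pair allows a particularly simple depth recovery from disparity; the focal length, principal point and baseline below are illustrative values, not parameters from the patent:

```python
import numpy as np

def stereo_triangulate(uv_left, uv_right, fx=400.0, cx=320.0, cy=240.0, baseline=0.12):
    """Recover 3-D points in the left-camera frame from matched pixel coordinates of a
    rectified binocular pair, using depth z = fx * baseline / disparity."""
    uv_left = np.asarray(uv_left, float)
    uv_right = np.asarray(uv_right, float)
    disparity = uv_left[:, 0] - uv_right[:, 0]            # horizontal pixel offset
    z = fx * baseline / np.clip(disparity, 1e-6, None)    # guard against zero disparity
    x = (uv_left[:, 0] - cx) * z / fx
    y = (uv_left[:, 1] - cy) * z / fx                     # assumes fy == fx for brevity
    return np.stack([x, y, z], axis=1)

# One matched feature with 20 px of disparity -> about 2.4 m of depth with these numbers
print(stereo_triangulate([[340.0, 240.0]], [[320.0, 240.0]]))
```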
Further, in the embodiment of the present invention, the comparing the second comparison error with the visual SLAM calibration error in the step (2-6) further includes the following steps:
(2-6-1), if the second comparison error is greater than the visual SLAM calibration error, replacing the second IMU pose and motion information of the current frame with the camera pose and motion information of the current frame;
(2-6-2) if the second comparison error is less than or equal to the visual SLAM calibration error, storing second IMU pose and motion information of the current frame;
Further, in the embodiment of the present invention, the comparing the third comparison error with the system calibration error in the step (3) further includes the following steps:
(3-1) if the third comparison error is greater than the system calibration error, entering a step (4), switching to a vision SLAM system, and combining a GPS (global positioning system) receiving signal to obtain second fusion pose and motion information with an earth coordinate system;
and (3-2) if the third comparison error is smaller than or equal to the system calibration error, calculating the use weights of the three-dimensional laser SLAM system and the visual SLAM system according to a weight algorithm, and obtaining third fusion pose and motion information with an earth coordinate system by combining GPS receiving signals when external GPS signals exist.
Step (3-1), where the third comparison error is greater than the system calibration error, further comprises the following steps (a sketch of this decision follows the list):
(3-1-1) comparing the third comparison error with the system switching error;
(3-1-2), if the third comparison error is smaller than or equal to the system switching error, replacing the second fusion pose and the motion information with the first fusion pose and the motion information, and storing the first fusion pose and the motion information into a visual SLAM system;
and (3-1-3), if the third comparison error is larger than the system switching error, switching to the vision SLAM system in the step (4), and combining GPS receiving signals to obtain second fusion pose and motion information with an earth coordinate system when external GPS signals exist.
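Putting the calibration-error and switching-error thresholds of (3-1-1) through (3-1-3) together, and assuming the switching error is the larger of the two thresholds, the decision can be sketched as follows (an interpretation with hypothetical names, not the patented logic itself):

```python
def handle_disagreement(first_fused, second_fused, third_error, calib_error, switch_error):
    """third_error is the comparison error between the laser (first) and visual (second) fusions."""
    if third_error <= calib_error:
        return "fuse_both", (first_fused, second_fused)    # (3-2): weight and fuse both systems
    if third_error <= switch_error:                        # (3-1-2): re-seed the visual SLAM system
        second_fused = first_fused                         # replace its fusion with the laser fusion
        return "fuse_both", (first_fused, second_fused)
    return "switch_to_visual", (second_fused,)             # (3-1-3)/(4): navigate on visual SLAM only
```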
Referring to fig. 7, the embodiment of the invention further provides an unmanned aerial vehicle system based on vision-fused laser radar SLAM positioning and navigation, comprising a three-dimensional laser SLAM system 100, a visual SLAM system 200, a flight control unit 300 and a system processing unit 400. The three-dimensional laser SLAM system 100 is used for positioning and navigating the unmanned aerial vehicle and provides the first fusion pose and motion information; the visual SLAM system 200 is used to optimize the three-dimensional laser SLAM system and to navigate the unmanned aerial vehicle when the three-dimensional laser SLAM system is not usable, and provides the second fusion pose and motion information. The flight control unit 300 controls the unmanned aerial vehicle, and the system processing unit 400 processes the data of each system: it compares the first fusion pose and motion information with the second fusion pose and motion information to obtain a third comparison error and compares this error with the preset system calibration error; when the third comparison error is greater than the preset system calibration error, the current navigation system is switched to the visual SLAM system for positioning and navigation of the unmanned aerial vehicle. The system processing unit 400 includes a system calibration module 410, a system switching module 420 and a system fusion module 430, wherein:
The system calibration module 410 is configured to correct an error of the visual SLAM system 200 in real time, and compare the first fusion pose and motion information with the second fusion pose and motion information to obtain a third comparison error, and compare the third comparison error with a preset system calibration error;
the system switching module 420 is used for switching between the vision SLAM system 200 and the three-dimensional laser SLAM system 100;
the system fusion module 430 is configured to fuse pose and motion information of the visual SLAM system 200 and the three-dimensional laser SLAM system 100;
when the third comparison error is larger than the system calibration error, comparing the third comparison error with a preset system switching error; if the third comparison error is smaller than or equal to the system calibration error, outputting the first fusion pose and motion information and the second fusion pose and motion information to a system fusion module;
if the third comparison error is smaller than or equal to the system switching error, the system calibration module replaces the second fusion pose and the motion information with the first fusion pose and the motion information, and outputs the first fusion pose and the motion information and the second fusion pose and the motion information to the system fusion module;
If the third comparison error is larger than the system switching error, outputting the first fusion pose and the motion information and the second fusion pose and the motion information to a system switching module, wherein the system switching module only uses the second fusion pose and the motion information output by the visual SLAM system to position the navigation unmanned aerial vehicle;
and the system fusion module calculates the usage weights of the three-dimensional laser SLAM system and the visual SLAM system according to the result sent by the system calibration module, and fuses them to obtain the fused pose and motion information.
Further, in the embodiment of the present invention, the system further includes a multi-sensor fusion module 440, which fuses the fused pose and motion information sent by the system fusion module with the updated global three-dimensional map information and, when external GPS and/or RTK signals are available, also fuses the GPS and/or RTK information, to obtain the fused pose and motion information in the earth coordinate system.
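A toy version of such a weighted fusion step is sketched below; the weights, the position-only state and the optional GPS term are all assumptions made for illustration:

```python
import numpy as np

def fuse_positions(laser_pos, visual_pos, w_laser=0.7, w_visual=0.3, gps_pos=None, w_gps=0.5):
    """Blend the two SLAM position estimates with usage weights; if a GPS/RTK fix is
    available, pull the blended estimate toward it to anchor it in the earth frame."""
    fused = w_laser * np.asarray(laser_pos, float) + w_visual * np.asarray(visual_pos, float)
    if gps_pos is not None:
        fused = (1.0 - w_gps) * fused + w_gps * np.asarray(gps_pos, float)
    return fused

print(fuse_positions([1.0, 0.0, 2.0], [1.2, 0.1, 2.0]))   # -> [1.06 0.03 2.  ]
```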
Further, in the embodiment of the present invention, the three-dimensional laser SLAM system 100 further includes a laser information acquisition unit 110 and a laser information processing unit 120, where the laser information acquisition unit 110 is configured to acquire information for the three-dimensional laser SLAM system 100, and the laser information processing unit 120 is configured to process the information acquired by the laser information acquisition unit.
The laser information acquisition unit 110 further comprises a first IMU unit 111 and at least one lidar 112, wherein,
the first IMU unit 111 is configured to provide predicted pose and motion information for the three-dimensional laser SLAM system 100;
the lidar 112 is used to collect point cloud data of the surrounding environment.
The lidar 112 is a multi-line three-dimensional laser scanning radar.
Providing the predicted pose and motion information to the three-dimensional laser SLAM system 100 further includes the following step: the first IMU unit 111 first acquires mileage information, converts it into pose change information of the unmanned aerial vehicle through a kinematic model of the unmanned aerial vehicle inertial odometer, and sends the pose change information to a Bayesian filter, which preliminarily calculates the predicted pose and motion information.
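As a minimal illustration of such a prediction step, the sketch below propagates a position-only state with a Kalman-style predict; the constant-velocity model, noise value and state layout are illustrative assumptions rather than the filter actually used:

```python
import numpy as np

def predict(state, cov, velocity, dt, process_noise=1e-3):
    """Bayesian (Kalman-style) prediction: dead-reckon the position from the
    odometry-derived velocity and inflate the covariance by process noise."""
    F = np.eye(3)                        # position-only state, constant-velocity motion model
    state = F @ state + velocity * dt    # integrate the pose change from the inertial odometer
    cov = F @ cov @ F.T + process_noise * np.eye(3)
    return state, cov

pos, P = np.zeros(3), 0.01 * np.eye(3)
pos, P = predict(pos, P, velocity=np.array([1.0, 0.0, 0.2]), dt=0.1)
print(pos)   # -> [0.1  0.   0.02]
```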
Further, in the embodiment of the present invention, the laser information processing unit 120 further includes a point cloud processing module 130, a radar data processing module 140, a radar system storage module 150, a radar system map updating module 160, and a global map fusion module 170, wherein,
the point cloud processing module 130 is configured to process point cloud data;
the radar data processing module 140 is configured to process data of a radar system;
the radar system storage module 150 is used for storing information generated by the laser information processing unit;
The radar system map updating module 160 is configured to update map information of the three-dimensional laser SLAM system 100 in real time;
the global map fusion module 170 is configured to fuse all the local map information to generate global map information, so that the unmanned aerial vehicle plans a navigation path in the global map information.
The point cloud processing module 130 further comprises a filtering module 131 and a preprocessing module 132, wherein,
the filtering module 131 is configured to filter the point cloud data collected by the lidar 112;
the preprocessing module 132 is configured to perform preliminary processing on the filtered point cloud data to obtain temporary local map information.
Further, in the embodiment of the present invention, the filtering includes denoising the point cloud data, removing abnormal points, and reducing the amount of redundant point cloud data.
The radar data processing module 140 further includes a radar resolving module 141, a radar system trajectory generating module 142, a radar system trajectory comparing module 143, a radar system calibration module 144, a lidar fusion module 145, wherein,
the radar resolving module 141 is configured to resolve the filtered point cloud data to obtain corresponding pose and motion information of the laser radar and local map information of the surrounding environment of the current frame;
The radar system track generation module 142 is configured to generate a motion track of the unmanned aerial vehicle in two continuous frames by combining the pose and the motion information of the two continuous frames;
the radar system track comparison module 143 is used for comparing the motion track and the error;
the radar system calibration module 144 is configured to correct the error of the first IMU unit 111 in real time;
the laser radar fusion module 145 is configured to fuse the pose and motion information output by the first IMU unit 111 and the laser radar.
Further, in the embodiment of the present invention, the visual SLAM system 200 further includes a visual information collecting unit 210 and a visual information processing unit 220, where the visual information collecting unit 210 is configured to collect information for the visual SLAM system 200, and the visual information processing unit 220 is configured to process the information collected by the visual information collecting unit.
The visual information acquisition unit 210 further comprises a second IMU unit 211 and at least one set of binocular cameras 212, wherein,
the second IMU unit 211 is configured to provide predicted pose and motion information for the visual SLAM system 200;
the at least one set of binocular cameras 212 is used to capture image pixel information of the surrounding environment.
Providing the predicted pose and motion information to the visual SLAM system 200 further includes the following step: the second IMU unit 211 first collects mileage information, converts it into pose change information of the unmanned aerial vehicle through the kinematic model of the unmanned aerial vehicle inertial odometer, and sends the pose change information to the Bayesian filter, which preliminarily calculates the predicted pose and motion information.
The visual information processing unit 220 includes a feature processing module 230, a visual data processing module 240, a visual system storage module 250, and a visual system map update module 260, wherein,
the feature processing module 230 is configured to process image features;
the vision data processing module 240 is configured to process data of a vision system;
the vision system storage module 250 is used for storing information generated by the vision information processing unit;
the vision system map updating module 260 is used to update map information of the vision SLAM system 200 in real time.
The feature processing module 230 further includes a feature matching tracking module 231, a feature point processing module 232, wherein,
the feature matching tracking module 231 is configured to track the image pixel information acquired by the binocular camera 212 in real time and to match features between the image pixel information of two consecutive frames;
the feature point processing module 232 is configured to perform feature point processing on the feature.
The feature point processing method comprises the following steps:
the three-dimensional position of an environmental feature point is represented with a 6-dimensional vector in the multi-keyframe matched-point triangulation method for acquiring feature-point depth; the vector comprises the position of the camera of the unmanned aerial vehicle together with the direction and depth of the environmental feature point. As the environment is continuously updated, when the covariance of the feature estimate falls below a certain set threshold, the 6-dimensional representation of the environmental feature point is converted into 3-dimensional Euclidean coordinates; several feature points established in the same image frame are reduced to a single camera pose plus a set of depths, which effectively shortens the state vector of the unmanned aerial vehicle system.
The visual data processing module 240 further includes a visual resolving module 241, a visual system trajectory generating module 242, a visual system trajectory comparing module 243, a visual system calibration module 244, a visual binocular fusion module 245, wherein,
the visual resolving module 241 is configured to resolve the processed environmental feature point information into camera pose and motion information and local map information of the surrounding environment of the current frame;
the visual system track generation module 242 is configured to generate a motion track of the unmanned aerial vehicle in two continuous frames by combining the pose and the motion information of the two continuous frames;
the vision system track comparing module 243 is used for comparing the motion track and the error;
the vision system calibration module 244 is configured to correct the error of the second IMU unit 211 in real time;
the visual binocular fusion module 245 is configured to fuse pose and motion information output by the second IMU unit 211 and the binocular camera 212.
The flight control unit 300 includes a global planning module 310, a local planning module 320, and an underlying control module 330, wherein,
the global planning module 310 is configured to plan a navigation optimal path of a global map;
the local planning module 320 is configured to plan a global optimal path for the real-time local map information obtained after the preprocessing;
The bottom layer control module 330 is configured to control and allocate the unmanned aerial vehicle.
The overall operation of the unmanned aerial vehicle system based on vision-fused three-dimensional laser radar SLAM positioning in this embodiment is as follows:
the three-dimensional laser SLAM system 100 and the visual SLAM system 200 respectively operate, and respectively generate first fusion pose and motion information and second fusion pose and motion information, and output the first fusion pose and motion information and the second fusion pose and motion information to the system calibration module 410;
the system calibration module 410 compares the first fusion pose and motion information with the second fusion pose and motion information to obtain a third comparison error, and compares the third comparison error with a preset system calibration error;
if the third comparison error is larger than the system calibration error, comparing the third comparison error with a preset system switching error; if the third comparison error is less than or equal to the system calibration error, outputting the first fusion pose and motion information and the second fusion pose and motion information to the system fusion module 430;
if the third comparison error is less than or equal to the system switching error, the system calibration module 410 replaces the second fusion pose and motion information with the first fusion pose and motion information, and outputs the first fusion pose and motion information and the second fusion pose and motion information to the system fusion module 430;
If the third comparison error is greater than the system switching error, outputting the first fusion pose and the motion information and the second fusion pose and the motion information to the system switching module 420, wherein the system switching module 420 only uses the second fusion pose and the motion information output by the visual SLAM system 200 to position the unmanned aerial vehicle;
the multi-sensor fusion module 440 fuses according to the fusion pose motion information and the updated global three-dimensional map information sent by the system fusion module, and fuses the GPS and/or RTK information simultaneously when external GPS and/or RTK signals exist, so as to obtain third fusion pose and motion information with an earth coordinate system;
the global planning module 310 plans the optimal navigation path of the unmanned aerial vehicle to the target point, i.e. the shortest path with no obstacle along the way, according to the position designated by the user, the map update information, and the third fusion pose and motion information or the second fusion pose and motion information, and outputs all data to the local planning module 320 (a minimal planning sketch follows this walkthrough);
the local planning module 320 plans a local optimal navigation path according to the real-time local map information sent by the preprocessing module 132 in the laser information processing unit and the data output by the global planning module 310, compares the local optimal navigation path with the optimal navigation path to obtain an optimal path, and outputs the optimal path to the bottom control module 330;
The bottom layer control module 330 performs control distribution on the unmanned aerial vehicle according to the optimal path, and controls information such as the flight speed, angle and azimuth of the unmanned aerial vehicle;
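As an illustration of the path comparison and control allocation steps, the following sketch uses path length as the comparison cost and converts the next waypoint of the chosen path into speed, heading and climb-angle setpoints; the cost function, setpoint names and cruise speed are assumptions, not details taken from the embodiment:

```python
import numpy as np

def path_length(path):
    """Total Euclidean length of a waypoint list, used here as the comparison cost."""
    pts = np.asarray(path, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def select_optimal_path(global_path, local_path):
    """Keep the locally re-planned path only when it is no longer than the
    globally planned one (obstacle and feasibility checks are omitted)."""
    return local_path if path_length(local_path) <= path_length(global_path) else global_path

def bottom_layer_command(position, yaw, next_waypoint, cruise_speed=2.0):
    """Turn the next waypoint of the optimal path into speed / climb-angle /
    heading setpoints for the flight controller; field names are illustrative."""
    delta = np.asarray(next_waypoint, dtype=float) - np.asarray(position, dtype=float)
    heading = float(np.arctan2(delta[1], delta[0]))                       # desired azimuth
    climb_angle = float(np.arctan2(delta[2], np.linalg.norm(delta[:2])))  # vertical angle
    return {"speed": cruise_speed, "heading": heading,
            "climb_angle": climb_angle, "yaw_error": heading - yaw}
```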
the unmanned aerial vehicle receives the instructions and begins navigating.
The three-dimensional laser SLAM system 100 operates as follows:
the first IMU unit 111 acquires the odometry information of the three-dimensional laser SLAM system 100, converts it into unmanned aerial vehicle pose change information through an inertial odometer kinematics model of the unmanned aerial vehicle, feeds it into a Bayesian filter to preliminarily calculate the first IMU pose and motion information of the current frame, and outputs the result to the radar system track generation module 142; the multi-line three-dimensional laser scanning radar acquires the point cloud data of the current frame and outputs it to the filtering module 131;
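The inertial-odometer propagation can be pictured with the minimal sketch below, which assumes a strapdown model with a small-angle attitude update and omits the covariance propagation that a full Bayesian filter would perform; the kinematic model used by the embodiment is not limited to this form:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, m/s^2

def propagate_imu(prev_pos, prev_vel, prev_att, accel_body, gyro, R_wb, dt):
    """One propagation step of the inertial odometer kinematics: the body-frame
    specific force is rotated into the world frame, gravity is restored, and
    position, velocity and (small-angle) attitude are advanced by dt."""
    accel_world = R_wb @ np.asarray(accel_body) + GRAVITY
    vel = prev_vel + accel_world * dt
    pos = prev_pos + prev_vel * dt + 0.5 * accel_world * dt ** 2
    att = prev_att + np.asarray(gyro) * dt  # roll/pitch/yaw, small-angle update
    return pos, vel, att
```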
the filtering module 131 performs filtering such as noise reduction, outlier removal and redundant point reduction on the acquired point cloud data, and outputs the filtered point cloud data to the radar resolving module 141 and the preprocessing module 132 respectively;
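The filtering described above may, for example, be realised as voxel downsampling plus statistical outlier removal, as in the following sketch; the specific filter types and parameters are assumptions, since the embodiment only names the goals (noise reduction, outlier removal, redundancy reduction):

```python
import numpy as np

def filter_point_cloud(points, voxel=0.10, k=8, std_ratio=2.0):
    """Voxel downsampling to cut redundant points, followed by a statistical
    outlier-removal pass for noise and stray returns. The brute-force
    neighbour search is only suitable for small clouds; a KD-tree would be
    used in practice. All parameters are illustrative."""
    points = np.asarray(points, dtype=float)

    # Keep one point per occupied voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    pts = points[np.sort(first_idx)]

    # Drop points whose mean distance to their k nearest neighbours is large.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    mean_knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[keep]
```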
the preprocessing module 132 performs preliminary processing on the filtered point cloud data to obtain temporary local map information and outputs the temporary local map information to the local planning module 320;
the radar resolving module 141 resolves the filtered point cloud data to obtain the laser radar pose and motion information of the current frame and the local map information of the surrounding environment of the current frame in the laser system; it outputs the laser radar pose and motion information of the current frame to the radar system track generation module 142, and outputs the local map information of the surrounding environment of the current frame to the laser system map updating module 160;
The laser system map updating module 160 updates the local map information of two consecutive frames in real time and outputs it to the global map fusion module 170;
the global map fusion module 170 fuses all local map information to generate global map information, which can be used directly the next time the unmanned aerial vehicle takes off from the same position, and outputs the global map information to the laser radar fusion module 145 and the global planning module 310;
the radar system track generation module 142 generates a first motion track from the laser radar pose and motion information of the previous frame and of the current frame, generates a second motion track from the first IMU pose and motion information of the previous frame and of the current frame, and outputs the first motion track and the second motion track to the radar system track comparison module 143;
the radar system track comparison module 143 compares the first motion track with the second motion track to obtain a first comparison error, compares the first comparison error with a preset laser SLAM calibration error, and outputs the comparison result to the radar system calibration module 144;
if the first comparison error is greater than the laser SLAM calibration error, the radar system calibration module 144 replaces the first IMU pose and motion information of the current frame with the laser radar pose and motion information of the current frame; if the first comparison error is less than or equal to the laser SLAM calibration error, the first IMU pose and motion information of the current frame is retained;
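A minimal sketch of this trajectory comparison and calibration decision is given below, assuming the motion tracks are frame-to-frame pose differences and the comparison error is the norm of their difference; the threshold value is illustrative:

```python
import numpy as np

def calibrate_first_imu(lidar_prev, lidar_curr, imu_prev, imu_curr,
                        laser_slam_calibration_error=0.10):
    """The first motion track (lidar) and the second motion track (IMU) are
    compared frame to frame; if they diverge beyond the preset calibration
    error, the lidar pose, treated as the reference true value, replaces the
    first IMU pose of the current frame."""
    first_track = np.asarray(lidar_curr) - np.asarray(lidar_prev)
    second_track = np.asarray(imu_curr) - np.asarray(imu_prev)
    first_comparison_error = float(np.linalg.norm(first_track - second_track))

    if first_comparison_error > laser_slam_calibration_error:
        return np.asarray(lidar_curr)   # calibrate: overwrite the drifting IMU pose
    return np.asarray(imu_curr)         # within tolerance: keep the IMU pose
```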
The first IMU pose and motion information and the laser radar pose and motion information are output to the laser radar fusion module 145; the laser radar fusion module 145 fuses the first IMU and laser radar pose and motion information according to a fusion algorithm, combines the currently updated global map model to obtain the first fused pose and motion information of the current frame, and outputs the first fused pose and motion information of the current frame to the system calibration module 410.
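The embodiment does not fix the fusion algorithm of the laser radar fusion module 145; one simple possibility, shown below purely as a sketch, is a per-axis inverse-variance weighting of the two pose estimates:

```python
import numpy as np

def fuse_lidar_imu(imu_pose, imu_var, lidar_pose, lidar_var):
    """Per-axis inverse-variance weighting of the calibrated IMU pose and the
    lidar pose; the smaller a source's variance, the more it is trusted."""
    w_imu, w_lidar = 1.0 / np.asarray(imu_var), 1.0 / np.asarray(lidar_var)
    return (w_imu * np.asarray(imu_pose) + w_lidar * np.asarray(lidar_pose)) / (w_imu + w_lidar)
```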
The visual SLAM system 200 operates as follows:
the second IMU unit 211 acquires the odometry information of the visual SLAM system 200, converts it into unmanned aerial vehicle pose change information through an inertial odometer kinematics model of the unmanned aerial vehicle, feeds it into a Bayesian filter to preliminarily calculate the second IMU pose and motion information of the current frame, and outputs the result to the vision system track generation module 242; the binocular camera 212 collects the image pixels of the current frame and outputs them to the feature matching tracking module 231;
the feature matching tracking module 231 tracks the image pixel information acquired by the binocular camera 212 in real time, matches features between two consecutive frames of image pixel information, and outputs the matched feature points to the feature point processing module 232;
the feature point processing module 232 performs feature point triangularization; specifically, a feature point depth acquisition method based on multi-keyframe matching point triangularization is used to represent the three-dimensional position of an environmental feature point as a 6-dimensional vector, comprising the position of the unmanned aerial vehicle's camera and the direction and depth information of the environmental feature point relative to the unmanned aerial vehicle; as the environment is continuously updated, when the feature estimation covariance falls below a set threshold, the 6-dimensional representation of the environmental feature point is converted into 3-dimensional Euclidean coordinates; multiple feature points are established within the same frame of image, and the representation of the feature points belonging to the same frame is reduced to a single camera pose plus multiple depths, effectively reducing the length of the unmanned aerial vehicle system state; the processed environmental feature point information is output to the vision resolving module 241;
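The conversion from the 6-dimensional feature representation to 3-dimensional Euclidean coordinates can be sketched as follows, assuming the common inverse-depth parameterisation (camera position, azimuth, elevation, inverse depth); the exact parameterisation and threshold used by the embodiment are not specified, so these details are illustrative:

```python
import numpy as np

def feature_to_euclidean(feature_6d, feature_cov, cov_threshold=1e-3):
    """Convert one environmental feature point from its 6-dimensional form
    [camera x, y, z, azimuth, elevation, inverse depth] to 3-dimensional
    Euclidean coordinates once its estimated covariance has shrunk below the
    set threshold; otherwise keep refining the 6-D representation."""
    if float(np.max(feature_cov)) >= cov_threshold:
        return None  # not yet well observed: stay in the 6-D parameterisation
    x, y, z, azimuth, elevation, inv_depth = feature_6d
    direction = np.array([np.cos(elevation) * np.cos(azimuth),
                          np.cos(elevation) * np.sin(azimuth),
                          np.sin(elevation)])
    return np.array([x, y, z]) + direction / inv_depth
```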
The vision resolving module 241 resolves the processed environmental feature point information to obtain the camera pose and motion information of the current frame and the local map of the surrounding environment of the current frame in the vision system; it outputs the camera pose and motion information of the current frame to the vision system track generation module 242, and outputs the local map information of the surrounding environment of the current frame to the vision system map updating module 260;
the vision system map updating module 260 updates the map information of the visual SLAM system 200 in real time and outputs the map information to the visual binocular fusion module 245;
the vision system track generation module 242 generates a third motion track from the camera pose and motion information of the previous frame and of the current frame, generates a fourth motion track from the second IMU pose and motion information of the previous frame and of the current frame, and outputs the third motion track and the fourth motion track to the vision system track comparison module 243;
the vision system track comparison module 243 compares the third motion track with the fourth motion track to obtain a second comparison error, compares the second comparison error with a preset visual SLAM calibration error, and outputs the comparison result to the vision system calibration module 244;
if the second comparison error is greater than the visual SLAM calibration error, the vision system calibration module 244 replaces the second IMU pose and motion information of the current frame with the camera pose and motion information of the current frame; if the second comparison error is less than or equal to the visual SLAM calibration error, the second IMU pose and motion information of the current frame is retained;
the second IMU and binocular camera pose and motion information are output to the visual binocular fusion module 245; the visual binocular fusion module 245 fuses the second IMU and camera pose and motion information according to a fusion algorithm, combines the vision system map information to obtain the second fused pose and motion information of the current frame in the vision system map, and outputs the second fused pose and motion information of the current frame to the system calibration module 410.
The method and system provided by the invention use the vision system and the three-dimensional laser system simultaneously on the basis of the system fusion module; when the navigation and positioning of one system deviates, the system switching module can switch seamlessly to the other system, improving the environmental adaptability and stability of the unmanned aerial vehicle. At the same time, the system processing unit compares and corrects the data of the two systems in real time, so that the positioning and navigation of the system are more accurate and applicable to a wider range of scenarios.
Meanwhile, in order to prevent errors generated by the inertial odometer from affecting the accuracy of unmanned aerial vehicle positioning and the quality of map construction, the invention provides a real-time calibration strategy for the inertial odometer: the laser SLAM system uses the laser radar together with this real-time calibration strategy to correct the odometer errors in real time, thereby improving the accuracy of unmanned aerial vehicle positioning and map construction in the SLAM algorithm.
In addition, compared with traditional methods, the feature point depth acquisition method based on multi-keyframe matching point triangularization does not require manually extracting features or optical flow images, constructing feature descriptors and matching features between frames, and does not require complex geometric operations; it enables continuous deep learning for the unmanned aerial vehicle and improves the overall reliability of the system.
Another important application scenario of the invention is as follows: when an unmanned aerial vehicle navigating with laser radar technology encounters heavy rain, dense smoke or dense fog, the data acquired by the laser radar can become inaccurate, so that the unmanned aerial vehicle cannot determine its own position and attitude and consequently has difficulty understanding the complex environment and making motion decisions. To prevent the unmanned aerial vehicle from being unable to make correct motion decisions in severe weather such as heavy rain, dense smoke and dense fog, the unmanned aerial vehicle provided by the embodiment of the invention includes a visual SLAM system: when the controller detects that the error between the pose obtained by the laser odometer and the reference true value is greater than a set threshold, the laser SLAM system is switched to the visual SLAM system for navigation, and the visual SLAM system adopts the feature point depth acquisition method of multi-keyframe matching point triangularization, thereby effectively realizing navigation of the unmanned aerial vehicle in severe weather environments.
The embodiments of the unmanned aerial vehicle with an automatically switching navigation system, the vision-fused three-dimensional laser radar SLAM positioning method, and the unmanned aerial vehicle system using the same provided by the invention have been described in detail above. Since those skilled in the art may vary the specific implementation and application scope according to the ideas of these embodiments, the content of this disclosure should not be construed as limiting the invention.

Claims (10)

1. An unmanned aerial vehicle with an automatically switching navigation system, comprising an unmanned aerial vehicle body, characterized by further comprising: a laser SLAM system for realizing navigation of the unmanned aerial vehicle; a visual SLAM system for assisting navigation of the unmanned aerial vehicle; and a controller; wherein, when the controller detects that the error between the pose obtained by the laser inertial odometer in the laser SLAM system and the reference true value is greater than a set switching threshold, the laser SLAM system is switched to the visual SLAM system for unmanned aerial vehicle navigation.
2. The unmanned aerial vehicle for automatically switching navigation systems according to claim 1, wherein the laser SLAM system comprises a laser radar unit and an IMU inertial unit, and the laser SLAM system uses the laser radar in combination with the inertial odometer in the inertial unit to calibrate the odometer pose information in real time.
3. The unmanned aerial vehicle of the automatic switching navigation system according to claim 2, wherein the laser SLAM system calculates the pose of the unmanned aerial vehicle by using a scanning matching algorithm of laser radar point cloud data and uses the pose as a reference true value of the position of the inertial odometer; when the error between the pose obtained by the inertial odometer and the reference true value is larger than a first set threshold value, performing calibration operation once, and updating the original pose of the inertial odometer by using the pose after the calibration operation; otherwise, the calibration is not carried out.
4. The unmanned aerial vehicle of claim 3, wherein calculating the pose of the unmanned aerial vehicle by the scan matching algorithm of laser radar data and using the pose as a reference true value for the odometer pose comprises: continuously acquiring the environment scanning information provided by the laser radar, matching two consecutive frames of point cloud data, calculating the displacement of the unmanned aerial vehicle between the two consecutive frames of laser point cloud data, and calculating the real-time pose of the unmanned aerial vehicle from the unmanned aerial vehicle displacements between all laser frames combined with the initial pose of the unmanned aerial vehicle, the real-time pose being used as the reference true value of the inertial odometer pose.
5. The unmanned aerial vehicle of claim 4, wherein the visual SLAM system comprises a binocular camera unit positioned in front of the unmanned aerial vehicle body, and wherein the visual SLAM system reconstructs the three-dimensional position of the feature points by using a multi-key frame matching point triangulated feature point depth acquisition method.
6. The unmanned aerial vehicle of claim 5, wherein the feature point depth acquisition method using multi-keyframe matching point triangularization comprises:
the three-dimensional position of an environmental feature point is represented by a six-dimensional vector using the multi-keyframe matching point triangularization feature point depth acquisition method, the representation comprising the position of the binocular camera unit of the unmanned aerial vehicle and the direction and depth information of the environmental feature point relative to the unmanned aerial vehicle;
the environment is continuously updated, and when the feature estimation covariance is smaller than a second set threshold, the six-dimensional representation of the environmental feature point of the unmanned aerial vehicle is converted into three-dimensional Euclidean coordinates; a plurality of feature points are established within the same frame of image, and the representation of the feature points belonging to the same frame is reduced to a form of the binocular camera unit pose plus a plurality of depths.
7. The unmanned aerial vehicle of claim 5, wherein the lidar unit and the IMU inertial unit are integrated into a frame, the lidar unit is located in front of the unmanned aerial vehicle body, and the binocular camera unit is located above the lidar.
8. The unmanned aerial vehicle for automatically switching navigation systems according to claim 6, wherein the laser SLAM system further comprises a laser system map updating module and a global map fusion module, wherein the laser system map updating module is used for updating the map information in the laser SLAM system according to the unmanned aerial vehicle pose information updated in real time by the laser SLAM system, and the global map fusion module is used for fusing all local map information of the laser SLAM system to generate a global three-dimensional map.
9. The unmanned aerial vehicle for automatically switching navigation systems of claim 8, wherein the visual SLAM system further comprises a visual system map updating module for updating map information in the visual SLAM system according to the pose information of the unmanned aerial vehicle updated in real time by the visual SLAM system.
10. The unmanned aerial vehicle of the automatic switching navigation system of claim 9, further comprising a multi-sensor fusion module for fusing the unmanned aerial vehicle pose motion information obtained by the laser SLAM system, the unmanned aerial vehicle pose motion information obtained by the visual SLAM system, and the updated global three-dimensional map information to obtain optimized unmanned aerial vehicle pose motion information; and when external GPS and/or RTK signals exist, the GPS and/or RTK information is fused at the same time to obtain pose motion information of the unmanned aerial vehicle referenced to the earth coordinate system.
CN202210930365.9A 2022-08-03 2022-08-03 Unmanned aerial vehicle capable of automatically switching navigation system Pending CN117572459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210930365.9A CN117572459A (en) 2022-08-03 2022-08-03 Unmanned aerial vehicle capable of automatically switching navigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210930365.9A CN117572459A (en) 2022-08-03 2022-08-03 Unmanned aerial vehicle capable of automatically switching navigation system

Publications (1)

Publication Number Publication Date
CN117572459A true CN117572459A (en) 2024-02-20

Family

ID=89884967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210930365.9A Pending CN117572459A (en) 2022-08-03 2022-08-03 Unmanned aerial vehicle capable of automatically switching navigation system

Country Status (1)

Country Link
CN (1) CN117572459A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination