CN116380062A - Robot positioning method and device, mobile robot and readable storage medium


Info

Publication number
CN116380062A
Authority
CN
China
Prior art keywords
data
map
target
mobile robot
pose
Prior art date
Legal status
Pending
Application number
CN202211698946.0A
Other languages
Chinese (zh)
Inventor
韦和钧
焦继超
赖有仿
温焕宇
何婉君
熊金冰
Current Assignee
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202211698946.0A
Publication of CN116380062A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Dead reckoning, executed aboard the object being navigated
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 Inertial navigation combined with passive imaging devices, e.g. cameras
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a robot positioning method and device, a mobile robot and a readable storage medium, and relates to the technical field of robots. The method first detects whether the data of the vision sensor, the laser radar and the GNSS system are valid in the current operating environment of the mobile robot. Fusion pose estimation is then carried out using the actual sensing data of the IMU, of a first target sensing device with valid data determined from the vision sensor and the laser radar, and of a second target sensing device with valid data determined from the GNSS system and the wheel odometer. In this way, valid sensor data suited to the current working conditions can be adaptively selected for fusion pose positioning as the operating environment of the robot changes, according to how well each sensor adapts to different operating environments, so that the advantages of the various sensors complement one another during robot pose positioning and the pose positioning accuracy of the robot in complex outdoor environments is improved.

Description

Robot positioning method and device, mobile robot and readable storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a method and apparatus for positioning a robot, a mobile robot, and a readable storage medium.
Background
With the continuous development of science and technology, robot technology is applied more and more widely in various industries, and mobile robots often need to move through complex outdoor environments to execute their expected tasks. For a mobile robot, the pose positioning accuracy in a complex outdoor environment is an important factor affecting its movement control accuracy. Therefore, how to accurately position the pose of a mobile robot in a complex environment with changeable outdoor working conditions is an important research direction of current robot technology.
Disclosure of Invention
In view of this, an object of the present application is to provide a robot positioning method and apparatus, a mobile robot, and a readable storage medium, which can adaptively select valid sensor data suited to the current working conditions according to how well each sensor adapts to different operating environments and perform fusion pose positioning, so as to realize the complementary advantages of the various sensors during robot pose positioning and improve the robot pose positioning accuracy in complex outdoor environments.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
In a first aspect, the present application provides a robot positioning method applied to a mobile robot, wherein the mobile robot includes a vision sensor, a laser radar, an IMU, a GNSS system, and a wheel odometer, the method including:
acquiring actual sensing data acquired by the vision sensor, the laser radar, the IMU, the GNSS system and the wheel type odometer at the current moment;
detecting whether the vision sensor, the laser radar and the GNSS system are valid in the current running environment of the mobile robot according to the actual sensing data of the vision sensor, the laser radar and the GNSS system;
determining a first target sensing device with valid data from the vision sensor and the laser radar, and determining a second target sensing device with valid data from the GNSS system and the wheel odometer;
performing factor graph optimization pose estimation on the respective actual sensing data of the first target sensing equipment and the IMU to obtain corresponding target tight coupling pose information;
invoking a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing equipment and the IMU, and obtaining corresponding target loose coupling pose information;
And performing factor graph optimization correction on the target tight coupling pose information and the target loose coupling pose information to obtain actual estimated pose information of the mobile robot at the current moment.
In an alternative embodiment, the step of detecting, for the vision sensor, whether the vision sensor is valid in the current operating environment of the mobile robot according to the actual sensing data of the vision sensor includes:
carrying out gray level image conversion processing on the actual sensing data of the visual sensor to obtain a corresponding gray level image to be detected;
performing Shi-Tomasi corner detection processing on the gray image to be detected to obtain the number of the Shi-Tomasi corners of the gray image to be detected;
comparing the number of the Shi-Tomasi corner points of the gray level image to be detected with a preset corner point number threshold value;
judging that the data of the vision sensor in the current running environment of the mobile robot is valid under the condition that the number of the Shi-Tomasi corner points of the gray level image to be detected is larger than or equal to the preset corner point number threshold value;
and under the condition that the number of the Shi-Tomasi corner points of the gray level image to be detected is smaller than the preset corner point number threshold value, judging that the data of the vision sensor in the current running environment of the mobile robot is invalid.
In an alternative embodiment, for the lidar, according to actual sensing data of the lidar, the step of detecting whether the lidar is valid in the current operating environment of the mobile robot includes:
carrying out map estimation according to the actual estimated pose information of the mobile robot at the moment previous to the current moment and the actual sensing data of the laser radar to obtain a corresponding local laser map;
extracting a local moving map corresponding to the local laser map from a pre-stored robot moving map;
calculating a map feature difference value between the local laser map and the local moving map;
comparing the map feature difference value with a preset feature difference threshold;
judging that the data of the laser radar in the current running environment of the mobile robot is invalid under the condition that the map characteristic difference value is larger than or equal to the preset characteristic difference threshold value;
and under the condition that the map feature difference value is smaller than the preset feature difference threshold value, judging that the data of the laser radar is valid in the current running environment of the mobile robot.
In an alternative embodiment, the step of calculating a map feature difference value between the local laser map and the local moving map includes:
Respectively carrying out map rasterization processing on the local laser map and the local moving map according to the preset grid number;
for each map grid in the local laser map, calculating an elevation difference between the maximum map elevation value of the map grid and the maximum map elevation value of the target grid corresponding to the map grid position in the local moving map;
carrying out average value operation on the respective elevation difference values of all map grids in the local laser map to obtain corresponding elevation difference average values;
and taking the calculated elevation difference mean value as the map characteristic difference value.
In an alternative embodiment, the step of detecting, for the GNSS system, whether the data of the GNSS system is valid in the current operating environment of the mobile robot according to the actual sensor data of the GNSS system includes:
detecting the data continuity condition of the GNSS system according to the historical sensing data of the GNSS system before the current moment and the actual sensing data of the GNSS system;
under the condition that the detected data continuity condition is in a data continuity state, judging that the GNSS system is valid in the current running environment of the mobile robot;
And under the condition that the detected data continuity condition is in a data discontinuity state, judging that the GNSS system fails in the current running environment of the mobile robot.
In an optional embodiment, the step of performing factor graph optimization pose estimation on the actual sensing data of each of the first target sensing device and the IMU to obtain corresponding target close-coupled pose information includes:
performing pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data;
aiming at each first target sensing device, performing odometer information prediction processing according to actual sensing data of the first target sensing device to obtain device odometer information matched with the first target sensing device;
and performing close-coupling pose estimation processing on the inertial pre-integral data and the respective device odometer information of all the first target sensing devices by using a factor graph optimization algorithm to obtain target close-coupling pose information.
In an optional embodiment, the step of calling a preset filter to perform fusion pose estimation on respective actual sensing data of the second target sensing device and the IMU to obtain corresponding target loose coupling pose information includes:
Performing pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data;
aiming at each second target sensing device, performing odometer information prediction processing according to actual sensing data of the second target sensing device to obtain device odometer information matched with the second target sensing device;
and taking the inertial pre-integration data as a pose prediction value of a preset ESKF filter, taking the device odometer information of all second target sensing devices as pose observation values of the ESKF filter, and calling the ESKF filter to perform loose coupling pose estimation processing to obtain the target loose coupling pose information.
In a second aspect, the present application provides a robotic positioning device for use with a mobile robot, wherein the mobile robot includes a vision sensor, a lidar, an IMU, a GNSS system, and a wheel odometer, the device comprising:
the sensing data acquisition module is used for acquiring actual sensing data acquired by the vision sensor, the laser radar, the IMU, the GNSS system and the wheel type odometer at the current moment;
the equipment failure detection module is used for detecting whether the data of each of the vision sensor, the laser radar and the GNSS system are valid in the current running environment of the mobile robot according to the actual sensing data of each of the vision sensor, the laser radar and the GNSS system;
An effective device screening module, configured to determine a first target sensing device with effective data in the vision sensor and the laser radar, and determine a second target sensing device with effective data in the GNSS system and the wheel odometer;
the first pose estimation module is used for carrying out factor graph optimization pose estimation on the actual sensing data of the first target sensing equipment and the IMU respectively to obtain corresponding target close-coupled pose information;
the second pose estimation module is used for calling a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing equipment and the IMU so as to obtain corresponding target loosely-coupled pose information;
and the estimated pose correction module is used for carrying out factor graph optimization correction on the target close-coupling pose information and the target loose-coupling pose information to obtain actual estimated pose information of the mobile robot at the current moment.
In a third aspect, the present application provides a mobile robot comprising a processor and a memory, the memory storing a computer program executable by the processor, the processor being capable of executing the computer program to implement the robot positioning method of any of the preceding embodiments.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot positioning method of any of the foregoing embodiments.
The beneficial effects of the embodiments of the present application may include the following:
according to the method, whether the data of the vision sensor, the laser radar and the GNSS system are valid in the current operating environment of the mobile robot is first detected. Based on the detection results, a first target sensing device with valid data is determined from the vision sensor and the laser radar, and a second target sensing device with valid data is determined from the GNSS system and the wheel odometer. Factor graph optimization pose estimation is then performed on the respective actual sensing data of the first target sensing device and the IMU to obtain corresponding target tight coupling pose information, and a preset filter is called to perform fusion pose estimation on the respective actual sensing data of the second target sensing device and the IMU to obtain corresponding target loose coupling pose information. Finally, factor graph optimization correction is performed on the target tight coupling pose information and the target loose coupling pose information to obtain the actual estimated pose information of the mobile robot at the current moment. In this way, valid sensor data suited to the current working conditions can be adaptively selected for fusion pose positioning as the operating environment of the robot changes, according to how well each sensor adapts to different operating environments, so that the advantages of the various sensors complement one another during robot pose positioning and the pose positioning accuracy of the robot in complex outdoor environments is improved.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a mobile robot according to an embodiment of the present disclosure;
fig. 2 is a flow chart of a robot positioning method according to an embodiment of the present application;
FIG. 3 is one of the flow charts of the sub-steps included in step S220 of FIG. 2;
FIG. 4 is a second flowchart illustrating the sub-steps included in the step S220 in FIG. 2;
FIG. 5 is a third flow chart illustrating the sub-steps included in the step S220 in FIG. 2;
FIG. 6 is a flow chart illustrating the sub-steps involved in step S240 in FIG. 2;
FIG. 7 is a flow chart illustrating the sub-steps involved in step S250 of FIG. 2;
Fig. 8 is a schematic diagram of the composition of the robot positioning device according to the embodiment of the present application.
Reference numerals: 10-a mobile robot; 11-memory; 12-a processor; 13-a communication unit; 14-a moving assembly; 15-a vision sensor; 16-a laser radar; 17-an IMU; 18-a GNSS system; 19-a wheel odometer; 100-a robot positioning device; 110-a sensing data acquisition module; 120-an equipment failure detection module; 130-an effective device screening module; 140-a first pose estimation module; 150-a second pose estimation module; 160-an estimated pose correction module.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be understood that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
The applicant has found through careful investigation that existing schemes for robot pose positioning through the fusion of multiple sensors fall into two categories: one fuses the sensors on the basis of a factor graph optimization algorithm, and the other fuses them on the basis of a filtering algorithm. It should be noted that the sensors addressed by the former category are only combinations of any two or all three of a laser radar, an IMU (Inertial Measurement Unit) and a vision sensor, where the fusion scheme of laser radar and IMU is not suitable for rainy and/or open scenes, and the fusion scheme of vision sensor and IMU is not suitable for scenes with illumination changes. The sensors addressed by the latter category are an IMU and a GNSS (Global Navigation Satellite System) system, and this fusion scheme is not suitable for densely occluded scenes. In other words, the existing multi-sensor fusion positioning schemes cannot effectively adapt to complex outdoor environments with changeable working conditions, and cannot effectively guarantee robot pose positioning accuracy in such environments.
In this regard, the embodiments of the present application provide a robot positioning method and device, a mobile robot, and a readable storage medium. Based on how well each sensor adapts to different operating conditions, valid sensor data suited to the current environment are adaptively selected for fusion pose positioning as the operating environment of the robot changes, so as to realize the complementary advantages of the various sensors during robot pose positioning and improve the robot pose positioning accuracy in complex outdoor environments.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The embodiments described below and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a mobile robot 10 according to an embodiment of the present disclosure. In the embodiment of the present application, the mobile robot 10 may make use of how well each of its multiple sensors adapts to different operating conditions, and adaptively select suitable, valid sensor data for fusion pose positioning according to its operating environment as it moves, so as to realize the complementary advantages of the multiple sensors during pose positioning and improve the pose positioning accuracy of the robot in complex outdoor environments. The mobile robot 10 may be a wheeled robot or a tracked robot.
In the embodiment of the present application, the mobile robot 10 may include a memory 11, a processor 12, a communication unit 13, a mobile component 14, a vision sensor 15, a lidar 16, an IMU17, a GNSS system 18, a wheel odometer 19, and a robot positioning device 100. The memory 11, the processor 12, the communication unit 13, the mobile unit 14, the vision sensor 15, the lidar 16, the IMU17, the GNSS system 18 and the wheel odometer 19 are electrically connected directly or indirectly to each other, so as to realize data transmission or interaction. For example, the memory 11, the processor 12, the communication unit 13, the mobile unit 14, the vision sensor 15, the lidar 16, the IMU17, the GNSS system 18, and the wheel odometer 19 may be electrically connected to each other through one or more communication buses or signal lines.
In this embodiment, the Memory 11 may be, but is not limited to, a random access Memory (Random Access Memory, RAM), a Read Only Memory (ROM), a programmable Read Only Memory (Programmable Read-Only Memory, PROM), an erasable Read Only Memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable Read Only Memory (Electric Erasable Programmable Read-Only Memory, EEPROM), or the like. Wherein the memory 11 is configured to store a computer program, and the processor 12, upon receiving an execution instruction, can execute the computer program accordingly.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The processor 12 may be a general-purpose processor, including at least one of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU) and a network processor (Network Processor, NP), or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor capable of implementing or performing the methods, steps and logic blocks disclosed in the embodiments of the present application.
In this embodiment, the communication unit 13 is configured to establish a communication connection between the mobile robot 10 and other electronic devices through a wireless communication network, and transmit and receive data through the wireless communication network.
In the present embodiment, the moving assembly 14 is used to achieve a position moving effect of the mobile robot 10. The movement assembly 14 may include tracks, transmission means, drive motors, wheels, etc. to ensure that the mobile robot 10 can perform a position movement function with the movement assembly 14.
In this embodiment, the vision sensor 15 may be a binocular camera, which is used to implement an image capturing function of the mobile robot 10, so as to capture an image of the current operating environment of the mobile robot 10.
In this embodiment, the laser radar 16 is configured to perform laser point cloud data acquisition on the current operating environment of the mobile robot 10, so as to obtain a specific distribution status of each thing around the mobile robot 10 in the current operating environment of the mobile robot 10.
In this embodiment, the IMU17 is configured to detect information such as a movement acceleration condition, a movement inclination condition, a robot impact condition, a robot vibration condition, and a robot rotation condition of the mobile robot 10 during movement of the robot.
In this embodiment, the GNSS system 18 is configured to detect an actual position condition of the mobile robot 10.
In the present embodiment, the wheel odometer 19 is used to detect the actual movement condition of the mobile robot 10.
In this embodiment, the robot positioning device 100 may include at least one software functional module that can be stored in the memory 11 in the form of software or firmware, or embedded in the operating system of the mobile robot 10. The processor 12 may be configured to execute the executable modules stored in the memory 11, such as the software functional modules and computer programs included in the robot positioning device 100. By means of the robot positioning device 100, the mobile robot 10 may adaptively select suitable, valid data from its multiple sensors (the vision sensor 15, the laser radar 16, the IMU 17, the GNSS system 18 and the wheel odometer 19) and perform fusion pose positioning, so as to realize the complementary advantages of the multiple sensors during robot pose positioning and improve the robot pose positioning accuracy in complex outdoor environments.
It will be appreciated that the block diagram shown in fig. 1 is merely a schematic diagram of one component of the mobile robot 10, and that the mobile robot 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
In this application, in order that the mobile robot 10 can make use of how well each sensor adapts to different operating conditions and adaptively select suitable, valid sensor data for fusion pose positioning as its operating environment changes, thereby realizing the complementary advantages of the multiple sensors during robot pose positioning and improving the pose positioning accuracy of the robot in complex outdoor environments, the robot positioning method provided by the application is described in detail below.
Referring to fig. 2, fig. 2 is a flow chart of a robot positioning method according to an embodiment of the disclosure. In the embodiment of the present application, the robot positioning method may include steps S210 to S260.
And S210, acquiring actual sensing data acquired by each of the vision sensor, the laser radar, the IMU, the GNSS system and the wheel type odometer at the current moment.
In this embodiment, the actual sensing data corresponding to the vision sensor 15 may include an environmental image of a current operating environment of the mobile robot 10, the actual sensing data corresponding to the lidar 16 may include laser point cloud data of the current operating environment of the mobile robot 10, the actual sensing data corresponding to the IMU17 may include information such as a moving acceleration condition, a moving inclination condition, a robot impact condition, a robot vibration condition, and a robot rotation condition of the mobile robot 10 at a current time, the actual sensing data corresponding to the GNSS system 18 may include an actual position condition of the mobile robot 10 at the current time, and the actual sensing data corresponding to the wheel odometer 19 may include an actual moving condition of the mobile robot 10 at the current time.
Step S220, detecting whether the data of each of the vision sensor, the laser radar and the GNSS system are valid in the current running environment of the mobile robot according to the actual sensing data of each of the vision sensor, the laser radar and the GNSS system.
In this embodiment, after the mobile robot 10 obtains the actual sensing data of the vision sensor 15, the laser radar 16, the IMU17, the GNSS system 18, and the wheel odometer 19 at the current time, the preprocessed actual sensing data of the IMU17 may be obtained by performing wavelet noise reduction processing on the actual sensing data of the IMU 17; the actual sensing data of the laser radar 16 may be subjected to downsampling, denoising and outlier removal by using a voxel filtering algorithm, and then the actual sensing data of the laser radar 16 may be subjected to motion distortion correction by using the actual sensing data of the IMU17, so as to obtain preprocessed actual sensing data of the laser radar 16; the actual sensing data of the vision sensor 15 after preprocessing can be obtained by performing de-distortion processing on the actual sensing data of the vision sensor 15 by using a camera internal reference matrix and a distortion coefficient calibrated in advance for the vision sensor 15; the pre-processed actual sensing data of the GNSS system 18 and the wheel odometer 19 may be obtained by performing a data noise reduction process and a data filtering process on the actual sensing data of the GNSS system 18 and the wheel odometer 19.
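By way of illustration only, the following Python sketch shows one plausible form of the preprocessing described above: wavelet denoising of an IMU channel, voxel downsampling of a lidar point cloud, and undistortion of a camera image. It assumes the NumPy, PyWavelets and OpenCV packages; the wavelet choice, threshold and voxel size are illustrative assumptions rather than values specified by this application.

```python
import numpy as np
import pywt   # PyWavelets, assumed available
import cv2    # OpenCV, assumed available

def wavelet_denoise(signal, wavelet="db4", level=2, thresh=0.1):
    """Soft-threshold wavelet denoising of a 1-D IMU channel (illustrative parameters)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def voxel_downsample(points, voxel=0.2):
    """Keep one point per voxel of side `voxel` metres (simple lidar downsampling)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def undistort_image(image, camera_matrix, dist_coeffs):
    """Remove lens distortion using pre-calibrated camera intrinsics."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)

# Example usage with synthetic data.
if __name__ == "__main__":
    acc_z = np.sin(np.linspace(0, 10, 500)) + 0.05 * np.random.randn(500)
    cloud = np.random.rand(10000, 3) * 20.0
    print(wavelet_denoise(acc_z).shape, voxel_downsample(cloud).shape)
```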
The mobile robot 10 will then perform a sensor failure detection for each of the vision sensor 15, the lidar 16 and the GNSS system 18 to determine whether each of the vision sensor 15, the lidar 16 and the GNSS system 18 is valid for data in the current operating environment of the mobile robot 10.
It should be noted that, because the IMU 17 and the wheel odometer 19 are extremely stable devices and are very unlikely to fail, they can by default be regarded as operating normally in any robot operating environment, and their actual sensing data can be regarded as valid in any robot operating environment.
Optionally, referring to fig. 3, fig. 3 is a schematic flow chart of the sub-steps included in step S220 in fig. 2. In the embodiment of the present application, for the vision sensor 15, the step S220 may include sub-steps S2211 to S2215 to accurately detect the device valid status of the vision sensor 15 in the current operating environment.
In sub-step S2211, gray level image conversion processing is performed on actual sensing data of the visual sensor, so as to obtain a corresponding gray level image to be detected.
And step S2212, carrying out Shi-Tomasi corner detection processing on the gray image to be detected to obtain the number of the Shi-Tomasi corners of the gray image to be detected.
And step S2213, comparing the number of the Shi-Tomasi corner points of the gray level image to be detected with a threshold value of the number of the preset corner points.
Substep S2214, when the number of Shi-Tomasi corner points of the gray level image to be detected is greater than or equal to the threshold value of the preset corner points, determining that the data of the vision sensor is valid in the current running environment of the mobile robot.
Substep S2215, determining that the data of the vision sensor in the current running environment of the mobile robot is invalid when the number of Shi-Tomasi corner points of the gray level image to be detected is smaller than the preset corner point number threshold value.
When the number of Shi-Tomasi corner points of the gray level image to be detected corresponding to the vision sensor 15 is smaller than the preset corner point number threshold value, it indicates that an illumination change scene may exist in the current operating environment and that the data of the vision sensor 15 are invalid in the current operating environment; when the number of Shi-Tomasi corner points of the gray level image to be detected corresponding to the vision sensor 15 is greater than or equal to the preset corner point number threshold value, it indicates that no illumination change scene exists in the current operating environment and that the data of the vision sensor 15 are valid in the current operating environment.
Thus, the present application can accurately detect the effective status of the device of the vision sensor 15 in the current operating environment by executing the above sub-steps S2211 to S2215.
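A minimal sketch of this validity check, assuming OpenCV's implementation of the Shi-Tomasi detector (cv2.goodFeaturesToTrack); the corner-count threshold and detector parameters below are illustrative assumptions, not values prescribed by the application.

```python
import cv2
import numpy as np

def vision_data_valid(bgr_image, corner_threshold=50):
    """Return True if enough Shi-Tomasi corners are found in the current camera frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)      # gray level image conversion
    corners = cv2.goodFeaturesToTrack(gray,                  # Shi-Tomasi corner detection
                                      maxCorners=500,
                                      qualityLevel=0.01,
                                      minDistance=10)
    count = 0 if corners is None else len(corners)
    return count >= corner_threshold                         # compare against the corner number threshold

# Example with a synthetic image.
frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
print(vision_data_valid(frame))
```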
Optionally, referring to fig. 4, fig. 4 is a second flowchart illustrating the sub-steps included in step S220 in fig. 2. In the embodiment of the present application, for the lidar 16, the step S220 may include sub-steps S2221 to S2226 to accurately detect the device effective status of the lidar 16 in the current operating environment.
In sub-step S2221, map estimation is performed according to the actual estimated pose information of the mobile robot at the previous time of the current time and the actual sensing data of the laser radar, so as to obtain a corresponding local laser map.
The mobile robot 10 may perform the robot pose estimation processing by combining the actual sensing data of the lidar 16 based on the actual estimated pose information at the previous time of the current time to obtain the preliminary estimated pose information corresponding to the lidar 16, and then perform the map estimation processing by using the preliminary estimated pose information and the actual sensing data of the lidar 16 to obtain the local laser map matched with the lidar 16 at the current time of the mobile robot 10.
Sub-step S2222 extracts a local moving map corresponding to the local laser map from the pre-stored robot moving map.
In this embodiment, after the mobile robot 10 determines the local laser map, the local laser map and the robot mobile map may be subjected to map feature matching, so as to extract a local map with a map feature similarity exceeding a preset similarity threshold from the robot mobile map, and then use the extracted local map as a local mobile map corresponding to the local laser map.
Sub-step S2223 calculates a map feature difference value between the local laser map and the local moving map.
In this embodiment, the mobile robot 10 may determine the map feature difference value between the local laser map and the local moving map by performing map feature comparison on the local laser map and the local moving map. The map feature difference value may be represented by a map elevation difference, and the step of calculating a map feature difference value between the local laser map and the local moving map may include:
respectively carrying out map rasterization processing on the local laser map and the local moving map according to the preset grid number;
For each map grid in the local laser map, calculating an elevation difference between the maximum map elevation value of the map grid and the maximum map elevation value of the target grid corresponding to the map grid position in the local moving map;
carrying out average value operation on the respective elevation difference values of all map grids in the local laser map to obtain corresponding elevation difference average values;
and taking the calculated elevation difference mean value as the map characteristic difference value.
Thus, the present application may determine the specific map feature difference condition of the local laser map and the local moving map by executing the specific step flow of the above sub-step S2223.
In sub-step S2224, the map feature difference value is compared with a preset feature difference threshold.
In sub-step S2225, if the map feature difference value is greater than or equal to the preset feature difference threshold, it is determined that the laser radar fails in the current running environment of the mobile robot.
In sub-step S2226, in the case where the map feature difference value is smaller than the preset feature difference threshold, it is determined that the data of the lidar is valid in the current operating environment of the mobile robot.
When the map feature difference value between the local laser map and the local moving map corresponding to the laser radar 16 is greater than or equal to a preset feature difference threshold, it indicates that a rainy day scene and/or an open scene may exist in the current operating environment, where the data of the laser radar 16 is invalid in the current operating environment; when the map feature difference value between the local laser map and the local moving map corresponding to the laser radar 16 is smaller than the preset feature difference threshold value, it indicates that no rainy scene and/or no open scene exists in the current running environment, and the laser radar 16 has valid data in the current running environment.
Thus, the present application can accurately detect the effective status of the device of the lidar 16 in the current operating environment by executing the above-mentioned sub-steps S2221 to S2226.
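A simplified NumPy sketch of the elevation-based comparison described in sub-steps S2221 to S2226; it takes the local laser map and local moving map as point clouds already expressed in a common frame, and the grid count and feature-difference threshold are illustrative assumptions only.

```python
import numpy as np

def max_elevation_grid(points, bounds, n_cells=32):
    """Rasterize an (N, 3) point cloud into n_cells x n_cells cells holding the maximum z (elevation)."""
    (xmin, xmax), (ymin, ymax) = bounds
    grid = np.full((n_cells, n_cells), np.nan)
    ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * n_cells).astype(int), 0, n_cells - 1)
    iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * n_cells).astype(int), 0, n_cells - 1)
    for i, j, z in zip(ix, iy, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid

def lidar_data_valid(local_laser_pts, local_move_pts, bounds, diff_threshold=0.5):
    """Data are valid when the mean per-cell elevation difference stays below the threshold."""
    laser = max_elevation_grid(local_laser_pts, bounds)
    moving = max_elevation_grid(local_move_pts, bounds)
    mask = ~np.isnan(laser) & ~np.isnan(moving)      # only compare cells observed in both maps
    if not mask.any():
        return False
    mean_diff = np.abs(laser[mask] - moving[mask]).mean()
    return mean_diff < diff_threshold

# Example: two nearly identical synthetic clouds should be judged valid.
cloud = np.random.rand(5000, 3) * [20.0, 20.0, 2.0]
print(lidar_data_valid(cloud, cloud + 0.01, ((0, 20), (0, 20))))
```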
Optionally, referring to fig. 5, fig. 5 is a third flowchart illustrating the sub-steps included in step S220 in fig. 2. In the embodiment of the present application, the step S220 may include sub-steps S2231 to S2233 for the GNSS system 18 to accurately detect the device validity status of the GNSS system 18 in the current operating environment.
Substep S2231, detecting a data coherence condition of the GNSS system according to historical sensing data of the GNSS system before the current time and actual sensing data of the GNSS system.
Substep S2232, in the case where the detected data continuity condition is in a data continuity state, determining that the data of the GNSS system is valid in the current operating environment of the mobile robot.
Substep S2233, in the case where the detected data continuity condition is in a data discontinuity state, determining that the data of the GNSS system is invalid in the current operating environment of the mobile robot.
If the data continuity condition of the GNSS system 18 indicates that the sensor data collected by the GNSS system 18 has an obvious breakpoint condition or a severe data change condition, the data continuity condition of the GNSS system 18 is indicated to be in a data discontinuous state, the current operating environment may have a dense shielding scene, and the data of the GNSS system 18 in the current operating environment is invalid; if the data continuity condition of the GNSS system 18 indicates that the sensor data collected by the GNSS system 18 does not have an obvious breakpoint condition or a severe data change condition, it indicates that the data continuity condition of the GNSS system 18 is in a data continuity state, the current operating environment does not have a dense occlusion scene, and the GNSS system 18 has valid data in the current operating environment.
Thus, the present application may accurately detect the device validity status of the GNSS system 18 in the current operating environment by performing the above-mentioned sub-steps S2231 to S2233.
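One possible reading of this continuity check is sketched below in Python; the timestamp-gap and position-jump thresholds are assumptions chosen only to make the example concrete.

```python
import numpy as np

def gnss_data_valid(history, current, max_gap_s=1.0, max_jump_m=5.0):
    """history: list of (t, x, y) fixes before the current moment; current: the latest (t, x, y).
    The data are treated as discontinuous if the fix arrives late or jumps implausibly far."""
    if not history:
        return False
    t_prev, x_prev, y_prev = history[-1]
    t_cur, x_cur, y_cur = current
    gap = t_cur - t_prev                              # obvious breakpoint in the data stream
    jump = np.hypot(x_cur - x_prev, y_cur - y_prev)   # severe change between consecutive fixes
    return gap <= max_gap_s and jump <= max_jump_m

fixes = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (0.4, 0.2, 0.0)]
print(gnss_data_valid(fixes, (0.6, 0.3, 0.0)))   # True: continuous
print(gnss_data_valid(fixes, (5.0, 40.0, 0.0)))  # False: breakpoint and large jump
```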
In step S230, a first target sensing device with valid data is determined from the vision sensor and the laser radar, and a second target sensing device with valid data is determined from the GNSS system and the wheel odometer.
In this embodiment, the wheel odometer 19 may by default be treated as a second target sensing device with valid data; if the data of the vision sensor 15 are valid in the current operating environment, the vision sensor 15 can be used as a first target sensing device, otherwise it is not; if the data of the laser radar 16 are valid in the current operating environment, the laser radar 16 can be used as a first target sensing device, otherwise it is not; and if the data of the GNSS system 18 are valid in the current operating environment, the GNSS system 18 can be used as a second target sensing device, otherwise it is not.
Therefore, by executing the step S230, the application can adaptively screen out multiple effective sensors adapted to the current running environment of the mobile robot 10 by using the adaptive degree of the multiple sensors to different working conditions.
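The screening rule of step S230 reduces to a few lines of logic; the sketch below illustrates it with device names chosen only for readability.

```python
def screen_target_devices(vision_ok, lidar_ok, gnss_ok):
    """Return (first_target_devices, second_target_devices) given the validity flags from step S220."""
    first_targets = []
    if vision_ok:
        first_targets.append("vision_sensor")
    if lidar_ok:
        first_targets.append("lidar")
    second_targets = ["wheel_odometer"]      # the wheel odometer is treated as valid by default
    if gnss_ok:
        second_targets.append("gnss")
    return first_targets, second_targets

print(screen_target_devices(vision_ok=True, lidar_ok=False, gnss_ok=True))
# (['vision_sensor'], ['wheel_odometer', 'gnss'])
```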
And step S240, performing factor graph optimization pose estimation on the respective actual sensing data of the first target sensing equipment and the IMU to obtain corresponding target close-coupled pose information.
In this embodiment, when the first target sensing device includes the vision sensor 15 and/or the laser radar 16, the target close-coupled pose information may include visual-inertial odometry (Visual Inertial Odometry, VIO) information corresponding to the vision sensor 15 and/or laser-inertial odometry (Laser Inertial Odometry, LIO) information corresponding to the laser radar 16.
Optionally, referring to fig. 6, fig. 6 is a flowchart illustrating the sub-steps included in step S240 in fig. 2. In the embodiment of the present application, the step S240 may include sub-steps S241 to S243 to perform close-coupled fusion pose positioning on multiple valid sensor data adapted to the current running environment by using a factor graph optimization algorithm.
And S241, performing pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data.
In the substep S242, for each first target sensing device, the odometer information prediction processing is performed according to the actual sensing data of the first target sensing device, so as to obtain the device odometer information matched with the first target sensing device.
The device odometer information comprises robot pose information and a pose covariance matrix determined based on the actual sensing data of the corresponding first target sensing device. If the first target sensing device includes the vision sensor 15, the actual sensing data of the vision sensor 15 may be processed by image processing means such as feature tracking, feature matching and consistency detection, in which case the device odometer information corresponding to the vision sensor 15 is visual odometry (Visual Odometry, VO) information; if the first target sensing device includes the laser radar 16, the actual sensing data of the laser radar 16 may be processed by point cloud processing means such as point cloud feature extraction and inter-frame radar matching, in which case the device odometer information corresponding to the laser radar 16 is laser odometry (Laser Odometry, LO) information.
And step S243, performing close-coupling pose estimation processing on the inertial pre-integration data and the device odometer information of each of all the first target sensing devices by using a factor graph optimization algorithm to obtain target close-coupling pose information.
In this embodiment, the mobile robot 10 may use the inertial pre-integration data and the device odometer information of each first target sensing device as input factors of a conventional factor graph optimization algorithm, and then run the factor graph optimization to obtain the target close-coupled pose information that matches the first target sensing devices and represents the robot pose.
Therefore, the method can utilize a factor graph optimization algorithm to perform close-coupled fusion pose positioning on various effective sensor data which are adapted to the current running environment by executing the substeps S241-S243.
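The application does not name a particular factor graph library; as one possible illustration only, the sketch below uses the GTSAM Python bindings (API details may vary between GTSAM versions) to fuse an IMU-derived relative motion and device odometry measurements as between-factors linking two consecutive poses. It is a deliberately reduced stand-in for the full tightly coupled estimation described above, not the claimed implementation.

```python
import numpy as np
import gtsam   # GTSAM Python bindings, assumed available

def tight_pose_from_factors(imu_delta, odom_deltas):
    """Fuse an IMU relative pose and per-device odometry relative poses between two key frames.
    imu_delta and each entry of odom_deltas are (gtsam.Pose3 relative motion, 6-vector of sigmas)."""
    graph = gtsam.NonlinearFactorGraph()
    x0, x1 = gtsam.symbol('x', 0), gtsam.symbol('x', 1)

    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
    graph.add(gtsam.PriorFactorPose3(x0, gtsam.Pose3(), prior_noise))   # anchor the previous pose

    for delta, sigmas in [imu_delta] + odom_deltas:                     # IMU and device odometry factors
        noise = gtsam.noiseModel.Diagonal.Sigmas(np.asarray(sigmas))
        graph.add(gtsam.BetweenFactorPose3(x0, x1, delta, noise))

    initial = gtsam.Values()
    initial.insert(x0, gtsam.Pose3())
    initial.insert(x1, imu_delta[0])                                    # initialise from the IMU prediction
    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    return result.atPose3(x1)

imu = (gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.50, 0.0, 0.0)), np.full(6, 0.05))
lo = (gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.48, 0.0, 0.0)), np.full(6, 0.02))
print(tight_pose_from_factors(imu, [lo]).translation())
```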
And step S250, calling a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing equipment and the IMU, and obtaining corresponding target loose coupling pose information.
In this embodiment, after determining the second target sensing devices with valid data in the current operating environment, the mobile robot 10 may perform loose coupling fusion pose estimation on the actual sensing data of all the second target sensing devices and the IMU 17 by using an ESKF (Error-State Kalman Filter), so as to obtain the corresponding target loose coupling pose information.
Optionally, referring to fig. 7, fig. 7 is a flowchart illustrating the sub-steps included in step S250 in fig. 2. In the embodiment of the present application, the step S250 may include sub-steps S251 to S253 to perform loosely-coupled fusion pose positioning on multiple valid sensor data adapted to the current running environment by using an ESKF filtering algorithm.
And step S251, carrying out pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data.
In the substep S252, for each second target sensing device, the odometer information prediction processing is performed according to the actual sensing data of the second target sensing device, so as to obtain the device odometer information matched with the second target sensing device.
In this embodiment, the second object-sensing device includes at least the wheel odometer 19. If the second target sensing device includes the GNSS system 18 in addition to the wheel odometer 19, the corresponding predicted device odometer information may include robot pose information and a pose covariance matrix determined based on actual sensing data of the corresponding second target sensing device.
And step S253, taking the inertial pre-integration data as the pose prediction value of a preset ESKF filter, taking the device odometer information of all second target sensing devices as the pose observation values of the ESKF filter, and calling the ESKF filter to perform loose coupling pose estimation processing to obtain the target loose coupling pose information.
Therefore, the application can utilize the ESKF filtering algorithm to perform loose coupling type fusion pose positioning on various effective sensor data which are adapted to the current running environment by executing the substeps S251 to S253.
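The ESKF itself is not spelled out in this application; the following NumPy sketch therefore shows only the general predict/update structure of such a filter on a planar pose (x, y, yaw), with the IMU pre-integration playing the role of the pose prediction and the device odometry of the second target sensing devices playing the role of the pose observations. It is a simplified Kalman-filter-style stand-in rather than a full error-state implementation; the class name and all noise values are assumptions.

```python
import numpy as np

class LooseFusionFilter:
    """Simplified Kalman-style fusion of an IMU-predicted pose with odometry/GNSS pose observations."""

    def __init__(self, pose, cov):
        self.x = np.asarray(pose, float)   # state: [x, y, yaw]
        self.P = np.asarray(cov, float)    # 3x3 state covariance

    def predict(self, imu_delta, Q):
        """Apply the IMU pre-integrated motion increment as the pose prediction."""
        dx, dy, dyaw = imu_delta
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dyaw])
        self.P += Q                        # grow uncertainty by the process noise (state Jacobian omitted)

    def update(self, z, R):
        """Correct with a pose observation z from a second target sensing device."""
        H = np.eye(3)                      # the observation is the pose itself
        y = z - H @ self.x
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap the yaw residual
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

f = LooseFusionFilter([0.0, 0.0, 0.0], np.eye(3) * 0.1)
f.predict([0.5, 0.0, 0.02], np.eye(3) * 0.01)             # IMU pre-integration as pose prediction
f.update(np.array([0.48, 0.01, 0.02]), np.eye(3) * 0.05)  # wheel odometer pose as observation
f.update(np.array([0.52, -0.01, 0.02]), np.eye(3) * 0.2)  # GNSS pose as observation (if data valid)
print(f.x)
```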
And step S260, performing factor graph optimization correction on the target tight coupling pose information and the target loose coupling pose information to obtain actual estimated pose information of the mobile robot at the current moment.
In this embodiment, the mobile robot 10 may use the target tightly coupled pose information obtained by the factor graph optimization algorithm and the target loosely coupled pose information obtained by the ESKF filtering algorithm as input factors of a conventional factor graph optimization algorithm, and then run the factor graph optimization so that the target loosely coupled pose information corrects the target tightly coupled pose information. In this way, the factor graph optimization algorithm and the ESKF filtering algorithm are organically combined in the process of realizing robot pose positioning through the complementary advantages of the multiple sensors, which ensures that the finally output actual estimated pose information accurately represents the real-time pose of the mobile robot 10. Valid sensor data suited to the current working conditions are thus adaptively selected for fusion pose positioning according to how well each sensor adapts to different operating environments, realizing the complementary advantages of the multiple sensors during robot pose positioning and improving the robot pose positioning accuracy in complex outdoor environments.
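Continuing the GTSAM-based illustration (again an assumed library choice, not part of this application), the correction of step S260 can be pictured as a small factor graph in which the tightly coupled and loosely coupled estimates each constrain the same pose variable and the optimizer returns their covariance-weighted reconciliation.

```python
import numpy as np
import gtsam   # GTSAM Python bindings, assumed available

def correct_pose(tight_pose, tight_sigmas, loose_pose, loose_sigmas):
    """Fuse the tightly and loosely coupled estimates of the current pose in one small factor graph."""
    graph = gtsam.NonlinearFactorGraph()
    x = gtsam.symbol('x', 1)
    graph.add(gtsam.PriorFactorPose3(x, tight_pose,
                                     gtsam.noiseModel.Diagonal.Sigmas(np.asarray(tight_sigmas))))
    graph.add(gtsam.PriorFactorPose3(x, loose_pose,
                                     gtsam.noiseModel.Diagonal.Sigmas(np.asarray(loose_sigmas))))
    initial = gtsam.Values()
    initial.insert(x, tight_pose)
    return gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize().atPose3(x)

tight = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.00, 0.0, 0.0))
loose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.10, 0.0, 0.0))
print(correct_pose(tight, np.full(6, 0.05), loose, np.full(6, 0.10)).translation())
```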
In addition, after calculating the actual estimated pose information, the mobile robot 10 may verify the obtained actual estimated pose information by using an NDT (Normal Distributions Transform)-OMP (Orthogonal Matching Pursuit) fine registration algorithm and directly output the actual estimated pose information when the verification succeeds; otherwise, the mobile robot 10 jumps back to steps S240 to S260 and executes them again, so as to ensure that the finally output actual estimated pose information is true and reliable.
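The verification step can be pictured with the following sketch, in which ndt_omp_register is a hypothetical placeholder (not a real library call) for whichever NDT-OMP fine registration routine is used, and the fitness threshold is an assumed illustrative value.

def verify_pose(estimated_pose, current_scan, local_map,
                ndt_omp_register, fitness_threshold=0.5):
    """Return True when fine registration started from `estimated_pose`
    converges with an acceptable fitness score; the caller passes in the
    registration routine, so nothing here is tied to a specific library."""
    refined_pose, fitness, converged = ndt_omp_register(
        source=current_scan, target=local_map, initial_guess=estimated_pose)
    return converged and fitness <= fitness_threshold

# Outer loop corresponding to "re-run steps S240 to S260 until verified":
# while not verify_pose(pose, scan, local_map, ndt_omp_register):
#     pose = run_steps_s240_to_s260()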
Therefore, by making use of how well each of the multiple sensors is adapted to different working-condition environments, the present application can adaptively screen the adapted and valid sensor data according to the running environment of the robot during its movement and perform fused pose positioning on that data, so that the advantages of the multiple sensors complement one another in the robot pose positioning process and the pose positioning accuracy of the robot in outdoor complex environments is improved.
In the present application, in order to ensure that the mobile robot 10 can execute the above robot positioning method through the robot positioning device 100, the above functions are implemented by dividing the robot positioning device 100 into functional modules. The specific composition of the robot positioning device 100 provided herein is described accordingly below.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a composition of a robotic positioning device 100 according to an embodiment of the disclosure. In this embodiment, the robot positioning device 100 may include a sensing data acquisition module 110, an equipment failure detection module 120, an effective equipment screening module 130, a first pose estimation module 140, a second pose estimation module 150, and an estimated pose correction module 160.
The sensing data acquisition module 110 is configured to acquire actual sensing data acquired by each of the vision sensor, the laser radar, the IMU, the GNSS system, and the wheel type odometer at the current moment.
The equipment failure detection module 120 is configured to detect, according to the actual sensing data of each of the vision sensor, the laser radar and the GNSS system, whether the data of each of the vision sensor, the laser radar and the GNSS system is valid in the current running environment of the mobile robot.
The effective equipment screening module 130 is configured to determine a first target sensing device with valid data in the vision sensor and the laser radar, and determine a second target sensing device with valid data in the GNSS system and the wheel odometer.
The first pose estimation module 140 is configured to perform factor graph optimization pose estimation on the respective actual sensing data of the first target sensing device and the IMU, so as to obtain corresponding target tightly-coupled pose information.
The second pose estimation module 150 is configured to invoke a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing device and the IMU, so as to obtain corresponding target loosely-coupled pose information.
The estimated pose correction module 160 is configured to perform factor graph optimization correction on the target tightly-coupled pose information and the target loosely-coupled pose information, so as to obtain actual estimated pose information of the mobile robot at the current moment.
It should be noted that the basic principle and the technical effects of the robot positioning device 100 provided in this embodiment of the present application are the same as those of the aforementioned robot positioning method. For the sake of brevity, for anything not mentioned in this embodiment, reference may be made to the above description of the robot positioning method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part. If implemented in the form of software functional modules and sold or used as a stand-alone product, the functions may be stored in a readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a readable storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In summary, in the robot positioning method and device, the mobile robot, and the readable storage medium provided in the embodiments of the present application, it is first detected whether the data of each of the vision sensor, the laser radar, and the GNSS system is valid in the current running environment of the mobile robot. According to the detection results, a first target sensing device with valid data is determined from the vision sensor and the laser radar, and a second target sensing device with valid data is determined from the GNSS system and the wheel odometer. Factor graph optimization pose estimation is then performed on the respective actual sensing data of the first target sensing device and the IMU to obtain the corresponding target tightly-coupled pose information, and a preset filter is called to perform fusion pose estimation on the respective actual sensing data of the second target sensing device and the IMU to obtain the corresponding target loosely-coupled pose information. Finally, factor graph optimization correction is performed on the target tightly-coupled pose information and the target loosely-coupled pose information to obtain the actual estimated pose information of the mobile robot at the current moment. In this way, the adapted and valid sensor data can be adaptively screened and fused for pose positioning according to how well each sensor is adapted to different working-condition environments, so that the advantages of the multiple sensors complement one another in the robot pose positioning process and the pose positioning accuracy of the robot in outdoor complex environments is improved.
The foregoing is merely a description of various embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A robot positioning method, characterized by being applied to a mobile robot, wherein the mobile robot comprises a vision sensor, a laser radar, an IMU, a GNSS system, and a wheel odometer, the method comprising:
acquiring actual sensing data acquired by the vision sensor, the laser radar, the IMU, the GNSS system and the wheel type odometer at the current moment;
detecting whether the vision sensor, the laser radar and the GNSS system are valid in the current running environment of the mobile robot according to the actual sensing data of the vision sensor, the laser radar and the GNSS system;
determining a first target sensing device with valid data in the vision sensor and the laser radar, and determining a second target sensing device with valid data in the GNSS system and the wheel odometer;
performing factor graph optimization pose estimation on the respective actual sensing data of the first target sensing device and the IMU to obtain corresponding target tightly-coupled pose information;
invoking a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing device and the IMU to obtain corresponding target loosely-coupled pose information;
and performing factor graph optimization correction on the target tightly-coupled pose information and the target loosely-coupled pose information to obtain actual estimated pose information of the mobile robot at the current moment.
2. The method according to claim 1, characterized in that, for the vision sensor, the step of detecting whether the data of the vision sensor is valid in the current running environment of the mobile robot according to the actual sensing data of the vision sensor comprises:
carrying out gray level image conversion processing on the actual sensing data of the visual sensor to obtain a corresponding gray level image to be detected;
performing Shi-Tomasi corner detection processing on the gray level image to be detected to obtain the number of Shi-Tomasi corner points of the gray level image to be detected;
comparing the number of the Shi-Tomasi corner points of the gray level image to be detected with a preset corner point number threshold value;
Judging that the data of the vision sensor in the current running environment of the mobile robot is valid under the condition that the number of the Shi-Tomasi corner points of the gray level image to be detected is larger than or equal to the preset corner point number threshold value;
and under the condition that the number of the Shi-Tomasi corner points of the gray level image to be detected is smaller than the preset corner point number threshold value, judging that the data of the vision sensor in the current running environment of the mobile robot is invalid.
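A minimal sketch of the corner-count check described in claim 2, assuming OpenCV's Shi-Tomasi detector (goodFeaturesToTrack) as a stand-in and an illustrative corner number threshold, might look as follows.

import cv2
import numpy as np

def camera_data_valid(bgr_image: np.ndarray, corner_threshold: int = 50) -> bool:
    """Judge the vision sensor data valid when the Shi-Tomasi corner count
    of the grayscale image reaches a preset threshold (illustrative value)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(          # Shi-Tomasi corner detector
        gray, maxCorners=500, qualityLevel=0.01, minDistance=10)
    corner_count = 0 if corners is None else len(corners)
    return corner_count >= corner_threshold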
3. The method according to claim 1, characterized in that, for the laser radar, the step of detecting whether the data of the laser radar is valid in the current running environment of the mobile robot according to the actual sensing data of the laser radar comprises:
carrying out map estimation according to the actual estimated pose information of the mobile robot at the previous moment and the actual sensing data of the laser radar to obtain a corresponding local laser map;
extracting a local moving map corresponding to the local laser map from a pre-stored robot moving map;
calculating a map feature difference value between the local laser map and the local moving map;
Comparing the map feature difference value with a preset feature difference threshold;
judging that the data of the laser radar in the current running environment of the mobile robot is invalid under the condition that the map characteristic difference value is larger than or equal to the preset characteristic difference threshold value;
and under the condition that the map feature difference value is smaller than the preset feature difference threshold value, judging that the data of the laser radar is valid in the current running environment of the mobile robot.
4. A method according to claim 3, wherein the step of calculating a map feature difference value between the local laser map and the local moving map comprises:
respectively carrying out map rasterization processing on the local laser map and the local moving map according to the preset grid number;
for each map grid in the local laser map, calculating an elevation difference value between the maximum map elevation value of the map grid and the maximum map elevation value of the target grid at the corresponding position in the local moving map;
carrying out a mean value operation on the respective elevation difference values of all the map grids in the local laser map to obtain a corresponding elevation difference mean value;
And taking the calculated elevation difference mean value as the map characteristic difference value.
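The map feature difference of claims 3 and 4 can be illustrated with the following sketch, in which the grid number, the map bounds and the array conventions are assumptions made purely for illustration.

import numpy as np

def max_elevation_grid(points: np.ndarray, bounds, grid_n: int) -> np.ndarray:
    """Rasterize an N x 3 point cloud into a grid_n x grid_n grid and keep
    the maximum elevation (z) per cell; empty cells are set to 0."""
    (xmin, xmax), (ymin, ymax) = bounds
    grid = np.full((grid_n, grid_n), -np.inf)
    ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * grid_n).astype(int), 0, grid_n - 1)
    iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * grid_n).astype(int), 0, grid_n - 1)
    np.maximum.at(grid, (ix, iy), points[:, 2])
    grid[np.isinf(grid)] = 0.0
    return grid

def map_feature_difference(laser_points, map_points, bounds, grid_n=32) -> float:
    """Mean absolute difference of per-cell maximum elevations between the
    local laser map and the corresponding local moving map."""
    laser_grid = max_elevation_grid(laser_points, bounds, grid_n)
    map_grid = max_elevation_grid(map_points, bounds, grid_n)
    return float(np.mean(np.abs(laser_grid - map_grid)))

# The laser radar data would be judged invalid when the returned value
# reaches the preset feature difference threshold, and valid otherwise.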
5. The method according to claim 1, wherein, for the GNSS system, the step of detecting whether the data of the GNSS system is valid in the current running environment of the mobile robot according to the actual sensing data of the GNSS system comprises:
detecting the data continuity condition of the GNSS system according to the historical sensing data of the GNSS system before the current moment and the actual sensing data of the GNSS system;
under the condition that the detected data continuity condition is in a data continuity state, judging that the GNSS system is valid in the current running environment of the mobile robot;
and under the condition that the detected data continuity condition is in a data discontinuity state, judging that the GNSS system fails in the current running environment of the mobile robot.
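An illustrative form of the data continuity check in claim 5 is sketched below; the time-gap and position-jump limits are assumed example values, since the claim itself does not fix a concrete continuity criterion.

import numpy as np

def gnss_data_continuous(history, current, max_gap_s=1.0, max_jump_m=5.0) -> bool:
    """Judge the GNSS data continuous when neither the time gap nor the
    position jump relative to the most recent history sample exceeds its
    limit. `history` and `current` hold (timestamp, x, y) tuples; both
    limits are illustrative values, not taken from the patent."""
    if not history:
        return False
    t_prev, x_prev, y_prev = history[-1]
    t_cur, x_cur, y_cur = current
    time_ok = (t_cur - t_prev) <= max_gap_s
    jump_ok = np.hypot(x_cur - x_prev, y_cur - y_prev) <= max_jump_m
    return time_ok and jump_ok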
6. The method according to any one of claims 1-5, wherein the step of performing factor graph optimization pose estimation on the respective actual sensing data of the first target sensing device and the IMU to obtain corresponding target tightly-coupled pose information includes:
Performing pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data;
aiming at each first target sensing device, performing odometer information prediction processing according to actual sensing data of the first target sensing device to obtain device odometer information matched with the first target sensing device;
and performing tightly-coupled pose estimation processing on the inertial pre-integration data and the respective device odometer information of all the first target sensing devices by using a factor graph optimization algorithm to obtain the target tightly-coupled pose information.
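A toy one-dimensional illustration of the tightly-coupled estimation in claim 6 is given below: the IMU pre-integration supplies relative (between-pose) factors, each first target sensing device supplies absolute (unary) odometry factors, and all factors are solved jointly as a weighted least-squares problem. A real factor graph back end is nonlinear and works on full 6-DoF poses; the numbers here are illustrative only.

import numpy as np

# Toy 1-D factor graph over three poses p0, p1, p2.
# IMU pre-integration gives relative factors p_{k+1} - p_k ~ d_k;
# device odometer information gives absolute factors p_k ~ z_k.
# Each factor contributes one weighted row to a linear least-squares problem.
imu_deltas = [(0, 1, 1.0, 100.0), (1, 2, 1.1, 100.0)]       # (i, j, measurement, weight)
odom_obs = [(0, 0.0, 10.0), (1, 0.9, 10.0), (2, 2.2, 10.0)]  # (k, measurement, weight)

rows, rhs = [], []
for i, j, d, w in imu_deltas:           # relative (between) factors
    row = np.zeros(3)
    row[i], row[j] = -1.0, 1.0
    rows.append(np.sqrt(w) * row)
    rhs.append(np.sqrt(w) * d)
for k, z, w in odom_obs:                # absolute (unary) factors
    row = np.zeros(3)
    row[k] = 1.0
    rows.append(np.sqrt(w) * row)
    rhs.append(np.sqrt(w) * z)

A, b = np.vstack(rows), np.array(rhs)
poses, *_ = np.linalg.lstsq(A, b, rcond=None)
print(poses)   # jointly optimized pose estimates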
7. The method according to any one of claims 1-5, wherein the step of calling a preset filter to perform fusion pose estimation on respective actual sensing data of the second target sensing device and the IMU to obtain corresponding target loosely-coupled pose information includes:
performing pre-integration processing on the actual sensing data of the IMU to obtain corresponding inertial pre-integration data;
aiming at each second target sensing device, performing odometer information prediction processing according to actual sensing data of the second target sensing device to obtain device odometer information matched with the second target sensing device;
and taking the inertial pre-integration data as the pose prediction value of a preset ESKF filter, taking the device odometer information of all the second target sensing devices as the pose observation values of the ESKF filter, and calling the ESKF filter to perform loosely-coupled pose estimation processing to obtain the target loosely-coupled pose information.
8. A robot positioning device, applied to a mobile robot, wherein the mobile robot includes a vision sensor, a laser radar, an IMU, a GNSS system, and a wheel odometer, the device comprising:
the sensing data acquisition module is used for acquiring actual sensing data acquired by the vision sensor, the laser radar, the IMU, the GNSS system and the wheel type odometer at the current moment;
the equipment failure detection module is used for detecting whether the data of each of the vision sensor, the laser radar and the GNSS system are valid in the current running environment of the mobile robot according to the actual sensing data of each of the vision sensor, the laser radar and the GNSS system;
an effective device screening module, configured to determine a first target sensing device with effective data in the vision sensor and the laser radar, and determine a second target sensing device with effective data in the GNSS system and the wheel odometer;
The first pose estimation module is used for carrying out factor graph optimization pose estimation on the respective actual sensing data of the first target sensing device and the IMU to obtain corresponding target tightly-coupled pose information;
the second pose estimation module is used for calling a preset filter to perform fusion pose estimation on the respective actual sensing data of the second target sensing equipment and the IMU so as to obtain corresponding target loosely-coupled pose information;
and the estimated pose correction module is used for carrying out factor graph optimization correction on the target tightly-coupled pose information and the target loosely-coupled pose information to obtain actual estimated pose information of the mobile robot at the current moment.
9. A mobile robot, comprising a processor and a memory, wherein the memory stores a computer program executable by the processor, and the processor is capable of executing the computer program to implement the robot positioning method according to any one of claims 1-7.
10. A readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the robot positioning method according to any one of claims 1-7.
CN202211698946.0A 2022-12-28 2022-12-28 Robot positioning method and device, mobile robot and readable storage medium Pending CN116380062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211698946.0A CN116380062A (en) 2022-12-28 2022-12-28 Robot positioning method and device, mobile robot and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211698946.0A CN116380062A (en) 2022-12-28 2022-12-28 Robot positioning method and device, mobile robot and readable storage medium

Publications (1)

Publication Number Publication Date
CN116380062A true CN116380062A (en) 2023-07-04

Family

ID=86975643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211698946.0A Pending CN116380062A (en) 2022-12-28 2022-12-28 Robot positioning method and device, mobile robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN116380062A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117906598A (en) * 2024-03-19 2024-04-19 深圳市其域创新科技有限公司 Positioning method and device of unmanned aerial vehicle equipment, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US11763568B2 (en) Ground plane estimation in a computer vision system
US10275649B2 (en) Apparatus of recognizing position of mobile robot using direct tracking and method thereof
CN106952308B (en) Method and system for determining position of moving object
CN1940591B (en) System and method of target tracking using sensor fusion
US9129523B2 (en) Method and system for obstacle detection for vehicles using planar sensor data
CN110986988A (en) Trajectory estimation method, medium, terminal and device fusing multi-sensor data
CN112904359B (en) Speed estimation based on remote laser detection and measurement
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
US11874666B2 (en) Self-location estimation method
CN116380062A (en) Robot positioning method and device, mobile robot and readable storage medium
CN114593735B (en) Pose prediction method and device
US20230281872A1 (en) System for calibrating extrinsic parameters for a camera in an autonomous vehicle
KR20120048958A (en) Method for tracking object and for estimating
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
Zhang et al. Visual odometry based on random finite set statistics in urban environment
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
Won et al. Robust vision-based displacement measurement and acceleration estimation using RANSAC and Kalman filter
CN114510031A (en) Robot visual navigation method and device, robot and storage medium
Gobee et al. Guided vehicle based with robot operating system for mapping and navigation task
Basit et al. Joint localization and target tracking with a monocular camera
Coble et al. Motion model and filtering techniques for scaled vehicle localization with Fiducial marker detection
US11893799B2 (en) Computer vision system for object tracking and time-to-collision
US20240134009A1 Method and apparatus of filtering dynamic objects in radar-based ego-motion estimation
Zhang et al. Dominant orientation tracking for path following
EP4195151A2 (en) Computer vision system for object tracking and time-to-collision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination