CN115493579A - Positioning correction method, positioning correction device, mowing robot and storage medium - Google Patents

Positioning correction method, positioning correction device, mowing robot and storage medium

Publication number
CN115493579A
CN202211074121.1A (application) · CN115493579A (publication)
Authority
CN
China
Prior art keywords
data
positioning
feature point
wheel speed
binocular
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202211074121.1A
Other languages
Chinese (zh)
Inventor
罗元泰
魏基栋
韩明名
邱朦文
Current Assignee
Songling Robot Chengdu Co ltd
Agilex Robotics Shenzhen Lt
Original Assignee
Songling Robot Chengdu Co ltd
Agilex Robotics Shenzhen Lt
Priority date
Filing date
Publication date
Application filed by Songling Robot Chengdu Co ltd, Agilex Robotics Shenzhen Lt filed Critical Songling Robot Chengdu Co ltd
Priority to CN202211074121.1A
Publication of CN115493579A

Classifications

    • G01C 21/005: Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
    • A01D 34/008: Mowers; control or measuring arrangements for automated or remotely controlled operation
    • G01C 21/165: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G01C 21/20: Instruments for performing navigational calculations
    • G01S 19/48: Determining position by combining or switching between position solutions derived from a satellite radio beacon positioning system and position solutions derived from a further system
    • G01S 19/49: As G01S 19/48, whereby the further system is an inertial position system, e.g. loosely coupled
    • G05D 1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using a video camera in combination with image processing means
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/467: Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V 10/50: Feature extraction by operations within image blocks or by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V 10/757: Matching configurations of points or features
    • G06V 10/82: Image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An embodiment of the application discloses a positioning correction method and device, a mowing robot, and a storage medium. The positioning correction method comprises the following steps: while the mowing robot performs a mowing operation, collecting binocular images, inertial positioning data, satellite data, and wheel speed data over a continuous period; performing time synchronization on the collected binocular images, inertial positioning data, satellite data, and wheel speed data; determining the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point; and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.

Description

Positioning correction method, positioning correction device, mowing robot and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular to a positioning correction method and device, a mowing robot, and a storage medium.
Background
Mowing robots are widely used for maintaining home courtyard lawns and trimming large lawns. A mowing robot integrates motion control, multi-sensor fusion, path planning, and related technologies. To control the mowing robot during a mowing operation, its mowing path must be planned so that all working areas are completely covered.
However, a mowing robot easily loses its positioning during a mowing operation, for example when part of the mowing area is blocked by an obstacle; the resulting inaccurate positioning then affects subsequent mowing.
Disclosure of Invention
The embodiments of the application provide a positioning correction method and device, a mowing robot, and a storage medium, which can improve the positioning accuracy of the mowing robot.
In a first aspect, an embodiment of the present application provides a positioning correction method, including:
when the mowing robot executes mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time;
performing time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data;
determining a feature point matching relationship between adjacent synchronous binocular images and a depth value corresponding to each feature point;
and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
Optionally, in some embodiments, the correcting of the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data includes:
updating the time stamps of the synchronized inertial positioning data and the synchronized wheel speed data according to the characteristic point matching relation;
performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
performing single-point positioning on the mowing robot based on the updated satellite data;
constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result;
and correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
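The pre-integration step above can be illustrated with a heavily simplified planar sketch that accumulates relative displacement, velocity, and heading change between two image timestamps from raw inertial samples. This is only a toy model under an assumed sample format, not the patent's implementation:

```python
def preintegrate(samples):
    """Accumulate relative motion between two keyframes (planar toy model).

    samples: list of (dt, accel, gyro_z) tuples, i.e. time step [s],
    forward acceleration [m/s^2], and yaw rate [rad/s].
    Returns (delta_position, delta_velocity, delta_heading) relative to
    the state at the first sample.
    """
    dp = dv = dtheta = 0.0
    for dt, accel, gyro_z in samples:
        dp += dv * dt + 0.5 * accel * dt * dt  # integrate position first,
        dv += accel * dt                       # then velocity,
        dtheta += gyro_z * dt                  # then heading
    return dp, dv, dtheta
```

For instance, ten samples of 1 m/s^2 forward acceleration over 0.1 s each yield a relative displacement of 0.5 m and a velocity change of 1 m/s; such relative quantities are the kind of constraints a pre-integration error term in the factor graph would encode.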
Optionally, in some embodiments, the constructing of a positioning factor graph corresponding to the multiple sensors according to the feature point matching relationship, the pre-integration result, and the single-point positioning result includes:
constructing a positioning error item corresponding to the single-point positioning result;
constructing a pre-integral error term corresponding to a pre-integral result;
and constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integral error term.
Optionally, in some embodiments, the correcting of the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point includes:
performing nonlinear optimization calculation on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
determining an image key frame in the binocular image based on the position estimation result;
and correcting the position of the mowing robot according to the image key frame.
Optionally, in some embodiments, the correcting of the position of the mowing robot according to the image key frame comprises:
establishing a corresponding image map under the current mowing environment according to the image key frame;
detecting the image map based on a preset point cloud map;
and when the detection result meets a preset condition, correcting the position of the mowing robot.
Optionally, in some embodiments, the time synchronization processing of the acquired binocular images, inertial positioning data, satellite data, and wheel speed data includes:
acquiring a timestamp corresponding to each group of binocular images;
and performing time alignment on the inertial positioning data, the satellite data and the wheel speed data and the corresponding binocular images based on the corresponding timestamps of each group of binocular images.
Optionally, in some embodiments, the determining of the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point includes:
identifying feature point information corresponding to the feature points of the K-th frame binocular image and feature point information corresponding to the feature points of the (K-1)-th frame binocular image, wherein K is an integer greater than 1;
determining the feature point matching relationship between adjacent synchronized binocular images based on the identified feature point information;
and inputting the binocular images into a preset depth recognition network to obtain the depth value of each feature point in the binocular images.
In a second aspect, an embodiment of the present application provides a positioning correction apparatus, including:
the acquisition module is used for acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time when the mowing robot executes mowing operation;
the synchronization module is used for carrying out time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data;
the determining module is used for determining the feature point matching relationship between adjacent synchronous binocular images and the depth value corresponding to each feature point;
and the correction module is used for correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
According to the embodiments of the application, while the mowing robot performs a mowing operation, binocular images, inertial positioning data, satellite data, and wheel speed data are collected over a continuous period. The collected data are then time-synchronized; the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are determined; and finally the position of the mowing robot is corrected according to the feature point matching relationship, the depth value of each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic view of a scene of a positioning correction method according to an embodiment of the present application;
fig. 1b is a schematic flowchart of a positioning correction method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a positioning correction apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. In addition, the connection may be for either a fixing or a circuit communication.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships as shown in the drawings; they are used only to facilitate and simplify the description of the embodiments, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and are therefore not to be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The embodiment of the application provides a positioning correction method and device, a mowing robot and a storage medium.
The positioning correction device may be integrated in a micro control unit (MCU) of the mowing robot, or in an intelligent terminal or a server. An MCU, also called a single-chip microcomputer, appropriately reduces the frequency and specification of a central processing unit (CPU) and combines it with peripheral interfaces such as memory, timers, USB, analog-to-digital/digital-to-analog converters, UART, PLC, and DMA to form a chip-level computer, enabling different combinations of control for different applications. The mowing robot can travel autonomously, avoid collisions, automatically return to its charger within its working range, perform safety and battery-level checks, and has a certain climbing ability. It is particularly suitable for lawn maintenance in places such as home courtyards and public green spaces, and is characterized by automatic mowing, grass-clipping cleanup, automatic rain sheltering, automatic charging, automatic obstacle avoidance, a compact body, an electronic virtual fence, and network control.
The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, or a smart watch. The terminal and the server may be connected directly or indirectly through wired or wireless communication. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms; the application is not limited in this respect.
For example, referring to fig. 1a, the present application provides a mowing system comprising a mowing robot 10, a server 20 and a user device 30, which are communicatively connected with each other. The user may control the mowing robot 10 to move through the user device 30 in advance, set a mowing area based on a moving track, and synchronize data corresponding to the mowing area to the mowing robot 10 and the server 20.
When the mowing robot 10 performs a mowing operation in a mowing area, it can collect binocular images, inertial positioning data, satellite data, and wheel speed data over a continuous period, either for the whole mowing operation or only for a certain period of time. The collected binocular images, inertial positioning data, satellite data, and wheel speed data are then time-synchronized; next, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are determined; and finally the position of the mowing robot is corrected according to the feature point matching relationship, the depth value of each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
In the positioning correction scheme provided by the application, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are fused with the inertial positioning data, the satellite data, and the wheel speed data to correct the positioning of the mowing robot. This avoids inaccurate positioning when the mowing robot is interfered with by obstacles, thereby improving positioning accuracy and mowing efficiency.
The following are detailed descriptions. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
A positioning correction method, comprising: while the mowing robot performs a mowing operation, collecting binocular images, inertial positioning data, satellite data, and wheel speed data over a continuous period; performing time synchronization on the collected binocular images, inertial positioning data, satellite data, and wheel speed data; determining the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point; and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
Referring to fig. 1b, fig. 1b is a schematic flow chart illustrating a positioning correction method according to an embodiment of the present disclosure. The specific flow of the positioning correction method can be as follows:
101. While the mowing robot performs a mowing operation, binocular images, inertial positioning data, satellite data, and wheel speed data are collected over a continuous period.
A binocular image is a pair of images of the observed scene captured from different positions by an imaging device, based on the parallax principle; that is, it comprises a left-eye image and a right-eye image. The inertial positioning data can be collected by an inertial positioning unit and can include three-axis acceleration information and three-axis angular velocity information of the mowing robot. The satellite data can be acquired by a receiver. The wheel speed data can be collected by a wheel-speed odometer and can include the distance traveled by the mowing robot, the rotating speed of each wheel, and the like.
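For concreteness, the four data streams described above can be modeled roughly as follows. The field names and types are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BinocularImage:
    stamp: float   # acquisition time [s]
    left: bytes    # left-eye image buffer
    right: bytes   # right-eye image buffer

@dataclass
class ImuSample:
    stamp: float
    accel: Tuple[float, float, float]  # three-axis acceleration [m/s^2]
    gyro: Tuple[float, float, float]   # three-axis angular velocity [rad/s]

@dataclass
class GnssFix:
    stamp: float
    lat: float     # latitude [deg]
    lon: float     # longitude [deg]

@dataclass
class WheelSample:
    stamp: float
    distance: float                # distance traveled [m]
    wheel_rpm: Tuple[float, ...]   # rotating speed of each wheel
```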
Optionally, in some embodiments, binocular images, inertial positioning data, satellite data, and wheel speed data may be collected for the whole mowing operation, or only for part of it. For example, when the satellite positioning signal of the mowing robot is detected to fall below a preset value, a data collection operation is triggered, that is, binocular images, inertial positioning data, satellite data, and wheel speed data are collected over a continuous period.
102. Time synchronization is performed on the collected binocular images, inertial positioning data, satellite data, and wheel speed data.
Because different sensors sample at different rates, the collected binocular images, inertial positioning data, satellite data, and wheel speed data may be misaligned in time, which hinders subsequent joint localization and hence correction of the mowing robot's position. The collected data therefore need to be time-synchronized. Optionally, in some embodiments, the synchronization can be based on the timestamps of the binocular images; that is, the step of "performing time synchronization processing on the collected binocular images, inertial positioning data, satellite data, and wheel speed data" may specifically include:
(11) Acquiring a timestamp corresponding to each group of binocular images;
(12) And performing time alignment on the inertial positioning data, the satellite data and the wheel speed data and the corresponding binocular images based on the corresponding timestamps of each group of binocular images.
It should be noted that the binocular camera acquires binocular images as a pair; therefore, in the present application, the timestamp of either the left-eye image or the right-eye image of a pair can be used to time-align the inertial positioning data, the satellite data, and the wheel speed data with the corresponding binocular images.
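A minimal sketch of this timestamp-based alignment, assuming each sensor stream is a timestamp-sorted list, could pick the measurement nearest to each image timestamp (a real system would more likely interpolate between neighboring samples):

```python
def align_to_images(image_stamps, sensor_data):
    """Align a sensor measurement stream to image timestamps.

    image_stamps: sorted list of image timestamps [s].
    sensor_data: sorted list of (timestamp, measurement) tuples.
    For every image timestamp, returns the measurement whose timestamp
    is nearest to it (nearest-neighbor alignment).
    """
    aligned = []
    for t in image_stamps:
        stamp, value = min(sensor_data, key=lambda m: abs(m[0] - t))
        aligned.append((t, value))
    return aligned
```

For example, IMU samples at 0.00 s, 0.05 s, and 0.11 s aligned to images at 0.0 s and 0.1 s would pair the first and third samples with the two images.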
103. The feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are determined.
In the present application, a feature point is a pixel in the binocular image that carries special information, such as position information, angle information, or a junction of object contour lines (also called a corner point). The position of the mowing robot can subsequently be corrected based on the feature point matching relationship and the depth values: the matching relationship helps determine the relative position between the mowing robot and a target (an object in the image), and the depth value helps determine the distance between the mowing robot and that target.
Optionally, in some embodiments, the feature points may be corner points, and the feature point matching relationship is then a corner point matching relationship. It should be noted that corner matching refers to finding the correspondence of feature pixel points between two images, so as to determine the positional relationship between the two images. Corner matching may be divided into the following three steps:
step 1: and searching the pixel points (corner points) which are most easily identified in the two images to be matched, such as edge points of objects with rich textures and the like.
And 2, step: for the detected corner, it is described by some mathematical features, such as gradient histogram, local random binary feature, etc.
And step 3: and judging the corresponding relation of the corner points in the two images through the descriptors of the corner points.
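As an illustration of step 3, the sketch below matches two small sets of hypothetical binary corner descriptors by Hamming distance with a mutual-consistency check. The descriptor strings and the distance threshold are invented for the example and are not part of the application:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_corners(desc_a, desc_b, max_dist=2):
    """For each corner descriptor in image A, find the closest descriptor in
    image B; keep the pair only if the match is mutual and close enough."""
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: hamming(da, desc_b[k]))
        # mutual check: the best partner of j must be i again
        i_back = min(range(len(desc_a)), key=lambda k: hamming(desc_b[j], desc_a[k]))
        if i_back == i and hamming(da, desc_b[j]) <= max_dist:
            matches.append((i, j))
    return matches

# toy 8-bit descriptors for three corners in each image
desc_a = ["10110010", "01100111", "11111000"]
desc_b = ["01100110", "10110011", "00000111"]
print(match_corners(desc_a, desc_b))  # [(0, 1), (1, 0)]
```

In practice the descriptors would be, for example, 256-bit binary features packed into integers, but the matching logic is the same.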
Meanwhile, the synchronized binocular images may also be input into a preset depth recognition network, which outputs the depth value corresponding to each feature point in the synchronized binocular images. That is, optionally, in some embodiments, the step of "determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point" may specifically include:
(21) Identifying characteristic point information corresponding to the characteristic points of the K frame of binocular image and characteristic point information corresponding to the characteristic points of the K-1 frame of binocular image;
(22) Determining a characteristic point matching relation between adjacent synchronous binocular images based on the identified characteristic point information;
(23) And inputting the binocular image into a preset depth recognition network to obtain the depth value of each feature point in the binocular image.
K is an integer greater than 1; for example, the feature point information corresponding to the feature points of the 2nd frame left eye image and the feature point information corresponding to the feature points of the 1st frame left eye image are identified. The feature point information may be descriptor information, and the descriptor information may be gradient histogram information, which describes the frequency of occurrence of gradient directions in a local region of the image. Of course, the descriptor information may also be scale-invariant feature transform information or speeded-up robust feature information, and may be selected according to the actual situation, which is not described herein again.
In addition, after the binocular images are input into the preset depth recognition network, a disparity map corresponding to the binocular images is output first. The disparity map records the positional deviation between the pixels of the same scene point as imaged under the two cameras; since the two cameras of the binocular camera are mounted horizontally, this deviation is generally reflected in the horizontal direction. For example, a point X in the scene has abscissa x in the left camera and coordinate (x + d) in the right camera, where d is the value of that coordinate point in the disparity map. Then, the baseline and the focal length of the acquisition device (such as the binocular camera) are acquired, and the depth value corresponding to each feature point in the binocular image is calculated based on the disparity map, the baseline and the focal length.
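The depth computation from disparity, baseline and focal length follows the standard stereo relation Z = f·B/d. A minimal sketch (the baseline, focal length and disparity values are illustrative assumptions):

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Classic stereo relation: depth Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in metres,
    and d the disparity (horizontal pixel offset) of the feature point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# a feature point with 20 px disparity, 0.12 m baseline, 600 px focal length
print(depth_from_disparity(20, 0.12, 600))  # 3.6 (metres)
```

Note that depth grows as disparity shrinks, so distant feature points (small d) have larger depth uncertainty; this is why the depth values are used only to assist the joint positioning rather than as a sole position source.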
It should be noted that, in the present application, the execution sequence of the step of "determining the feature point matching relationship between the adjacent synchronized binocular images" and the step of "determining the depth value corresponding to each feature point" is not limited, and may be specifically set according to the actual situation.
104. And correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
Because the binocular images, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data are respectively acquired by different sensors, the acquired data need to be fused in order to subsequently correct the position of the mowing robot. Optionally, in some embodiments of the present application, the acquired data may be fused through a factor graph so as to correct the position of the mowing robot. That is, the step of "correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data" may specifically include:
(31) Updating the synchronized inertial positioning data and the timestamp of the synchronized wheel speed data according to the characteristic point matching relation;
(32) Performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
(33) Performing single-point positioning on the mowing robot based on the updated satellite data;
(34) Constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result;
(35) And correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
The factor graph, as a modeling tool for expressing factorization, is simple and general, and has wide application value in the fields of coding, statistics, signal processing and artificial intelligence. A factor graph is a probabilistic graphical model; unlike a Bayesian network or a Markov random field, it is represented by a bipartite graph composed of variable nodes and factor nodes. In the present application, the positioning factor graph is constructed by taking the sensor measurement values as variable nodes and the probabilistic relationships between the measurement values and the pose of the mowing robot as factor nodes.
Because the sampling frequency of the inertia detection unit is higher than that of the binocular camera, and the time alignment during time synchronization is performed based on the timestamps of the binocular images, pre-integration processing needs to be performed on the synchronized inertial positioning data and the synchronized wheel speed data. It should be noted that the number of matched feature points between different binocular images may be smaller than the number of identified feature points; for example, 100 feature points are identified in the 1st frame of binocular images and 150 in the 2nd frame, while only 60 feature points are matched between the two frames. Therefore, the timestamps of the matched feature points are taken as the reference, and the timestamps of the synchronized inertial positioning data and the synchronized wheel speed data are updated accordingly. The pre-integration of the synchronized inertial positioning data and the synchronized wheel speed data yields information such as velocity and acceleration between image timestamps. Further, error terms corresponding to the feature point matching relationship, the pre-integration result and the single-point positioning result can be established, and the positioning factor graph can then be constructed from these error terms. That is, optionally, in some embodiments, the step of "constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result" may specifically include:
(41) Constructing a positioning error item corresponding to the single-point positioning result;
(42) Constructing a pre-integration error term corresponding to a pre-integration result;
(43) And constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integral error term.
For example, specifically, the mowing robot may be positioned by a single-point positioning technique, thereby determining the single-point position corresponding to each binocular image; then, the error of each single-point position may be estimated by a parameter estimation method or a model method, thereby constructing the error term corresponding to the single-point positioning result. For the pre-integration result, which includes a pre-integration result corresponding to the inertial positioning data and a pre-integration result corresponding to the wheel speed data, the error term corresponding to the inertial positioning data and the error term corresponding to the wheel speed data can be constructed through an accelerometer error model and a gyroscope error model. Finally, the positioning factor graph corresponding to the multiple sensors is constructed based on the positioning error term and the pre-integration error terms.
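To make the role of the error terms concrete, the 1-D toy sketch below fuses one pre-integration (odometry) factor and one single-point (GNSS) positioning factor attached to a single variable node, solving the weighted least-squares problem in closed form. The weights and measurements are invented for illustration; the real positioning factor graph involves many more states and factors and is solved iteratively:

```python
def fuse_position(prev_pos, preint_delta, gnss_pos, w_odom, w_gnss):
    """Minimal 1-D 'factor graph' with one variable node (current position x)
    and two factor nodes contributing quadratic error terms:
      odometry factor:  w_odom * (x - (prev_pos + preint_delta))**2
      GNSS factor:      w_gnss * (x - gnss_pos)**2
    The weighted least-squares minimum has a closed form."""
    odom_pred = prev_pos + preint_delta
    return (w_odom * odom_pred + w_gnss * gnss_pos) / (w_odom + w_gnss)

# previous pose 10.0 m, pre-integrated displacement 1.0 m, GNSS says 11.4 m;
# the odometry factor is weighted 3x more than the single-point GNSS fix
x = fuse_position(10.0, 1.0, 11.4, w_odom=3.0, w_gnss=1.0)
print(round(x, 3))  # 11.1
```

The estimate lands between the odometry prediction (11.0) and the GNSS fix (11.4), pulled toward whichever factor carries the larger weight, which is exactly the behaviour the error-term weights in the factor graph encode.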
Then, a nonlinear optimization solution may be performed on the positioning factor graph, for example by solving it with a least squares method; next, marginalization of the data residuals is performed on the solution result to predict an estimated position of the mowing robot, and the position of the mowing robot is corrected based on the estimated position. That is, optionally, in some embodiments, the step of "correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point" may specifically include:
(51) Carrying out nonlinear optimization calculation on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
(52) Determining an image key frame in the binocular image based on the position estimation result;
(53) And correcting the position of the mowing robot according to the image key frame.
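The keyframe selection of step (52) might, under simple assumptions, look like the following sketch: a frame is kept only when the estimated position has moved by more than a threshold since the last keyframe. The 0.5 m threshold and the pose values are illustrative assumptions, not parameters from the application:

```python
def select_keyframes(positions, min_move=0.5):
    """Keep frame 0, then keep a frame only when the estimated 2-D position
    has moved at least `min_move` metres since the last kept keyframe."""
    keyframes = [0]
    last = positions[0]
    for i, p in enumerate(positions[1:], start=1):
        dist = ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5
        if dist >= min_move:
            keyframes.append(i)
            last = p
    return keyframes

# estimated (x, y) positions for five consecutive binocular frames
poses = [(0.0, 0.0), (0.1, 0.0), (0.6, 0.0), (0.7, 0.1), (1.3, 0.1)]
print(select_keyframes(poses))  # [0, 2, 4]
```

Thresholding on position change keeps the image map sparse while still recording every frame at which the robot's positioning changed appreciably.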
For example, specifically, the binocular images corresponding to changes in the positioning of the mowing robot are determined according to the position estimation result, and those binocular images are determined as image key frames; a corresponding image map is then established based on the image key frames, and finally the position of the mowing robot is corrected based on the image map. That is, optionally, in some embodiments, the step of "correcting the position of the mowing robot according to the image key frame" may specifically include:
(61) Establishing a corresponding image map under the current mowing environment according to the image key frame;
(62) Detecting the image map based on a preset point cloud map;
(63) And when the detection result meets the preset condition, correcting the position of the mowing robot.
For example, specifically, a preset image bag-of-words model is acquired, the binocular images are input into the image bag-of-words model, and the image category corresponding to each binocular image is output; then, an image map corresponding to the current mowing environment is established based on the image categories. Next, the image map is detected based on a preset point cloud map, for example through geometric consistency detection; when a closed loop is detected, closed-loop detection is performed, and finally the position of the mowing robot is corrected according to the closed-loop error corresponding to the closed-loop detection.
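As a toy illustration of correcting position with a closed-loop error, the sketch below spreads a detected 1-D loop-closure residual linearly along the trajectory so the endpoint snaps back onto the map. Real systems would instead re-optimize the positioning factor graph with a loop-closure factor, so this is only a conceptual sketch with invented numbers:

```python
def correct_with_loop_closure(positions, loop_error):
    """When loop-closure detection reports a residual `loop_error` between
    the first and last keyframe, distribute the correction linearly along
    the trajectory (later frames accumulated more drift, so they receive
    a larger share of the correction)."""
    n = len(positions) - 1
    return [p - loop_error * (i / n) for i, p in enumerate(positions)]

# 1-D drift example: the robot returned to its start, but odometry says
# it ended 0.8 m away, so the closed-loop error is 0.8 m
track = [0.0, 2.0, 4.0, 2.0, 0.8]
print(correct_with_loop_closure(track, 0.8))
```

After correction the first position is untouched and the last returns to 0.0, while intermediate positions shift proportionally to how much drift they are assumed to have accumulated.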
According to the embodiment of the present application, when the mowing robot performs a mowing operation, binocular images, inertial positioning data, satellite data and wheel speed data in continuous time are collected; time synchronization processing is then performed on the collected binocular images, inertial positioning data, satellite data and wheel speed data; the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are then determined; finally, the position of the mowing robot is corrected according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data. In the positioning correction scheme provided by the present application, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are used to fuse the inertial positioning data, the satellite data and the wheel speed data and to correct the positioning of the mowing robot, which avoids inaccurate positioning when the mowing robot is interfered with by obstacles. Therefore, the positioning accuracy of the mowing robot can be improved, and the mowing efficiency can be improved.
In order to better implement the positioning correction method according to the embodiment of the present application, an embodiment of the present application further provides a positioning correction device based on the foregoing positioning correction method. The terms are the same as those in the positioning correction method, and details of implementation may refer to the description in the method embodiment.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a positioning correction apparatus provided in an embodiment of the present application, where the positioning correction apparatus may include an acquisition module 201, a synchronization module 202, a determination module 203, and a correction module 204, which may specifically be as follows:
the acquisition module 201 is used for acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time when the mowing robot executes mowing operation.
For example, the collecting module 201 may collect binocular images, inertial positioning data, satellite data and wheel speed data of the whole mowing process, or may collect binocular images, inertial positioning data, satellite data and wheel speed data of a part of mowing process.
And the synchronization module 202 is configured to perform time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data, and the wheel speed data.
Because different sensors acquire data at different rates, the acquired binocular images, inertial positioning data, satellite data and wheel speed data may be asynchronous in time, which is inconvenient for subsequent joint positioning and thus for correcting the position of the mowing robot. Therefore, in the present application, time synchronization processing needs to be performed on the acquired binocular images, inertial positioning data, satellite data and wheel speed data. Optionally, in some embodiments, the synchronization module 202 may specifically be configured to: acquire a timestamp corresponding to each group of binocular images; and time-align the inertial positioning data, the satellite data and the wheel speed data with the corresponding binocular images based on the timestamp corresponding to each group of binocular images.
And the determining module 203 is used for determining the feature point matching relationship between the adjacent synchronous binocular images and the depth value corresponding to each feature point.
Optionally, in some embodiments, the determining module 203 may specifically be configured to: identifying characteristic point information corresponding to the characteristic point of the K-th frame of binocular image and characteristic point information corresponding to the characteristic point of the K-1-th frame of binocular image; determining a characteristic point matching relation between adjacent synchronous binocular images based on the identified characteristic point information; and inputting the binocular image into a preset depth recognition network to obtain the depth value of each feature point in the binocular image.
And the correction module 204 is configured to correct the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
Because the binocular images, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data are respectively acquired by different sensors, the acquired data need to be fused so that the position of the mowing robot can subsequently be corrected. Optionally, in some embodiments of the present application, the correction module 204 may specifically include:
the updating unit is used for updating the synchronized inertial positioning data and the timestamp of the synchronized wheel speed data according to the characteristic point matching relation;
the processing unit is used for performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
the positioning unit is used for carrying out single-point positioning on the mowing robot based on the updated satellite data;
the construction unit is used for constructing a positioning factor graph corresponding to the multi-sensor according to the pre-integration result and the single-point positioning result;
and the correcting unit is used for correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
Optionally, in some embodiments of the present application, the construction unit may specifically be configured to: constructing a positioning error item corresponding to the single-point positioning result; constructing a pre-integral error term corresponding to a pre-integral result; and constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integration error term.
Optionally, in some embodiments of the present application, the modifying unit may specifically include:
the calculating subunit is used for carrying out nonlinear optimization calculation on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
the determining subunit is used for determining an image key frame in the binocular image based on the position estimation result;
and the correction subunit is used for correcting the position of the mowing robot according to the image key frame.
Optionally, in some embodiments of the present application, the modifying subunit may specifically be configured to: establishing a corresponding image map under the current mowing environment according to the image key frame; detecting the image map based on a preset point cloud map; and when the detection result meets the preset condition, correcting the position of the mowing robot.
In the embodiment of the present application, the acquisition module 201 acquires binocular images, inertial positioning data, satellite data and wheel speed data in continuous time when the mowing robot performs a mowing operation; the synchronization module 202 then performs time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data; the determination module 203 then determines the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point; finally, the correction module 204 corrects the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
In addition, an embodiment of the present application further provides a robot mower, as shown in fig. 3, which shows a schematic structural diagram of the robot mower according to the embodiment of the present application, specifically:
the mowing robot may include components such as a control module 301, a travel mechanism 302, a cutting module 303, and a power supply 304. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 3 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the control module 301 is a control center of the robot mower, and the control module 301 may specifically include a Central Processing Unit (CPU), a memory, an input/output port, a system bus, a timer/counter, a digital-to-analog converter, an analog-to-digital converter, and other components, where the CPU executes various functions and processes data of the robot mower by running or executing software programs and/or modules stored in the memory and calling data stored in the memory; preferably, the CPU may integrate an application processor, which mainly handles an operating system, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the CPU.
The memory may be used to store software programs and modules, and the CPU executes various functional applications and data processing by operating the software programs and modules stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the CPU access to the memory.
The travel mechanism 302 is electrically connected to the control module 301, and is configured to adjust the travel speed and travel direction of the mowing robot in response to the control signal transmitted by the control module 301, so as to implement the self-moving function of the mowing robot.
The cutting module 303 is electrically connected with the control module 301 and used for adjusting the height and the rotating speed of the cutter disc in response to the control signal transmitted by the control module to realize mowing operation.
The power supply 304 may be logically connected to the control module 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 304 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the mowing robot may further include a communication module, a sensor module, a prompt module, and the like, which are not described in detail herein.
The communication module is used for receiving and sending signals in the process of receiving and sending information, and realizes the signal receiving and sending with the user equipment, the base station or the server by establishing communication connection with the user equipment, the base station or the server.
The sensor module is used for collecting internal or external environment information and feeding the collected environment data back to the control module for decision-making, so as to realize the accurate positioning and intelligent obstacle avoidance functions of the mowing robot. Optionally, the sensors may include, without limitation: ultrasonic sensors, infrared sensors, collision sensors, rain sensors, lidar sensors, inertial measurement units, wheel speed meters, image sensors, position sensors, and other sensors.
The prompt module is used for prompting the user about the current working state of the mowing robot. In this scheme, the prompt module includes, but is not limited to, an indicator lamp, a buzzer, and the like. For example, the mowing robot may prompt the user about the current power state, the working state of the motor, the working state of the sensors and the like through the indicator lamp. For another example, when it is detected that the mowing robot has a malfunction or is stolen, an alarm prompt may be implemented through the buzzer.
Specifically, in this embodiment, the processor in the control module 301 loads the executable file corresponding to the process of one or more application programs into the memory according to the following instructions, and the processor runs the application programs stored in the memory, so as to implement various functions, as follows:
when the mowing robot executes mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time; carrying out time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data; determining a feature point matching relationship between adjacent synchronous binocular images and a depth value corresponding to each feature point; and correcting the position of the mowing robot according to the matching relation of the characteristic points, the depth value corresponding to each characteristic point, the synchronous back inertial positioning data, the synchronous back satellite data and the synchronous back wheel speed data.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to the positioning correction scheme, the positioning accuracy of the mowing robot can be improved, and the mowing efficiency is improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the positioning correction methods provided in the present application. For example, the instructions may perform the steps of:
when the mowing robot executes mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time; carrying out time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data; determining the matching relationship of the characteristic points between the adjacent synchronous binocular images and the depth value corresponding to each characteristic point; and correcting the position of the mowing robot according to the matching relation of the characteristic points, the depth value corresponding to each characteristic point, the synchronous back inertial positioning data, the synchronous back satellite data and the synchronous back wheel speed data.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any positioning correction method provided in the embodiments of the present application, beneficial effects that can be achieved by any positioning correction method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The positioning correction method, the positioning correction device, the mowing robot and the storage medium provided by the embodiment of the application are introduced in detail, and specific examples are applied to the description to explain the principle and the implementation of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for those skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method of position correction, comprising:
when the mowing robot executes mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time;
carrying out time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data;
determining a feature point matching relationship between adjacent synchronous binocular images and a depth value corresponding to each feature point;
and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
2. The method of claim 1, wherein the correcting the position of the robot lawnmower according to the feature point matching relationship, the depth value corresponding to each feature point, the post-synchronization inertial positioning data, the post-synchronization satellite data, and the post-synchronization wheel speed data comprises:
updating the time stamps of the synchronized inertial positioning data and the synchronized wheel speed data according to the characteristic point matching relation;
performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
performing single-point positioning on the mowing robot based on the updated satellite data;
constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result;
and correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
3. The method of claim 2, wherein constructing a multi-sensor corresponding location factor graph according to the pre-integration result and the single-point location result comprises:
constructing a positioning error item corresponding to the single-point positioning result;
constructing a pre-integration error term corresponding to a pre-integration result;
and constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integral error term.
4. The method of claim 2, wherein the correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point comprises:
performing nonlinear optimization on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
determining an image key frame in the binocular images based on the position estimation result;
and correcting the position of the mowing robot according to the image key frame.
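Claim 4 selects key frames from the position estimates. The patent does not state its criterion; a common heuristic, shown here with an illustrative threshold, is to promote a frame to key frame once the estimated pose has translated far enough from the previous key frame.

```python
def select_keyframes(poses, min_translation=0.5):
    """Return indices of key frames: a frame becomes a key frame when its
    estimated (x, y) position is at least min_translation metres from the
    last key frame. Threshold value is illustrative, not from the patent."""
    keys = [0]                                   # first frame is always a key frame
    for i, (x, y) in enumerate(poses[1:], start=1):
        kx, ky = poses[keys[-1]]
        if ((x - kx) ** 2 + (y - ky) ** 2) ** 0.5 >= min_translation:
            keys.append(i)
    return keys
```

Rotation-based and covisibility-based criteria are equally common; the point is only that key frames thin the image stream before map building.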
5. The method of claim 4, wherein the correcting the position of the mowing robot according to the image key frame comprises:
establishing an image map corresponding to the current mowing environment according to the image key frame;
checking the image map against a preset point cloud map;
and correcting the position of the mowing robot when the check result meets a preset condition.
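One way to read the "preset condition" of claim 5 is a consistency check between the key-frame image map and the preset point cloud map. The patent does not define the condition; the sketch below accepts the match when a sufficient fraction of image-map points has a point-cloud neighbour within a tolerance. All thresholds and names are illustrative.

```python
def map_matches(image_map_pts, cloud_map_pts, tol=0.3, min_ratio=0.8):
    """Return True when at least min_ratio of the 2-D image-map points lie
    within tol metres of some point in the preset point cloud map."""
    def near(p):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                   for q in cloud_map_pts)
    hits = sum(1 for p in image_map_pts if near(p))
    return hits / len(image_map_pts) >= min_ratio
```

A production system would use a k-d tree for the neighbour query and ICP-style alignment before counting inliers.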
6. The method according to any one of claims 1 to 5, wherein the performing time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data comprises:
acquiring a timestamp corresponding to each group of binocular images;
and time-aligning the inertial positioning data, the satellite data and the wheel speed data with the corresponding binocular images based on the timestamp corresponding to each group of binocular images.
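The time alignment of claim 6 can be sketched as a nearest-timestamp lookup: for each image timestamp, take the sensor sample closest in time. This is a minimal illustration; real pipelines typically interpolate IMU and wheel-speed samples rather than snapping to the nearest one.

```python
import bisect

def align_to_images(image_ts, sensor_ts, sensor_vals):
    """For each image timestamp, return the sensor value whose timestamp is
    nearest. sensor_ts must be sorted ascending and parallel to sensor_vals."""
    out = []
    for t in image_ts:
        i = bisect.bisect_left(sensor_ts, t)
        # candidates: the sample just before and just after t
        cands = [j for j in (i - 1, i) if 0 <= j < len(sensor_ts)]
        j = min(cands, key=lambda k: abs(sensor_ts[k] - t))
        out.append(sensor_vals[j])
    return out
```

Applying the same routine to the inertial, satellite and wheel-speed streams yields one synchronized sample per binocular image group.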
7. The method according to any one of claims 1 to 5, wherein the determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point comprises:
identifying feature point information corresponding to the feature points of the K-th frame of the binocular images and feature point information corresponding to the feature points of the (K-1)-th frame of the binocular images, wherein K is an integer greater than 1;
determining the feature point matching relationship between the adjacent synchronized binocular images based on the identified feature point information;
and inputting the binocular images into a preset depth recognition network to obtain the depth value of each feature point in the binocular images.
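The frame-to-frame matching of claim 7 can be sketched with nearest-descriptor matching plus Lowe's ratio test. The patent does not name a matcher or descriptor; the Euclidean-distance matching below is a generic stand-in (it assumes at least two candidate descriptors per frame), and the depth step, which the patent assigns to a depth recognition network, is not shown.

```python
def match_features(desc_prev, desc_cur, ratio=0.8):
    """Match frame K-1 descriptors to frame K descriptors: accept the
    nearest neighbour only if it is clearly closer than the second-nearest
    (Lowe's ratio test). Returns (index_in_prev, index_in_cur) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    matches = []
    for i, d in enumerate(desc_prev):
        scored = sorted(range(len(desc_cur)), key=lambda j: dist(d, desc_cur[j]))
        best, second = scored[0], scored[1]
        if dist(d, desc_cur[best]) < ratio * dist(d, desc_cur[second]):
            matches.append((i, best))
    return matches
```

The resulting correspondences, combined with per-feature depth, give the 3-D constraints that feed the factor graph of claim 2.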
8. A positioning correction apparatus, comprising:
an acquisition module, configured to acquire binocular images, inertial positioning data, satellite data and wheel speed data over a continuous time period while the mowing robot performs a mowing operation;
a synchronization module, configured to perform time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data;
a determining module, configured to determine a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point;
and a correction module, configured to correct the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
9. A mowing robot, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the positioning correction method according to any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the positioning correction method according to any one of claims 1 to 7.
CN202211074121.1A 2022-09-02 2022-09-02 Positioning correction method, positioning correction device, mowing robot and storage medium Pending CN115493579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211074121.1A CN115493579A (en) 2022-09-02 2022-09-02 Positioning correction method, positioning correction device, mowing robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211074121.1A CN115493579A (en) 2022-09-02 2022-09-02 Positioning correction method, positioning correction device, mowing robot and storage medium

Publications (1)

Publication Number Publication Date
CN115493579A true CN115493579A (en) 2022-12-20

Family

ID=84468114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211074121.1A Pending CN115493579A (en) 2022-09-02 2022-09-02 Positioning correction method, positioning correction device, mowing robot and storage medium

Country Status (1)

Country Link
CN (1) CN115493579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117804449A (en) * 2024-02-29 2024-04-02 锐驰激光(深圳)有限公司 Mower ground sensing method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284379A (en) * 2020-09-17 2021-01-29 江苏大学 Inertia pre-integration method of combined motion measurement system based on nonlinear integral compensation
CN113358112A (en) * 2021-06-03 2021-09-07 北京超星未来科技有限公司 Map construction method and laser inertia odometer
CN113390408A (en) * 2021-06-30 2021-09-14 深圳市优必选科技股份有限公司 Robot positioning method and device, robot and storage medium
CN113405545A (en) * 2021-07-20 2021-09-17 阿里巴巴新加坡控股有限公司 Positioning method, positioning device, electronic equipment and computer storage medium
CN113432595A (en) * 2021-07-07 2021-09-24 北京三快在线科技有限公司 Equipment state acquisition method and device, computer equipment and storage medium
WO2021248636A1 (en) * 2020-06-12 2021-12-16 东莞市普灵思智能电子有限公司 System and method for detecting and positioning autonomous driving object
CN114491316A (en) * 2022-02-11 2022-05-13 松灵机器人(深圳)有限公司 Determining method, determining device, electronic equipment and related product

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021248636A1 (en) * 2020-06-12 2021-12-16 东莞市普灵思智能电子有限公司 System and method for detecting and positioning autonomous driving object
CN112284379A (en) * 2020-09-17 2021-01-29 江苏大学 Inertia pre-integration method of combined motion measurement system based on nonlinear integral compensation
CN113358112A (en) * 2021-06-03 2021-09-07 北京超星未来科技有限公司 Map construction method and laser inertia odometer
CN113390408A (en) * 2021-06-30 2021-09-14 深圳市优必选科技股份有限公司 Robot positioning method and device, robot and storage medium
CN113432595A (en) * 2021-07-07 2021-09-24 北京三快在线科技有限公司 Equipment state acquisition method and device, computer equipment and storage medium
CN113405545A (en) * 2021-07-20 2021-09-17 阿里巴巴新加坡控股有限公司 Positioning method, positioning device, electronic equipment and computer storage medium
CN114491316A (en) * 2022-02-11 2022-05-13 松灵机器人(深圳)有限公司 Determining method, determining device, electronic equipment and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG LIN; LIAN BAOWANG: "Factor graph cooperative localization algorithm for indoor inertial navigation ***/camera topology measurement", Journal of Xi'an Jiaotong University, no. 03, 31 December 2020 (2020-12-31), pages 76-85 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117804449A (en) * 2024-02-29 2024-04-02 锐驰激光(深圳)有限公司 Mower ground sensing method, device, equipment and storage medium
CN117804449B (en) * 2024-02-29 2024-05-28 锐驰激光(深圳)有限公司 Mower ground sensing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US20230260151A1 (en) Simultaneous Localization and Mapping Method, Device, System and Storage Medium
CN113296495B (en) Path forming method and device of self-mobile equipment and automatic working system
CN112634451A (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN112179330A (en) Pose determination method and device of mobile equipment
CN111366153B (en) Positioning method for tight coupling of laser radar and IMU
Ding et al. Recent developments and applications of simultaneous localization and mapping in agriculture
Ji et al. Obstacle detection and recognition in farmland based on fusion point cloud data
WO2024022337A1 (en) Obstacle detection method and apparatus, mowing robot, and storage medium
CN112987728A (en) Robot environment map updating method, system, equipment and storage medium
CN111415417A (en) Mobile robot topology experience map construction method integrating sparse point cloud
CN115493579A (en) Positioning correction method, positioning correction device, mowing robot and storage medium
CN112684430A (en) Indoor old person walking health detection method and system, storage medium and terminal
CN115016502A (en) Intelligent obstacle avoidance method, mowing robot and storage medium
CN114897988A (en) Multi-camera positioning method, device and equipment in hinge type vehicle
CN114924287A (en) Map construction method, apparatus and medium
WO2022246812A1 (en) Positioning method and apparatus, electronic device, and storage medium
WO2024017034A1 (en) Route planning method and device, mowing robot, and storage medium
WO2024008016A1 (en) Operation map construction method and apparatus, mowing robot, and storage medium
CN115053690A (en) Mowing method, mowing device, mowing robot and storage medium
CN115136781A (en) Mowing method, mowing device, mowing robot and storage medium
CN115655288A (en) Intelligent laser positioning method and device, electronic equipment and storage medium
CN115088463A (en) Mowing method, mowing device, mowing robot and storage medium
CN115039561A (en) Mowing method, mowing device, mowing robot and storage medium
CN115268438A (en) Intelligent obstacle avoidance method and device, mowing robot and storage medium
CN115250720A (en) Mowing method, mowing device, mowing robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 9/F, Building A3, Nanshan Zhiyuan, No. 1001, Xueyuan Avenue, Changyuan Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Kuma Technology Co.,Ltd.

Applicant after: Songling Robot (Chengdu) Co.,Ltd.

Address before: 518000 1201, Tianlong mobile headquarters building, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong Province

Applicant before: Songling robot (Shenzhen) Co.,Ltd.

Country or region before: China

Applicant before: Songling Robot (Chengdu) Co.,Ltd.