Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. In addition, the connection may be for either a fixing or a circuit communication.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only to facilitate and simplify the description of the embodiments, and are not intended to indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The embodiment of the application provides a positioning correction method and device, a mowing robot and a storage medium.
The positioning correction device can be integrated in a Micro Control Unit (MCU) of the mowing robot, and can also be integrated in an intelligent terminal or a server. The MCU, also called a single-chip microcomputer, appropriately reduces the frequency and specification of a Central Processing Unit (CPU) and combines it with peripheral interfaces such as a memory, a timer/counter, USB, analog-to-digital and digital-to-analog converters, a UART, a PLC, and DMA to form a chip-level computer, so that different combined controls can be performed for different applications. The mowing robot can travel autonomously, avoid collisions, automatically return to charge within its working range, perform safety checks and battery-level detection, and has a certain climbing ability. It is particularly suitable for lawn maintenance in places such as home courtyards and public green spaces, and its features include automatic mowing, grass-clipping cleaning, automatic rain sheltering, automatic charging, automatic obstacle avoidance, a compact appearance, an electronic virtual fence, network control, and the like.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through a wired or wireless communication manner. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN, and big data and artificial intelligence platforms; the application is not limited herein.
For example, referring to fig. 1a, the present application provides a mowing system comprising a mowing robot 10, a server 20 and a user device 30, which are communicatively connected with each other. The user may control the mowing robot 10 to move through the user device 30 in advance, set a mowing area based on a moving track, and synchronize data corresponding to the mowing area to the mowing robot 10 and the server 20.
When the mowing robot 10 performs a mowing operation in the mowing area, it can collect binocular images, inertial positioning data, satellite data and wheel speed data over continuous time. For example, it can collect these data for the whole mowing operation, or only for a certain period of time. The collected binocular images, inertial positioning data, satellite data and wheel speed data are then subjected to time synchronization processing. Next, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are determined. Finally, the position of the mowing robot is corrected according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
In the positioning correction scheme provided by the application, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are used to fuse the inertial positioning data, satellite data and wheel speed data and correct the positioning of the mowing robot. This avoids inaccurate positioning when the mowing robot is interfered with by obstacles, thereby improving both the positioning accuracy of the mowing robot and the mowing efficiency.
The following are detailed descriptions. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
A positioning correction method, comprising: when the mowing robot performs a mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data over continuous time; performing time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data; determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point; and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
Referring to fig. 1b, fig. 1b is a schematic flow chart illustrating a positioning correction method according to an embodiment of the present disclosure. The specific flow of the positioning correction method can be as follows:
101. When the mowing robot performs a mowing operation, binocular images, inertial positioning data, satellite data and wheel speed data over continuous time are collected.
A binocular image is a pair of images of the measured object acquired from different positions by an imaging device based on the parallax principle; that is, a binocular image specifically comprises a left eye image and a right eye image. The inertial positioning data can be acquired by an inertial positioning unit and can include three-axis acceleration information, three-axis angular velocity information and the like of the mowing robot. The satellite data can be acquired by a receiver. The wheel speed data can be acquired by a wheel speed meter and can include the travel distance of the mowing robot, the rotating speed of each tire, and the like.
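As a concrete illustration, the four data streams described above might be represented as simple timestamped records; the field names, types and units below are illustrative assumptions, not part of the claimed method:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class StereoFrame:
    """One binocular capture: left/right eye images plus its timestamp (seconds)."""
    timestamp: float
    left: object = None   # left eye image, e.g. an HxW pixel array
    right: object = None  # right eye image


@dataclass
class ImuSample:
    """Inertial positioning sample: three-axis acceleration and angular velocity."""
    timestamp: float
    accel: Tuple[float, float, float]  # m/s^2
    gyro: Tuple[float, float, float]   # rad/s


@dataclass
class WheelSample:
    """Wheel speed sample: travel distance and per-tire rotation rates."""
    timestamp: float
    distance: float                    # metres travelled since start
    wheel_rpm: Tuple[float, float]     # (left wheel, right wheel)
```

Each record carries its own timestamp, which is what the time synchronization step below operates on.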
Optionally, in some embodiments, binocular images, inertial positioning data, satellite data and wheel speed data may be collected for the whole mowing operation, or only for a part of it. For example, when the satellite positioning signal of the mowing robot is detected to be weaker than a preset value, a data collection operation is triggered, that is, binocular images, inertial positioning data, satellite data and wheel speed data are collected over continuous time.
102. And carrying out time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data and the wheel speed data.
Because different sensors collect data at different rates, the collected binocular images, inertial positioning data, satellite data and wheel speed data may be asynchronous in time, which is inconvenient for subsequent joint positioning and thus for correcting the position of the mowing robot. Therefore, in the present application, time synchronization processing needs to be performed on the collected binocular images, inertial positioning data, satellite data and wheel speed data. Optionally, in some embodiments, the inertial positioning data, satellite data and wheel speed data may be time-synchronized based on the timestamps of the binocular images; that is, the step of "performing time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data" may specifically include:
(11) Acquiring a timestamp corresponding to each group of binocular images;
(12) And performing time alignment on the inertial positioning data, the satellite data and the wheel speed data and the corresponding binocular images based on the corresponding timestamps of each group of binocular images.
It should be noted that the binocular camera acquires a set of binocular images, and therefore, in the present application, the inertial positioning data, the satellite data, and the wheel speed data may be time-aligned with the corresponding binocular images by using the timestamp of the left eye image or the timestamp of the right eye image in the same set of binocular images.
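The timestamp-based alignment of steps (11) and (12) can be sketched as a nearest-neighbour lookup over sorted sensor timestamps; the function below is a minimal illustration (the function name and the `(timestamp, payload)` tuple format are assumptions), not the claimed implementation:

```python
import bisect


def align_to_frames(frame_times, sensor_samples):
    """For each binocular-frame timestamp, return the sensor sample whose
    timestamp is closest (nearest-neighbour time alignment).

    `sensor_samples` is a list of (timestamp, payload) tuples sorted by time;
    the same routine applies to inertial, satellite, or wheel speed streams."""
    times = [t for t, _ in sensor_samples]
    aligned = []
    for ft in frame_times:
        i = bisect.bisect_left(times, ft)
        # candidates: the sample just before and just after the frame time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - ft))
        aligned.append(sensor_samples[best])
    return aligned
```

In practice one would usually interpolate between the two neighbouring samples rather than pick the nearest, but the alignment principle is the same.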
103. And determining the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point.
In the present application, a feature point is a pixel point containing special information in the binocular image, such as a pixel point containing position information, a pixel point containing angle information, or a connection point of an object contour line (also referred to as a corner point). The position of the mowing robot is subsequently corrected based on the feature point matching relationship and the depth values: the feature point matching relationship may be used to help determine the relative position between the mowing robot and a target (i.e., an object in the image), and the depth value may be used to help determine the distance between the mowing robot and the target.
Optionally, in some embodiments, the feature points may be corner points, and the feature point matching relationship is then a corner point matching relationship. It should be noted that corner matching refers to finding the correspondence of feature pixel points between two images, so as to determine the positional relationship between the two images. Corner matching can be divided into the following three steps:
step 1: and searching the pixel points (corner points) which are most easily identified in the two images to be matched, such as edge points of objects with rich textures and the like.
And 2, step: for the detected corner, it is described by some mathematical features, such as gradient histogram, local random binary feature, etc.
And step 3: and judging the corresponding relation of the corner points in the two images through the descriptors of the corner points.
Meanwhile, the synchronized binocular images may also be input into a preset depth recognition network, which outputs a depth value corresponding to each feature point in the synchronized binocular images. That is, optionally, in some embodiments, the step of "determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point" may specifically include:
(21) Identifying characteristic point information corresponding to the characteristic points of the K frame of binocular image and characteristic point information corresponding to the characteristic points of the K-1 frame of binocular image;
(22) Determining a characteristic point matching relation between adjacent synchronous binocular images based on the identified characteristic point information;
(23) And inputting the binocular image into a preset depth recognition network to obtain the depth value of each feature point in the binocular image.
Here, K is an integer greater than 1; for example, the feature point information corresponding to the feature points of the 2nd frame left eye image and of the 1st frame left eye image is identified. The feature point information may be descriptor information, and the descriptor information may be gradient histogram information, which describes the frequency of occurrence of gradient directions in a local region of the image. Of course, the descriptor information may instead be scale-invariant feature transform information or speeded-up robust feature information, which may be selected according to the actual situation and is not described in detail herein.
In addition, after the binocular images are input into the preset depth recognition network, a disparity map corresponding to the binocular images is output first. The disparity map records the positional offset between the pixels of the same scene as imaged by the two cameras; because the two cameras of the binocular pair are mounted on the same horizontal line, this offset is generally reflected in the horizontal direction. For example, a point X in the scene with abscissa (x coordinate) x in the left camera appears at (x + d) in the right camera, where d is the value at that point in the disparity map. Then, the baseline and the focal length of the acquisition device (such as the binocular camera) are acquired, and the depth value corresponding to each feature point in the binocular image is calculated based on the disparity map, the baseline and the focal length.
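The depth computation from disparity, baseline and focal length follows the standard stereo triangulation relation Z = f·B/d; a minimal sketch (parameter names are assumptions):

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulated depth of one feature point: Z = f * B / d, where d is the
    horizontal pixel offset of the point between the left and right views,
    B is the camera baseline in metres, and f is the focal length in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For instance, with a 0.12 m baseline, a 700-pixel focal length and a 10-pixel disparity, the feature point lies 8.4 m from the camera; nearer objects have larger disparities.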
It should be noted that, in the present application, the execution sequence of the step of "determining the feature point matching relationship between the adjacent synchronized binocular images" and the step of "determining the depth value corresponding to each feature point" is not limited, and may be specifically set according to the actual situation.
104. And correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
Because the binocular images, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data are collected by different sensors, the collected data need to be fused before the position of the mowing robot can subsequently be corrected. Optionally, in some embodiments of this application, the collected data can be fused through a factor graph to correct the position of the mowing robot. That is, the step of "correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data" may specifically include:
(31) Updating the synchronized inertial positioning data and the timestamp of the synchronized wheel speed data according to the characteristic point matching relation;
(32) Performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
(33) Performing single-point positioning on the mowing robot based on the updated satellite data;
(34) Constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result;
(35) And correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
A factor graph is a modeling tool for expressing factorization; it is simple and general, and has wide application value in the fields of coding, statistics, signal processing and artificial intelligence. The factor graph is a probabilistic graphical model which, unlike a Bayesian network or a Markov random field, is represented by a bipartite graph composed of variable nodes and factor nodes. In the present application, a positioning factor graph is constructed by taking the sensor measurements as variable nodes and the probabilistic relationships between the measurements and the pose of the mowing robot as factor nodes.
Because the sampling frequency of the inertial detection unit is higher than that of the binocular camera, and the time synchronization is aligned to the timestamps of the binocular images, pre-integration processing needs to be performed on the synchronized inertial positioning data and the synchronized wheel speed data. It should be noted that the feature points matched between different binocular images may be fewer than the feature points identified: for example, if 100 feature points are identified in the 1st frame of binocular image and 150 in the 2nd frame, perhaps only 60 are matched between the two frames. Therefore, taking the timestamps of the matched feature points as the reference, the timestamps of the synchronized inertial positioning data and the synchronized wheel speed data are updated. Pre-integrating the updated inertial positioning data and the updated wheel speed data yields information such as velocity and acceleration between image frames. Further, error terms corresponding to the pre-integration result and the single-point positioning result can be established, and the positioning factor graph can be constructed from these error terms. That is, optionally, in some embodiments, the step of "constructing a positioning factor graph corresponding to the multiple sensors according to the pre-integration result and the single-point positioning result" may specifically include:
(41) Constructing a positioning error item corresponding to the single-point positioning result;
(42) Constructing a pre-integration error term corresponding to a pre-integration result;
(43) And constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integral error term.
For example, specifically, the mowing robot may be positioned by a single-point positioning technique, thereby determining a single-point position corresponding to each binocular image; the error of each single-point position may then be estimated by a parameter estimation method or a model method, thereby constructing the error term corresponding to the single-point positioning result. The pre-integration result comprises a pre-integration result corresponding to the inertial positioning data and a pre-integration result corresponding to the wheel speed data, so the error term corresponding to the inertial positioning data and the error term corresponding to the wheel speed data can be constructed through an acceleration error model and a gyroscope error model. Finally, the positioning factor graph corresponding to the multiple sensors is constructed based on the positioning error term and the pre-integration error terms.
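The pre-integration of inertial data between two image timestamps can be sketched in one dimension as follows; the real pre-integration operates on full 3-D rotation and translation states with bias terms, so this is only an illustrative simplification with assumed names:

```python
def preintegrate(imu_samples, t0, t1):
    """Accumulate the relative velocity and position change predicted by the
    inertial samples between two image timestamps t0 < t1.

    `imu_samples` is a list of (timestamp, acceleration) pairs sorted by time;
    returns (delta_position, delta_velocity). 1-D sketch only: the actual
    method integrates 3-D acceleration and angular velocity with bias terms."""
    dv = 0.0  # velocity change over [t0, t1]
    dp = 0.0  # position change over [t0, t1]
    window = [s for s in imu_samples if t0 <= s[0] <= t1]
    for (ta, aa), (tb, ab) in zip(window, window[1:]):
        dt = tb - ta
        a = 0.5 * (aa + ab)            # trapezoidal acceleration over the step
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
    return dp, dv
```

The pre-integrated (dp, dv) for each inter-frame interval is what enters the factor graph as a relative-motion constraint between consecutive camera poses.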
Then, a nonlinear optimization solution may be performed on the positioning factor graph, for example by solving it with a least squares method. A marginalization process is then performed on the data residuals of the solution result to predict an estimated position of the mowing robot, and the position of the mowing robot is corrected based on the estimated position. That is, optionally, in some embodiments, the step of "correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point" may specifically include:
(51) Carrying out nonlinear optimization calculation on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
(52) Determining an image key frame in the binocular image based on the position estimation result;
(53) And correcting the position of the mowing robot according to the image key frame.
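The nonlinear optimization of step (51) is typically a Gauss-Newton style least-squares iteration. The toy example below refines a 2-D position from range measurements to known anchor points, standing in for the full multi-sensor factor-graph solve; the residual model and all names are assumptions for illustration:

```python
def gauss_newton_position(anchors, ranges, x0, y0, iters=20):
    """Gauss-Newton refinement of a 2-D position from range measurements to
    known anchors: minimizes sum of r_i^2 with r_i = |p - anchor_i| - range_i.
    Same least-squares machinery as solving the positioning factor graph,
    but with a deliberately tiny residual model."""
    x, y = x0, y0
    for _ in range(iters):
        # accumulate normal equations  J^T J * delta = -J^T r  (2x2 system)
        h11 = h12 = h22 = g1 = g2 = 0.0
        for (ax, ay), rng in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            dist = (dx * dx + dy * dy) ** 0.5
            r = dist - rng
            j1, j2 = dx / dist, dy / dist  # Jacobian row of r_i w.r.t. (x, y)
            h11 += j1 * j1; h12 += j1 * j2; h22 += j2 * j2
            g1 += j1 * r;   g2 += j2 * r
        det = h11 * h22 - h12 * h12
        # delta = -(J^T J)^{-1} J^T r, solved in closed form for 2x2
        x -= ( h22 * g1 - h12 * g2) / det
        y -= (-h12 * g1 + h11 * g2) / det
    return x, y
```

Real factor-graph solvers apply the same normal-equation update to a much larger sparse system containing the pose, pre-integration and single-point-positioning variables simultaneously.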
For example, specifically, the binocular images corresponding to changes in the positioning of the mowing robot are determined according to the position estimation result, and each binocular image at which the position changed is determined as an image key frame. A corresponding image map is then established based on the image key frames, and finally the position of the mowing robot is corrected based on the image map. That is, optionally, in some embodiments, the step of "correcting the position of the mowing robot according to the image key frame" may specifically include:
(61) Establishing a corresponding image map under the current mowing environment according to the image key frame;
(62) Detecting the image map based on a preset point cloud map;
(63) And when the detection result meets the preset condition, correcting the position of the mowing robot.
For example, specifically, a preset image bag-of-words model is obtained, the binocular images are input into the image bag-of-words model, and the image category corresponding to each binocular image is output. Then, an image map corresponding to the current mowing environment is established based on the image categories. Next, the image map is checked against a preset point cloud map, for example by geometric consistency detection. When a closed loop is detected, loop-closure detection is performed, and finally the position of the mowing robot is corrected according to the closed-loop error corresponding to the loop-closure detection.
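Loop-closure candidate detection with a bag-of-words model commonly reduces to comparing visual-word histograms between the current frame and earlier key frames; the sketch below uses cosine similarity with an assumed threshold, and any candidate it flags would still need the geometric consistency check described above:

```python
def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-visual-words histograms;
    a score near 1 marks a candidate loop closure."""
    dot = sum(a * b for a, b in zip(hist_a, hist_b))
    na = sum(a * a for a in hist_a) ** 0.5
    nb = sum(b * b for b in hist_b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def detect_loop(current_hist, keyframe_hists, threshold=0.9):
    """Return the index of the best-matching earlier key frame when its
    similarity exceeds `threshold`, otherwise None (no loop candidate)."""
    scores = [bow_similarity(current_hist, h) for h in keyframe_hists]
    if not scores:
        return None
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```

When a candidate passes the geometric check, the pose discrepancy between the matched key frames gives the closed-loop error used to correct the robot's position.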
According to the embodiment of the application, when the mowing robot performs a mowing operation, binocular images, inertial positioning data, satellite data and wheel speed data are collected over continuous time; time synchronization processing is then performed on the collected binocular images, inertial positioning data, satellite data and wheel speed data; the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are determined; and finally, the position of the mowing robot is corrected according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data. In the positioning correction scheme provided by the application, the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point are used to fuse the inertial positioning data, satellite data and wheel speed data and correct the positioning of the mowing robot. This avoids inaccurate positioning when the mowing robot is interfered with by obstacles, and therefore the positioning accuracy of the mowing robot and the mowing efficiency can both be improved.
In order to better implement the positioning correction method according to the embodiment of the present application, an embodiment of the present application further provides a positioning correction device based on the foregoing positioning correction method. The terms are the same as those in the positioning correction method, and details of implementation may refer to the description in the method embodiment.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a positioning correction apparatus provided in an embodiment of the present application, where the positioning correction apparatus may include an acquisition module 201, a synchronization module 202, a determination module 203, and a correction module 204, which may specifically be as follows:
the acquisition module 201 is used for acquiring binocular images, inertial positioning data, satellite data and wheel speed data in continuous time when the mowing robot executes mowing operation.
For example, the acquisition module 201 may collect binocular images, inertial positioning data, satellite data and wheel speed data for the whole mowing operation, or only for a part of it.
And the synchronization module 202 is configured to perform time synchronization processing on the acquired binocular image, the inertial positioning data, the satellite data, and the wheel speed data.
Because different sensors collect data at different rates, the collected binocular images, inertial positioning data, satellite data and wheel speed data may be asynchronous in time, which is inconvenient for subsequent joint positioning and thus for correcting the position of the mowing robot. Therefore, in the present application, time synchronization processing needs to be performed on the collected binocular images, inertial positioning data, satellite data and wheel speed data. Optionally, in some embodiments, the synchronization module 202 may specifically be configured to: acquire the timestamp corresponding to each group of binocular images; and time-align the inertial positioning data, the satellite data and the wheel speed data with the corresponding binocular images based on the timestamp corresponding to each group of binocular images.
And the determining module 203 is used for determining the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point.
Optionally, in some embodiments, the determining module 203 may specifically be configured to: identifying characteristic point information corresponding to the characteristic point of the K-th frame of binocular image and characteristic point information corresponding to the characteristic point of the K-1-th frame of binocular image; determining a characteristic point matching relation between adjacent synchronous binocular images based on the identified characteristic point information; and inputting the binocular image into a preset depth recognition network to obtain the depth value of each feature point in the binocular image.
And the correcting module 204 is configured to correct the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
Because the binocular images, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data are collected by different sensors, the collected data need to be fused before the position of the mowing robot can subsequently be corrected. Optionally, in some embodiments of this application, the correcting module 204 may specifically include:
the updating unit is used for updating the synchronized inertial positioning data and the timestamp of the synchronized wheel speed data according to the characteristic point matching relation;
the processing unit is used for performing pre-integration processing on the updated inertial positioning data and the updated wheel speed data;
the positioning unit is used for carrying out single-point positioning on the mowing robot based on the updated satellite data;
the construction unit is used for constructing a positioning factor graph corresponding to the multi-sensor according to the pre-integration result and the single-point positioning result;
and the correcting unit is used for correcting the position of the mowing robot based on the positioning factor graph and the depth value corresponding to each feature point.
Optionally, in some embodiments of the present application, the construction unit may specifically be configured to: constructing a positioning error item corresponding to the single-point positioning result; constructing a pre-integral error term corresponding to a pre-integral result; and constructing a positioning factor graph corresponding to the multi-sensor based on the positioning error term and the pre-integration error term.
Optionally, in some embodiments of the present application, the modifying unit may specifically include:
the calculating subunit is used for carrying out nonlinear optimization calculation on the positioning factor graph to obtain a position estimation result corresponding to the mowing robot;
the determining subunit is used for determining an image key frame in the binocular image based on the position estimation result;
and the correction subunit is used for correcting the position of the mowing robot according to the image key frame.
Optionally, in some embodiments of the present application, the modifying subunit may specifically be configured to: establishing a corresponding image map under the current mowing environment according to the image key frame; detecting the image map based on a preset point cloud map; and when the detection result meets the preset condition, correcting the position of the mowing robot.
In the embodiment of the application, the acquisition module 201 acquires binocular images, inertial positioning data, satellite data and wheel speed data over continuous time when the mowing robot performs a mowing operation; the synchronization module 202 then performs time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data; the determination module 203 then determines the feature point matching relationship between adjacent synchronized binocular images and the depth value corresponding to each feature point; and finally the correction module 204 corrects the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data and the synchronized wheel speed data.
In addition, an embodiment of the present application further provides a mowing robot. Fig. 3 shows a schematic structural diagram of the mowing robot according to the embodiment of the present application. Specifically:
the mowing robot may include components such as a control module 301, a traveling mechanism 302, a cutting module 303, and a power supply 304. Those skilled in the art will appreciate that the structure shown in fig. 3 does not constitute a limitation of the mowing robot, which may include more or fewer components than those shown, combine some of the components, or arrange the components differently. Wherein:
the control module 301 is the control center of the mowing robot. The control module 301 may specifically include a Central Processing Unit (CPU), a memory, input/output ports, a system bus, a timer/counter, a digital-to-analog converter, an analog-to-digital converter, and other components. The CPU performs the various functions of the mowing robot and processes its data by running or executing the software programs and/or modules stored in the memory and calling the data stored in the memory. Preferably, the CPU may integrate an application processor, which mainly handles the operating system, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the CPU.
The memory may be used to store software programs and modules, and the CPU performs various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may further include a memory controller to provide the CPU with access to the memory.
The traveling mechanism 302 is electrically connected to the control module 301 and is configured to adjust the moving speed and moving direction of the mowing robot in response to control signals transmitted by the control module 301, so as to implement the self-moving function of the mowing robot.
The cutting module 303 is electrically connected to the control module 301 and is configured to adjust the height and rotational speed of the cutting disc in response to control signals transmitted by the control module 301, so as to perform the mowing operation.
The power supply 304 may be logically connected to the control module 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 304 may further include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the mowing robot may further include a communication module, a sensor module, a prompt module, and the like, which are not described in detail herein.
The communication module is configured to receive and send signals, and exchanges signals with user equipment, a base station, or a server by establishing a communication connection with the user equipment, the base station, or the server.
The sensor module is configured to collect internal or external environment information and feed the collected environment data back to the control module for decision making, thereby implementing the accurate positioning and intelligent obstacle avoidance functions of the mowing robot. Optionally, the sensors may include, without limitation, ultrasonic sensors, infrared sensors, collision sensors, rain sensors, lidar sensors, inertial measurement units, wheel speed meters, image sensors, position sensors, and other sensors.
The prompting module is configured to prompt a user about the current working state of the mowing robot. In this solution, the prompting module includes, but is not limited to, an indicator lamp, a buzzer, and the like. For example, the mowing robot may prompt the user about the current power state, the working state of the motor, the working state of the sensors, and the like through the indicator lamp. For another example, when it is detected that the mowing robot has a malfunction or is stolen, an alarm prompt may be given by the buzzer.
Specifically, in this embodiment, the processor in the control module 301 loads the executable file corresponding to the process of one or more application programs into the memory according to the following instructions, and the processor runs the application programs stored in the memory, so as to implement various functions, as follows:
when the mowing robot performs a mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data over continuous time; performing time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data; determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point; and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
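The depth value for each matched feature point follows from the standard rectified binocular geometry, Z = f · b / d, where f is the focal length in pixels, b the stereo baseline, and d the disparity between the matched points in the left and right images. The function below is an illustrative sketch of this step; the parameter values in the example are hypothetical.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature point from a rectified binocular
    pair under the pinhole model: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For instance, with a 700-pixel focal length and a 0.12 m baseline, a 7-pixel disparity corresponds to a point 12 m away; halving the depth doubles the disparity, so nearby grass and obstacles are resolved far more precisely than distant ones.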
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to the positioning correction scheme, the positioning accuracy of the mowing robot can be improved, and the mowing efficiency is improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the positioning correction methods provided in the present application. For example, the instructions may perform the steps of:
when the mowing robot performs a mowing operation, acquiring binocular images, inertial positioning data, satellite data and wheel speed data over continuous time; performing time synchronization processing on the acquired binocular images, inertial positioning data, satellite data and wheel speed data; determining a feature point matching relationship between adjacent synchronized binocular images and a depth value corresponding to each feature point; and correcting the position of the mowing robot according to the feature point matching relationship, the depth value corresponding to each feature point, the synchronized inertial positioning data, the synchronized satellite data, and the synchronized wheel speed data.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Since the instructions stored in the storage medium can execute the steps in any positioning correction method provided in the embodiments of the present application, beneficial effects that can be achieved by any positioning correction method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The positioning correction method, the positioning correction apparatus, the mowing robot, and the storage medium provided by the embodiments of the present application have been described in detail above. Specific examples have been applied herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, the specific implementation and the application scope may vary according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.