CN117804449A - Mower ground sensing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117804449A
CN117804449A (application CN202410224591.4A)
Authority
CN
China
Prior art keywords
ground
camera
mower
image
grassland
Prior art date
Legal status
Granted
Application number
CN202410224591.4A
Other languages
Chinese (zh)
Other versions
CN117804449B (en)
Inventor
周士博 (Zhou Shibo)
黄占阳 (Huang Zhanyang)
Current Assignee
Ruichi Laser Shenzhen Co ltd
Original Assignee
Ruichi Laser Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Ruichi Laser Shenzhen Co ltd filed Critical Ruichi Laser Shenzhen Co ltd
Priority to CN202410224591.4A priority Critical patent/CN117804449B/en
Publication of CN117804449A publication Critical patent/CN117804449A/en
Application granted granted Critical
Publication of CN117804449B publication Critical patent/CN117804449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G01C21/1656 — Navigation; dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
    • G01C21/20 — Instruments for performing navigational calculations
    • G06T7/50 — Image analysis; depth or shape recovery
    • G06T2207/10028 — Indexing scheme for image analysis; range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of intelligent mowers, and discloses a mower ground sensing method, device, and equipment and a storage medium. The method comprises the following steps: performing depth estimation on a grassland ground image based on multi-view geometry to obtain a ground depth image; performing time-frequency domain processing on inertial sensing data of the mower using the grassland ground image to obtain the camera scale of the mower; performing point cloud estimation on the ground depth image and the camera scale to obtain ground depth information; and fusing the ground depth information with an environment map of the grassland to obtain a three-dimensional flatness perception curve. According to the invention, depth estimation is performed on the grassland to obtain a ground depth image, which is then corrected according to the camera scale to construct a three-dimensional flatness perception curve of the grassland. This avoids the problem of conventional RGBD sensors losing depth under strong light, which degrades navigation and obstacle avoidance, so the ground sensing precision of the mower during grassland operation can be greatly improved based on the flatness perception curve.

Description

Mower ground sensing method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent mowers, in particular to a mower ground sensing method, device and equipment and a storage medium.
Background
With the development of science and technology, mowing robots can mow autonomously along a planned path without human involvement. They are widely used in scenarios such as home lawn maintenance and trimming of large grasslands, and bring great convenience to people's daily life. When a mowing robot works outdoors, its way of sensing the environment differs greatly from that of humans. Humans perceive their surroundings through a variety of senses, including vision, hearing, smell, taste, and touch. A mowing robot, by contrast, senses the environment through sensors: its "eyes" are image sensors, and RGBD sensing devices capable of measuring physical depth, such as TOF and structured-light sensors, are widely used in robot products. The mowing robot acquires image information of the surrounding environment through the RGBD sensor, then identifies and understands the content of the image, such as ground, obstacles, and plants, through image processing, and finally completes functions such as object recognition, obstacle judgment, and path planning.
However, existing RGBD sensors are strongly affected by outdoor illumination. Under backlight and strong-light conditions in particular, an RGBD sensor can hardly perceive the grassland surface and therefore struggles to output valid depth information, which degrades the obstacle avoidance and navigation performance of the mowing robot.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the invention is to provide a mower ground sensing method, device, equipment, and storage medium, aiming to solve the technical problem that existing RGBD sensors, affected by strong illumination, have difficulty sensing valid ground depth information, which degrades the obstacle avoidance and navigation performance of the mowing robot.
To achieve the above object, the present invention provides a method for sensing the ground of a mower, comprising the steps of:
performing depth estimation on a grassland ground image of the grassland where the mower is located based on multi-view geometry, to obtain a ground depth image corresponding to the grassland ground image;
performing time-frequency domain processing on inertial sensing data of the mower using the grassland ground image, to obtain the camera scale of the camera of the mower;
performing point cloud estimation on the ground depth image and the camera scale, to obtain ground depth information corresponding to the grassland ground image;
and fusing the ground depth information with an environment map corresponding to the grassland, to obtain a three-dimensional flatness perception curve of the grassland.
Optionally, performing depth estimation on the grassland ground image of the grassland where the mower is located based on multi-view geometry to obtain the ground depth image corresponding to the grassland ground image includes:
acquiring the grassland ground image of the grassland where the mower is located through a camera of the mower;
performing image processing on the grassland ground image based on multi-view geometry to obtain a basic imaging model corresponding to the grassland ground image;
and performing parallax processing on the basic imaging model through a triangle similarity principle to obtain a ground depth image corresponding to the grassland ground image.
Optionally, the parallax processing is performed on the basic imaging model by using a triangle similarity principle, so as to obtain a ground depth image corresponding to the grassland ground image, including:
performing parallax processing on the basic imaging model through a triangle similarity principle to obtain ground parallax data corresponding to the grassland ground image;
acquiring camera parameters of the camera, wherein the camera parameters comprise a baseline parameter and a focal length parameter;
and carrying out depth estimation on the ground parallax data, the baseline parameter and the focal length parameter according to a preset depth formula to obtain a ground depth image corresponding to the grassland ground image.
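The preset depth formula referenced above is, in standard stereo geometry, the similar-triangles relation z = f·b/d (focal length f, baseline b, disparity d). A minimal sketch with hypothetical parameter values, not taken from the patent:

```python
def disparity_to_depth(d_px, f_px, b_m):
    """Stereo depth from similar triangles: z = f * b / d.

    f_px: focal length in pixels; b_m: baseline in metres;
    d_px: disparity in pixels. Larger disparity means closer ground.
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * b_m / d_px

# Hypothetical camera: f = 640 px, baseline = 0.125 m; a ground point
# observed with 40 px disparity lies 2 m from the camera.
z = disparity_to_depth(40, f_px=640, b_m=0.125)
```

The inverse relationship between depth and disparity is why depth resolution degrades for distant ground: a one-pixel disparity error causes a larger depth error far away than nearby.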
Optionally, performing time-frequency domain processing on the inertial sensing data of the mower using the grassland ground image to obtain the camera scale of the camera of the mower includes:
acquiring inertial sensing data of an inertial sensing device of the mower, the inertial sensing data comprising gyroscope data;
extracting camera sensing data corresponding to the grassland ground image;
performing time-frequency domain synchronization on the gyroscope data and the camera sensing data to obtain synchronization parameters;
and carrying out scale estimation on the camera according to the synchronization parameters to obtain the camera scale of the camera.
Optionally, the synchronization parameters include a synchronization time parameter and a synchronization angular velocity parameter, and the performing time-frequency domain synchronization on the gyroscope data and the camera sensing data to obtain the synchronization parameters includes:
extracting the gyroscope timestamps and the gyroscope angular velocity from the gyroscope data, and extracting the camera timestamps and the camera visual angular velocity from the camera sensing data;
performing time-domain synchronization on the gyroscope timestamps and the camera timestamps to obtain a synchronization time parameter;
and performing frequency-domain synchronization on the gyroscope angular velocity and the camera visual angular velocity to obtain a synchronization angular velocity parameter.
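The time-domain synchronization step can be illustrated by searching for the lag that maximizes the cross-correlation of the two angular-velocity streams. A minimal numpy sketch under the assumption that both streams are resampled onto a common uniform grid; function and variable names are illustrative, not from the patent:

```python
import numpy as np

def estimate_time_offset(gyro_w, cam_w, dt):
    """Return the delay (seconds) of the camera stream relative to the
    gyroscope: positive means the camera angular velocity lags the gyro."""
    g = gyro_w - gyro_w.mean()
    c = cam_w - cam_w.mean()
    corr = np.correlate(c, g, mode="full")    # scan every relative lag
    lag = int(np.argmax(corr)) - (len(g) - 1)
    return lag * dt

# Synthetic check: a single turning manoeuvre seen by the gyro, and the
# same signal delayed by 5 samples (0.05 s at 100 Hz) in the camera stream.
t = np.arange(0, 2, 0.01)
gyro = np.exp(-((t - 1.0) ** 2) / 0.02)   # angular-velocity bump
cam = np.roll(gyro, 5)                    # camera stream, delayed copy
offset = estimate_time_offset(gyro, cam, 0.01)
```

The recovered offset can then be used to resample one stream onto the other's timeline before the frequency-domain comparison.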
Optionally, the performing scale estimation on the camera according to the synchronization parameter to obtain a camera scale of the camera includes:
transforming the gyroscope angular velocity and the camera visual angular velocity from the time domain into the frequency domain by Fourier transform according to the synchronization time parameter;
minimizing the difference between the amplitudes of the gyroscope angular velocity and the camera visual angular velocity in the frequency domain to obtain a minimized amplitude;
and performing scale estimation on the synchronization angular velocity parameter according to the minimized amplitude to obtain the camera scale of the camera.
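The frequency-domain scale estimation can be sketched as a least-squares fit between the magnitude spectra of the two angular-velocity streams; comparing magnitudes makes the fit insensitive to any residual time offset, since a pure delay only changes phase. All names here are illustrative, not the patent's implementation:

```python
import numpy as np

def estimate_scale(gyro_w, cam_w):
    """Least-squares scale s minimising || s*|FFT(cam)| - |FFT(gyro)| ||^2,
    i.e. the amplitude-matching step of the frequency-domain comparison."""
    g_amp = np.abs(np.fft.rfft(gyro_w))
    c_amp = np.abs(np.fft.rfft(cam_w))
    return float(np.dot(c_amp, g_amp) / np.dot(c_amp, c_amp))

# Synthetic check: the camera-derived angular velocity equals the gyro
# signal scaled down by 4 and delayed by 3 samples.
t = np.arange(0, 2, 0.01)
gyro_w = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4.0 * t)
cam_w = np.roll(gyro_w, 3) / 4.0
scale = estimate_scale(gyro_w, cam_w)
```

The closed-form scale follows from setting the derivative of the squared amplitude error to zero; real data would first band-limit both spectra to frequencies where the mower actually rotates.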
Optionally, the estimating the point cloud of the ground depth image and the camera scale to obtain ground depth information corresponding to the grassland ground image includes:
performing point cloud processing on the ground depth image to obtain a three-dimensional space voxel corresponding to the ground depth image;
performing coordinate conversion on the three-dimensional space voxels according to the camera scale to obtain image coordinate values;
truncating the image coordinate values through a truncated signed distance field (TSDF) to obtain a distance value corresponding to each three-dimensional space voxel;
and determining the ground depth information corresponding to the grassland ground image according to the distance value.
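The point-cloud step can be sketched as pinhole back-projection of the depth image, scaled to metric units by the estimated camera scale, followed by a truncated-signed-distance evaluation per voxel. The intrinsics and all names below are hypothetical illustration values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, scale=1.0):
    """Back-project a depth image into 3-D camera-frame points, applying
    the estimated camera scale to recover metric units."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth * scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def tsdf_value(voxel_depth, surface_depth, trunc=0.1):
    """Truncated signed distance of a voxel to the observed surface:
    positive in front of the surface, negative behind, clipped to +/-trunc."""
    return float(np.clip(surface_depth - voxel_depth, -trunc, trunc))

depth = np.full((4, 4), 2.0)   # toy depth image: flat ground 2 m away
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0, scale=1.0)
```

Averaging TSDF values over many frames is what lets the fused surface estimate stay stable even when individual depth images are noisy.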
In addition, in order to achieve the above object, the present invention also provides a ground sensing device for a mower, the device comprising:
the depth estimation module is used for carrying out depth estimation on a grassland ground image of the grassland where the mower is positioned based on multi-view geometry, and obtaining a ground depth image corresponding to the grassland ground image;
the time-frequency domain processing module is used for performing time-frequency domain processing on the inertial sensing data of the mower through the grassland ground image to obtain the camera scale of the camera of the mower;
the depth correction module is used for carrying out point cloud estimation on the ground depth image and the camera scale to obtain ground depth information corresponding to the grassland ground image;
and the three-dimensional curve module is used for fusing the ground depth information with the environment map corresponding to the grassland to obtain a three-dimensional flatness perception curve of the grassland.
In addition, to achieve the above object, the present invention also proposes a mower ground sensing device, said device comprising: a memory, a processor, and a mower ground awareness program stored on the memory and executable on the processor, the mower ground awareness program configured to implement the steps of the mower ground awareness method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a mower ground sensing program which, when executed by a processor, implements the steps of the mower ground sensing method as described above.
In the invention, depth estimation is first performed on a grassland ground image of the grassland where the mower is located, based on multi-view geometry, to obtain a ground depth image corresponding to the grassland ground image. Time-frequency domain processing is then performed on the inertial sensing data of the mower using the grassland ground image, to obtain the camera scale of the camera of the mower. Point cloud estimation is then performed on the ground depth image and the camera scale, to obtain ground depth information corresponding to the grassland ground image. Finally, the ground depth information is fused with an environment map corresponding to the grassland, to obtain a three-dimensional flatness perception curve of the grassland. According to the invention, depth estimation is performed on the grassland based on the grassland ground image to obtain a ground depth image, which is then corrected according to the camera scale to construct a three-dimensional flatness perception curve of the grassland. This avoids the navigation and obstacle avoidance problems caused by depth failure of conventional RGBD sensors under backlight and strong light, so the ground sensing precision of the mower during grassland operation can be greatly improved based on the flatness perception curve.
Drawings
FIG. 1 is a schematic diagram of a mower floor awareness device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of a method for sensing the ground of a mower according to the present invention;
FIG. 3 is a schematic flow chart of a second embodiment of a method for sensing the ground of a mower according to the present invention;
FIG. 4 is a schematic flow chart of a method for estimating depth of a lawn in a second embodiment of a lawn mower ground perception method according to the present invention;
fig. 5 is a schematic view of a scene of parallax processing according to triangle similarity principle in a second embodiment of the mower ground sensing method of the present invention;
FIG. 6 is a schematic flow chart of a third embodiment of a method for sensing the ground of a mower according to the present invention;
FIG. 7 is a schematic flow chart of calibrating a ground depth image according to a third embodiment of the ground sensing method of the mower of the present invention;
fig. 8 is a block diagram of a first embodiment of a mower floor sensing device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mower ground sensing device in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the mower ground sensing device may comprise: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 enables communication connections among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable non-volatile memory (Non-Volatile Memory, NVM), such as disk storage. The memory 1005 may optionally also be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not limit the mower ground sensing device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a mower floor awareness program may be included in the memory 1005 as one type of storage medium.
In the mower ground sensing device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 invokes the mower ground sensing program stored in the memory 1005 and executes the mower ground sensing method provided by the embodiments of the present invention.
The embodiment of the invention provides a ground sensing method of a mower, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the ground sensing method of the mower.
In order to overcome the problems described above, the mower ground sensing method provided by the invention addresses, in three aspects, the loss of depth information caused by RGBD sensor failure and the resulting impairment of the obstacle avoidance and navigation functions.
In this embodiment, the mower ground sensing method includes the following steps:
Step S10: and carrying out depth estimation on a grassland ground image of the grassland where the mower is positioned based on the multi-view geometry, and obtaining a ground depth image corresponding to the grassland ground image.
It should be noted that the execution body of the method of this embodiment may be a computing service device with depth estimation, time-frequency domain processing, and ground sensing functions, such as an intelligent mower or a sweeping robot, or other electronic devices capable of implementing the same or similar functions, such as the above-mentioned mower ground sensing device; this embodiment is not limited in this respect. The mower ground sensing device is arranged inside the mower and senses the lawn on which the mower is located, thereby avoiding the navigation and obstacle avoidance problems caused by depth failure of conventional RGBD sensors under backlight and strong light, and improving the ground sensing precision of the mower during grassland operation. This embodiment and the following embodiments are described with reference to the above-mentioned mower ground sensing device (referred to simply as the sensing device).
It is understood that a mower is a robot device capable of automatically trimming a lawn. A camera, an IMU module, and other built-in sensor devices can be mounted inside the mower for external environment detection, and the mower can autonomously move within a predetermined area and cut the grass with its blades.
It should be understood that multi-view geometry is a technique that uses the geometric relationships between multiple viewpoints or cameras of the mower to infer the depth of objects in an image. Depth information of the grassland ground image can be obtained from multiple viewpoints using multi-view geometry.
It is understood that the grass ground image is an image of the grass surface taken or acquired by the mower through the camera.
The ground depth image is an image obtained by performing depth estimation on the grassland ground image through multi-view geometry to infer the depth information of the grassland surface. Specifically, the depth of each pixel in the grassland ground image can be calculated from the triangle similarity principle and camera-internal parameters such as the focal length, yielding the ground depth image.
In a specific implementation, after the mower captures a grassland ground image of the lawn with its camera, in the first aspect the sensing device may perform depth estimation on the grassland ground image based on multi-view geometry. Specifically, the depth of each pixel in the grassland ground image may be calculated from the triangle similarity principle and camera-internal parameters such as the focal length, to obtain the ground depth image.
Step S20: and performing time-frequency domain processing on the inertial sensing data of the mower through the grassland ground image to obtain the camera scale of the camera of the mower.
It should be noted that the inertial sensing data is raw measurement data acquired by an IMU inertial sensor device inside the mower. The IMU inertial measurement device is a device integrating a plurality of inertial sensors, and can measure the motion state of an object, and comprises a gyroscope and an accelerometer.
In particular, the inertial sensing data may consist of raw measurement data of gyroscopes and accelerometers acquired by the IMU device. The motion state of the mower can be obtained in real time through the inertial sensing data, so that the motion tracking of the mower is realized, and the ground sensing function of the mower is improved.
It is understood that the camera scale is the proportional relationship between the size of an object in the mower's camera image and the object's actual size. Through the camera's intrinsic parameters, the real-world dimensions of an object can be mapped into the camera pixel coordinate system for accurate depth estimation.
Step S30: and performing point cloud estimation on the ground depth image and the camera scale to obtain ground depth information corresponding to the grassland ground image.
Point cloud estimation on the ground depth image and the camera scale refers to converting the depth image obtained by the camera, together with the camera's intrinsic and extrinsic parameters, into dense point cloud data, so as to determine the ground depth information corresponding to the grassland ground image.
In a specific implementation, in the second aspect, the ground depth image is further calibrated after it is obtained. The sensing device performs time-frequency domain processing on the inertial sensing data of the mower using the grassland ground image, obtaining the camera scale of the camera from the introduced inertial sensing data for accurate depth estimation. Point cloud estimation is then performed on the ground depth image and the camera scale, converting them into dense point cloud data to determine the ground depth information corresponding to the grassland ground image.
Step S40: and fusing the ground depth information with an environment map corresponding to the grassland to obtain a planeness three-dimensional perception curve of the grassland.
The environment map is a map model that represents the spatial and geographic information of the lawn where the mower is located. The mower presents information such as objects, structures, and features in the lawn environment in a visual manner by collecting and integrating sensor data, geographic information, and other related data. By building an environment map, functions such as precise positioning and path planning of the mower are realized.
It is understood that the three-dimensional flatness perception curve is a curve describing the ground depth in the three-dimensional space of the grassland. Through the flatness perception curve, the mower avoids the navigation and obstacle avoidance problems caused by depth failure of conventional RGBD sensors under backlight and strong light, thereby greatly improving the ground sensing precision of grassland operation.
In a specific implementation, in the third aspect, the ground depth information is obtained after calibration. The sensing device visually presents information such as objects, structures, and features in the grassland environment by collecting and integrating sensor data, geographic information, and other related data, and builds an environment map. The ground depth information is then fused with the environment map corresponding to the grassland to obtain a complete three-dimensional flatness perception curve.
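The fusion step can be illustrated by accumulating the metric ground points into a 2-D elevation grid of the environment map and averaging the heights per cell; a flatness profile is then simply a slice through this grid. Grid size, resolution, and all names below are arbitrary illustration values, not from the patent:

```python
import numpy as np

def fuse_into_elevation_map(points, grid_res=0.1, grid_size=10):
    """Average the height z of every ground point (x, y, z) falling into
    each cell of a grid_size x grid_size map with grid_res-metre cells;
    cells with no observation are NaN."""
    heights = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))
    for x, y, z in points:
        i, j = int(x / grid_res), int(y / grid_res)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            heights[i, j] += z
            counts[i, j] += 1
    return np.where(counts > 0, heights / np.maximum(counts, 1), np.nan)

# Toy fusion: two observations of the same cell are averaged; a flatness
# profile along one row is a slice of the resulting map.
pts = np.array([[0.05, 0.05, 0.02], [0.06, 0.04, 0.04], [0.35, 0.15, 0.10]])
emap = fuse_into_elevation_map(pts)
profile = emap[:, 0]   # heights along the x axis near y = 0
```

Averaging per cell is the simplest fusion rule; a production system would more likely weight observations by depth uncertainty or fuse incrementally frame by frame.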
After the mower captures a grassland ground image of the lawn with its camera, the loss of depth information caused by RGBD sensor failure, and the resulting impairment of obstacle avoidance and navigation, are addressed in three aspects. In the first aspect, the sensing device performs depth estimation on the grassland ground image based on multi-view geometry; specifically, the depth of each pixel in the grassland ground image is calculated from the triangle similarity principle and camera-internal parameters such as the focal length, to obtain a ground depth image. In the second aspect, the ground depth image is calibrated after it is obtained: the sensing device performs time-frequency domain processing on the inertial sensing data of the mower using the grassland ground image and obtains the camera scale of the camera from the introduced inertial sensing data for accurate depth estimation; point cloud estimation is then performed on the ground depth image and the camera scale, converting them into dense point cloud data to determine the ground depth information corresponding to the grassland ground image. In the third aspect, the ground depth information is obtained after calibration: the sensing device visually presents information such as objects, structures, and features in the grassland environment by collecting and integrating sensor data, geographic information, and other related data, and builds an environment map. Finally, the ground depth information is fused with the environment map corresponding to the grassland to obtain a complete three-dimensional flatness perception curve.
In this embodiment, depth estimation is performed on the grassland based on the grassland ground image to obtain a ground depth image, which is then corrected according to the camera scale to construct a three-dimensional flatness perception curve of the grassland. This avoids the navigation and obstacle avoidance problems caused by depth failure of conventional RGBD sensors under backlight and strong light, so the ground sensing precision of the mower during grassland operation can be greatly improved based on the flatness perception curve.
Referring to fig. 2 and 3, fig. 3 is a schematic flow chart of a second embodiment of a method for sensing the ground of a mower according to the present invention.
Based on the first embodiment, in this embodiment, the step S10 includes:
step S11: and acquiring a lawn ground image of the lawn where the mower is located through a camera of the mower.
Step S12: and performing image processing on the grassland ground image based on multi-view geometry to obtain a basic imaging model corresponding to the grassland ground image.
It should be noted that the basic imaging model is a simplified model describing the depth relationship between the mower camera and the grassland surface in the grassland ground image.
Step S13: and performing parallax processing on the basic imaging model through a triangle similarity principle to obtain a ground depth image corresponding to the grassland ground image.
It should be noted that the triangle similarity principle derives data from the similarity in shape of two triangles, using similarity criteria such as the AAA, AA, and SAS similarity theorems. The ground depth corresponding to the grassland ground image is measured and calculated according to the triangle similarity principle.
It will be appreciated that parallax processing is based on the parallax effect of the mower's multiple cameras: the depth of the grassland surface is inferred by comparing the differences between images observed from different angles by two or more cameras.
In a specific implementation, referring to fig. 4, fig. 4 is a schematic flow chart of depth estimation of a lawn in the second embodiment of the mower ground sensing method according to the present invention. After capturing a grassland ground image of the lawn, the camera of the mower inputs the image to the sensing device. The sensing device then performs a series of preliminary processing steps on the ground image, including matching cost computation, cost aggregation, disparity computation, and disparity optimization; the disparity value of each pixel in the image can be determined through matching cost computation and cost aggregation. Image processing is then performed on the grassland ground image based on multi-view geometry to obtain a basic imaging model describing the depth relationship between the mower camera and the grassland surface in the image. Finally, disparity computation and disparity optimization are performed on the basic imaging model according to the triangle similarity principle, the ground depth corresponding to the grassland ground image is measured and calculated, and the ground depth image corresponding to the grassland ground image is output. An accurate ground depth image can thus be obtained through parallax processing.
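The matching-cost and disparity-computation steps of fig. 4 can be sketched with the simplest variant, sum-of-absolute-differences (SAD) block matching with winner-takes-all selection; real systems add cost aggregation and disparity optimization on top. This is an illustrative toy, not the patent's implementation:

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=3):
    """Per-pixel SAD matching cost over candidate disparities, followed by
    winner-takes-all selection (aggregation/optimisation steps omitted)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right image is the left image shifted by
# 2 px, so the true disparity is 2 wherever it is observable.
rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]
disp = block_match_disparity(left, right)
```

The per-pixel disparity map produced here is what the triangle-similarity depth formula then converts into the ground depth image.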
Further, in the present embodiment, step S13 includes: performing parallax processing on the basic imaging model through a triangle similarity principle to obtain ground parallax data corresponding to the grassland ground image; acquiring camera parameters of the camera, wherein the camera parameters comprise a baseline parameter and a focal length parameter; and carrying out depth estimation on the ground parallax data, the baseline parameter and the focal length parameter according to a preset depth formula to obtain a ground depth image corresponding to the grassland ground image.
The ground parallax data is the parallax information between the camera and the ground, obtained by performing parallax processing on the basic imaging model according to the triangle similarity principle.
It is understood that the baseline parameter is the distance between the two cameras of a binocular (stereo) camera. For example, a binocular camera captures images from different viewing angles simultaneously through its left and right cameras, and the baseline parameter is the distance between these two cameras. The size of the baseline has an important influence on the stereoscopic effect: a larger baseline can acquire more depth information.
It should be understood that the focal length parameter is an important indicator of the imaging capability of the camera lens, representing the focal length of the camera. The focal length determines the field of view, depth-of-field range, image distortion, and so on of the camera's imaging. With the baseline parameter and the focal length parameter, depth estimation can be performed on the camera's imaging, thereby improving the accuracy of the ground depth image.
The preset depth formula is a formula preset by the sensing device, described here in terms of the focal length parameter f, the baseline parameter b, the ground parallax data d, and the ground depth z. Referring to fig. 5, fig. 5 is a schematic view of a scene of parallax processing according to the triangle similarity principle in a second embodiment of the mower ground sensing method of the present invention. Specifically, the relationship between the parameters is:

z = (b · f) / d, where d = x_l − x_r

In a practical implementation, the depth estimation is performed on the grassland ground based on multi-view geometry, and the basic imaging model is shown in fig. 5. The distance between camera L and camera R is the baseline parameter b of the binocular camera, both cameras have focal length parameter f, and their depth to the grassland ground is z. Based on the imaging model and the triangle similarity principle, the relationship among the focal length parameter f, the baseline parameter b, the ground parallax data d, and the ground depth z is derived in an x–z coordinate system, where x_l and x_r are the image projections of the ground point P = (x, z), and the side lengths x and x − b are determined by the triangle similarity principle with P as the triangle vertex. Because b and f are internal characteristics of the cameras, once the ground parallax data d is obtained, the depth information can be computed as z = (b × f) / d, and the ground depth image corresponding to the grassland ground image is finally determined from this depth information.
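Under the similar-triangles relation z = (b · f) / d, converting a disparity map to depth is essentially a one-line operation. The sketch below is illustrative; the baseline and focal-length default values are placeholders, not parameters from the patent.

```python
import numpy as np

def depth_from_disparity(disparity, baseline_m=0.06, focal_px=700.0):
    """Depth from disparity via similar triangles: z = (b * f) / d.
    Pixels with zero disparity are treated as being at infinity."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = baseline_m * focal_px / d[valid]
    return depth
```

For example, with b = 0.06 m and f = 700 px, a disparity of 7 px corresponds to a depth of 6 m.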
After acquiring a grassland ground image of the lawn, the camera of the mower of the present embodiment inputs the image to the sensing device. The sensing device then performs a series of preliminary processing steps on the ground image, including matching cost calculation, cost aggregation, parallax calculation, and parallax optimization. Through matching cost calculation and cost aggregation, the disparity value of each pixel in the image can be determined. Image processing is then performed on the grassland ground image based on multi-view geometry to obtain a basic imaging model describing the depth relation between the mower camera and the grassland ground in the grassland ground image. Finally, parallax calculation and parallax optimization are performed on the basic imaging model according to the triangle similarity principle, the ground depth corresponding to the grassland ground image is measured, and a ground depth image corresponding to the grassland ground image is output. In this way, an accurate ground depth image can be obtained through parallax processing.
Referring to fig. 2 and 6, fig. 6 is a schematic flow chart of a third embodiment of a method for sensing the ground of a mower according to the present invention.
Based on the above embodiments, in this embodiment, the step S20 includes:
step S21: inertial sensing data of an inertial sensing device of the mower is acquired, the inertial sensing data including gyroscope data.
The gyroscope data refers to the measurements of rotation and angular velocity obtained by the gyroscope sensor, including the gyroscope angular velocity and the gyroscope time.
Step S22: and extracting camera sensing data corresponding to the grassland ground image.
The camera sensing data refers to the time-related measurement data obtained when the camera of the mower acquires image information through SLAM, including the visual angular velocity and time of the camera.
Step S23: and carrying out time-frequency domain synchronization on the gyroscope data and the camera sensing data to obtain synchronization parameters.
Since the data acquired by the camera and the IMU sensor of the mower are in two different coordinate systems, time domain synchronization must first be performed on the gyroscope time and the visual angular velocity time of the camera to obtain the synchronized time. Frequency domain synchronization is then performed on the gyroscope angular velocity and the visual angular velocity of the camera, and finally the scale of the camera is estimated in the synchronized frequency domain.
Step S24: and carrying out scale estimation on the camera according to the synchronization parameters to obtain the camera scale of the camera.
In a specific implementation, the sensing device may acquire inertial sensing data of an inertial sensing device of the mower, including the gyroscope angular velocity and the gyroscope time; it then extracts the camera sensing data corresponding to the grassland ground image, including the visual angular velocity and time of the camera. Because the data acquired by the camera and by the IMU sensor of the mower are in two different coordinate systems, time domain synchronization is performed on the gyroscope time and the visual angular velocity time of the camera to obtain the synchronized time. Frequency domain synchronization is then performed on the gyroscope angular velocity and the visual angular velocity of the camera, and finally scale estimation is performed on the camera in the synchronized frequency domain to obtain the camera scale of the camera.
Further, the synchronization parameters include a synchronization time parameter and a synchronization angular velocity parameter, and in this embodiment, step S23 includes: extracting the gyroscope time and the gyroscope angular velocity from the gyroscope data, and extracting the camera visual angular velocity time and the camera visual angular velocity from the camera sensing data; performing time domain synchronization on the gyroscope time and the camera visual angular velocity time to obtain a synchronization time parameter; and performing frequency domain synchronization on the gyroscope angular velocity and the camera visual angular velocity to obtain a synchronized angular velocity parameter.
The gyroscope time is the time information with which the gyroscope sensor measures, records, and reports data; it represents the timestamp of the data measured by the gyroscope sensor as the mower travels.
It is understood that the gyroscope angular velocity is the angular velocity of rotation recorded by the gyroscope sensor as the mower travels.
In a practical implementation, inertial sensing data may be introduced to obtain the scale information of the camera. Specifically, the gyroscope data of the IMU and the camera sensing data acquired by SLAM can be aligned in the time-frequency domain, and the estimated depth can then be fused with the camera pose carrying scale information to obtain the calibrated ground depth. Referring to fig. 7, fig. 7 is a schematic flow chart of calibrating a ground depth image according to a third embodiment of the ground sensing method of the mower of the present invention.
Time domain synchronization is the process of solving for the minimum time offset between the IMU data and the camera. Specifically, the offset that minimizes the squared difference between the gyroscope time of the IMU and the angular velocity time obtained by SLAM is the optimal time offset; the synchronized time is then obtained by adding this offset to the smaller of the gyroscope time and the SLAM angular velocity time.
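The time-offset search described above can be sketched as a grid search that minimizes the squared difference between the IMU angular speed and the time-shifted visual angular speed. The search range, sampling, and function name here are assumptions for illustration only.

```python
import numpy as np

def best_time_offset(t, gyro_w, vis_w, search=None):
    """Grid-search the timestamp offset t_d minimizing the squared difference
    between the gyroscope angular speed and the shifted visual angular speed."""
    if search is None:
        search = np.linspace(-0.1, 0.1, 201)  # candidate offsets in seconds
    errs = []
    for td in search:
        # value of the visual signal at time (t - td), by linear interpolation
        vis_shifted = np.interp(t, t + td, vis_w)
        errs.append(np.mean((gyro_w - vis_shifted) ** 2))
    return float(search[int(np.argmin(errs))])
```

In practice the offset can also be refined by interpolating the error curve around the grid minimum, rather than relying on the grid resolution alone.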
Frequency domain synchronization is required because the visual angular velocity and the gyroscope angular velocity are expressed in two different coordinate systems, so the parameters of the vision and the IMU must be synchronized. They are solved as follows:

min over {R_cb, b_g, t_d} of Σ_t ‖ ω_v(t) − R_cb · ω_b(t + t_d) − b_g ‖²

where R_cb represents the rotation matrix from the IMU coordinate system to the camera coordinate system, b_g is the zero bias of the gyroscope in the camera coordinate system, t_d is the optimal timestamp offset, ω_v denotes the visual angular velocity, ω_b denotes the angular velocity of the IMU, and t denotes time.
Finally, the camera may be scale estimated in the synchronized frequency domain to obtain a camera scale for the camera.
Further, in the present embodiment, step S24 includes: transforming the gyroscope angular velocity and the camera visual angular velocity from the time domain into the frequency domain by Fourier transform according to the synchronization time parameter; minimizing the amplitude difference between the gyroscope angular velocity and the camera visual angular velocity in the frequency domain to obtain a minimized amplitude; and performing scale estimation on the synchronized angular velocity parameter according to the minimized amplitude to obtain the camera scale of the camera.
It should be noted that the Fourier transform is a mathematical tool that decomposes a function or signal (represented in the time domain) into a superposition of sine and cosine functions (frequency components) in the frequency domain. The gyroscope angular velocity and the camera visual angular velocity in the time domain may be transformed into the frequency domain by the Fourier transform.
In a practical implementation, the sensing device may estimate the scale of the camera in the frequency domain after completing the frequency domain synchronization. Specifically, the vision-based acceleration and the IMU-based acceleration in the camera coordinate system are related in the time domain and transformed into the frequency domain by the Fourier transform, as follows:

s · a_v(t) = a_b(t) − R_cb · g − b_a

Taking the Fourier transform of both sides, the constant gravity and bias terms contribute only to the zero-frequency component, so that for ω ≠ 0:

s · A_v(ω) = A_b(ω)

where s represents the initial scale factor of the camera; b_a denotes the zero bias of the accelerometer; A_v(ω) is the representation of the vision-based acceleration in the camera coordinate system over the frequency domain; A_b(ω) is the representation of the IMU-based acceleration in the camera coordinate system over the frequency domain; a_v and a_b respectively denote the visual and IMU accelerations in the camera coordinate system; R_cb represents the rotation matrix from the IMU coordinate system to the camera coordinate system; and g denotes the gravitational acceleration.
The camera scale is then estimated in the frequency domain by minimizing the difference in magnitude between the vision and IMU accelerations, as follows:

s* = argmin over s of Σ_{ω≠0} ( s · |A_v(ω)| − |A_b(ω)| )²
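A minimal sketch of this frequency-domain scale estimate: because gravity and the accelerometer bias are constant, they fall entirely into the zero-frequency bin, so a least-squares fit of s over the remaining bins recovers the scale in closed form. The function name and the test signals are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def estimate_scale(vis_acc, imu_acc):
    """Least-squares camera scale s minimizing sum_w (s*|A_v(w)| - |A_b(w)|)^2,
    with the DC bin dropped so the constant gravity/bias terms cancel."""
    Av = np.abs(np.fft.rfft(vis_acc))[1:]  # magnitudes, skipping omega = 0
    Ab = np.abs(np.fft.rfft(imu_acc))[1:]
    # closed-form minimizer of the quadratic in s
    return float(np.sum(Av * Ab) / np.sum(Av ** 2))
```

Because the objective is quadratic in s, the minimizer is obtained directly without iteration; dropping the DC bin is what makes the constant gravity and bias terms vanish from the fit.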
Further, in the present embodiment, step S30 includes: performing point cloud processing on the ground depth image to obtain the three-dimensional space voxels corresponding to the ground depth image; performing coordinate conversion on the three-dimensional space voxels according to the camera scale to obtain image coordinate values; truncating the image coordinate values with a truncated signed distance field to obtain a distance value corresponding to each three-dimensional space voxel; and determining the ground depth information corresponding to the grassland ground image according to the distance values.
A three-dimensional voxel is a volume element in three-dimensional space. The ground depth map may be converted into voxels for point cloud processing to obtain the ground depth information.
It is appreciated that the truncated signed distance field (TSDF) is a data structure for representing the boundary of an object in three-dimensional space. The underlying signed distance field (SDF) stores the signed distance of each point to the object surface: a positive distance indicates that the point is outside the object, a negative distance indicates that the point is inside the object, and zero indicates that the point is on the object surface. By truncating the signed distance field, ground depth maps from multiple views can be fused, and fast voxel data operations are supported.
In practical implementation, as shown in fig. 7, the sensing device may first perform point cloud processing on the ground depth image to obtain the three-dimensional space voxels corresponding to the ground depth image. Point cloud estimation is then performed in combination with TSDF theory, solving for the truncated value of the distance from each voxel center to the surface, where each voxel holds a corrected distance D and a weight W. After the i-th frame point cloud is acquired, the specific process is as follows:
(1) Take the coordinates (x, y, z) of each voxel in the global coordinate system, and then, using the scale-aware camera pose from SLAM and the camera transformation matrix, convert the voxel from the global coordinate system to the camera coordinate system to obtain V(x, y, z).
(2) Project V(x, y, z) into the image coordinate system using the camera intrinsic matrix to obtain the image coordinates (u, v).
(3) If the depth value D(u, v) of the i-th frame depth image is not 0, compare D(u, v) with the depth of the voxel camera coordinates V(x, y, z): if D(u, v) > V(x, y, z), the voxel is closer to the camera than the measured surface, i.e., outside the estimated object surface; otherwise, the voxel is farther from the camera, i.e., inside the estimated surface.
(4) Finally, update the distance value D and the weight W of the voxel according to the result of step (3).
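Steps (1)–(4) can be condensed, for a single voxel, into the standard TSDF weighted-average update. This is a generic sketch of the technique rather than the patent's exact rule; the truncation distance and the per-frame weight are assumed values.

```python
def tsdf_update(D, W, voxel_cam_depth, measured_depth, trunc=0.1, w_frame=1.0):
    """One TSDF integration step for a single voxel.
    voxel_cam_depth: depth of the voxel center in the camera frame (from V(x, y, z)).
    measured_depth:  depth image value D(u, v) at the voxel's projection."""
    if measured_depth <= 0:
        return D, W                           # step (3): no valid measurement
    sdf = measured_depth - voxel_cam_depth    # > 0: voxel in front of the surface
    d = max(-trunc, min(trunc, sdf))          # truncate to [-trunc, trunc]
    D_new = (D * W + d * w_frame) / (W + w_frame)  # step (4): weighted running average
    return D_new, W + w_frame
```

Over many frames the averaged distance D converges toward the surface's signed distance, and the zero-crossing of D across neighboring voxels gives the estimated ground surface.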
The ground depth information corresponding to the grassland ground image is determined according to the distance value D and the weight W. Finally, the poses in which the vision is aligned with the IMU are stored in a map to obtain a pose curve for each track; after an environment map is constructed over the working area, the pose curves are overlaid on it to obtain a complete flatness three-dimensional perception curve. This solves the problems of depth failure of the RGBD sensor, and the resulting impact on navigation and obstacle avoidance, under backlight and strong-light conditions, while the flatness three-dimensional perception curve greatly improves the ground sensing accuracy of grassland operation.
The sensing device of this embodiment can acquire inertial sensing data of the inertial sensing device of the mower, including the gyroscope angular velocity and the gyroscope time, and then extract the camera sensing data corresponding to the grassland ground image, including the visual angular velocity and time of the camera. Because the data acquired by the camera and by the IMU sensor of the mower are in two different coordinate systems, time domain synchronization is performed on the gyroscope time and the visual angular velocity time of the camera to obtain the synchronized time. Frequency domain synchronization is then performed on the gyroscope angular velocity and the visual angular velocity of the camera, and scale estimation is performed on the camera in the synchronized frequency domain to obtain the camera scale of the camera. Point cloud processing is then performed on the ground depth image to obtain the three-dimensional space voxels corresponding to the ground depth image, and point cloud estimation is performed in combination with TSDF theory, solving for the truncated value of the distance from each voxel center to the surface, where each voxel holds a corrected distance D and a weight W. After the i-th frame point cloud is acquired, the distance value D and the weight W of each voxel can be updated according to the above process. The ground depth information corresponding to the grassland ground image is determined according to the distance value D and the weight W. Finally, the poses in which the vision is aligned with the IMU are stored in a map to obtain a pose curve for each track; after the environment map is constructed over the working area, the pose curves are overlaid on it to obtain a complete flatness three-dimensional perception curve.
This therefore solves the problems of depth failure of the RGBD sensor, and the resulting impact on navigation and obstacle avoidance, under backlight and strong-light conditions, while the flatness three-dimensional perception curve greatly improves the ground sensing accuracy of grassland operation.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium is stored with a mower ground sensing program, and the mower ground sensing program realizes the steps of the mower ground sensing method when being executed by a processor.
Referring to fig. 8, fig. 8 is a block diagram of a first embodiment of a ground sensing device for a mower according to the present invention.
As shown in fig. 8, a mower ground sensing device according to an embodiment of the present invention includes:
the depth estimation module 801 is configured to perform depth estimation on a grass ground image of a grass where a mower is located based on multi-view geometry, and obtain a ground depth image corresponding to the grass ground image;
a time-frequency domain processing module 802, configured to perform time-frequency domain processing on inertial sensing data of the mower through the lawn ground image, to obtain a camera scale of a camera of the mower;
the depth correction module 803 is configured to perform point cloud estimation on the ground depth image and the camera scale, and obtain ground depth information corresponding to the grassland ground image;
and the three-dimensional curve module 804 is configured to fuse the ground depth information with an environment map corresponding to the grassland, and obtain a flatness three-dimensional sensing curve of the grassland.
After the mower obtains a grassland ground image of the lawn by shooting with its camera, the problem of depth information loss caused by RGBD sensor failure, and its impact on the subsequent obstacle avoidance and navigation functions, is addressed in three aspects. In the first aspect, the sensing device may perform depth estimation on the grassland ground image based on multi-view geometry; specifically, the depth of each pixel point in the grassland ground image can be calculated according to the triangle similarity principle and the camera's internal parameters, such as the focal length, so as to obtain a ground depth image. In the second aspect, after the ground depth image is obtained, it is calibrated. Here the sensing device can perform time-frequency domain processing on the inertial sensing data of the mower through the grassland ground image, obtaining the camera scale of the camera from the introduced inertial sensing data so as to perform accurate depth estimation. Point cloud estimation is then performed on the ground depth image and the camera scale, converting them into dense point cloud data so as to determine the ground depth information corresponding to the grassland ground image. In the third aspect, the calibrated ground depth information is used for mapping. The sensing device can visually present objects, structures, features, and other information in the grassland environment by collecting and integrating sensor data, geographic information, and other related data, and thereby builds an environment map. Finally, the ground depth information is fused with the environment map corresponding to the grassland to obtain a complete flatness three-dimensional perception curve.
In this embodiment, depth estimation is performed on the grassland based on the grassland ground image to obtain a ground depth image, and the ground depth image is then corrected according to the camera scale to construct a flatness three-dimensional perception curve of the grassland. This avoids the navigation and obstacle-avoidance problems caused by depth failure of a traditional RGBD sensor under backlight and strong-light conditions, so that the ground sensing accuracy of the mower's grassland operation can be greatly improved based on the flatness three-dimensional perception curve.
Based on the first embodiment of the mower ground sensing device of the present invention, a second embodiment of the mower ground sensing device of the present invention is provided.
In this embodiment, the depth estimation module 801 is further configured to obtain, by using a camera of the mower, a lawn ground image of a lawn where the mower is located; performing image processing on the grassland ground image based on multi-view geometry to obtain a basic imaging model corresponding to the grassland ground image; and performing parallax processing on the basic imaging model through a triangle similarity principle to obtain a ground depth image corresponding to the grassland ground image.
Further, the depth estimation module 801 is further configured to perform parallax processing on the basic imaging model according to a triangle similarity principle, so as to obtain ground parallax data corresponding to the grassland ground image; acquiring camera parameters of the camera, wherein the camera parameters comprise a baseline parameter and a focal length parameter; and carrying out depth estimation on the ground parallax data, the baseline parameter and the focal length parameter according to a preset depth formula to obtain a ground depth image corresponding to the grassland ground image.
Further, the time-frequency domain processing module 802 is further configured to obtain inertial sensing data of an inertial sensing device of the mower, where the inertial sensing data includes gyroscope data; extracting camera sensing data corresponding to the grassland ground image; performing time-frequency domain synchronization on the gyroscope data and the camera sensing data to obtain synchronization parameters; and carrying out scale estimation on the camera according to the synchronization parameters to obtain the camera scale of the camera.
Further, the synchronization parameters include a synchronization time parameter and a synchronization angular velocity parameter, and the time-frequency domain processing module 802 is further configured to extract the gyroscope time and the gyroscope angular velocity from the gyroscope data, and extract the camera visual angular velocity time and the camera visual angular velocity from the camera sensing data; perform time domain synchronization on the gyroscope time and the camera visual angular velocity time to obtain a synchronization time parameter; and perform frequency domain synchronization on the gyroscope angular velocity and the camera visual angular velocity to obtain a synchronized angular velocity parameter.
Further, the time-frequency domain processing module 802 is further configured to transform, by Fourier transform, the gyroscope angular velocity and the camera visual angular velocity from the time domain into the frequency domain according to the synchronization time parameter; minimize the amplitude difference between the gyroscope angular velocity and the camera visual angular velocity in the frequency domain to obtain a minimized amplitude; and perform scale estimation on the synchronized angular velocity parameter according to the minimized amplitude to obtain the camera scale of the camera.
Further, the depth correction module 803 is further configured to perform point cloud processing on the ground depth image to obtain the three-dimensional space voxels corresponding to the ground depth image; perform coordinate conversion on the three-dimensional space voxels according to the camera scale to obtain image coordinate values; truncate the image coordinate values with a truncated signed distance field to obtain a distance value corresponding to each three-dimensional space voxel; and determine the ground depth information corresponding to the grassland ground image according to the distance values.
Other embodiments or specific implementation manners of the ground sensing device for a mower of the present invention may refer to the above method embodiments, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. read-only memory/random-access memory, magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A method of mower ground awareness, the method comprising:
performing depth estimation on a grass ground image of a grass where a mower is located based on multi-view geometry, and obtaining a ground depth image corresponding to the grass ground image;
performing time-frequency domain processing on inertial sensing data of the mower through the grassland ground image to obtain a camera scale of a camera of the mower;
performing point cloud estimation on the ground depth image and the camera scale to obtain ground depth information corresponding to the grassland ground image;
and fusing the ground depth information with an environment map corresponding to the grassland to obtain a flatness three-dimensional perception curve of the grassland.
2. The method for sensing the ground of a mower according to claim 1, wherein the depth estimation of the ground image of the lawn on which the mower is positioned based on the multi-view geometry, and the obtaining of the ground depth image corresponding to the ground image of the lawn, comprises:
Acquiring a lawn ground image of a lawn where the mower is located through a camera of the mower;
performing image processing on the grassland ground image based on multi-view geometry to obtain a basic imaging model corresponding to the grassland ground image;
and performing parallax processing on the basic imaging model through a triangle similarity principle to obtain a ground depth image corresponding to the grassland ground image.
3. The mower ground sensing method according to claim 2, wherein said parallax processing is performed on said basic imaging model by triangle similarity principle to obtain a ground depth image corresponding to said grassland ground image, comprising:
performing parallax processing on the basic imaging model through a triangle similarity principle to obtain ground parallax data corresponding to the grassland ground image;
acquiring camera parameters of the camera, wherein the camera parameters comprise a baseline parameter and a focal length parameter;
and carrying out depth estimation on the ground parallax data, the baseline parameter and the focal length parameter according to a preset depth formula to obtain a ground depth image corresponding to the grassland ground image.
4. The mower ground awareness method of claim 1, wherein said time-frequency domain processing of inertial sensing data of said mower by said mower ground image to obtain a camera scale of a camera of said mower, comprising:
Acquiring inertial sensing data of an inertial sensing device of the mower, the inertial sensing data comprising gyroscope data;
extracting camera sensing data corresponding to the grassland ground image;
performing time-frequency domain synchronization on the gyroscope data and the camera sensing data to obtain synchronization parameters;
and carrying out scale estimation on the camera according to the synchronization parameters to obtain the camera scale of the camera.
5. The mower ground sensing method of claim 4, wherein said synchronization parameters include a synchronization time parameter and a synchronization angular velocity parameter, said time-frequency domain synchronizing said gyroscope data and said camera sensor data to obtain synchronization parameters, comprising:
extracting the gyroscope time and the gyroscope angular velocity of the gyroscope data, and extracting the camera visual angular velocity time and the camera visual angular velocity of the camera sensing data;
performing time domain synchronization on the gyroscope time and the camera visual angular velocity time to obtain a synchronization time parameter;
and performing frequency domain synchronization on the gyroscope angular velocity and the camera visual angular velocity to obtain a synchronized angular velocity parameter.
6. The mower ground perception method of claim 5, wherein said performing a scale estimation of said camera based on said synchronization parameters to obtain a camera scale of said camera comprises:
Transforming the gyroscope reading angular velocity and the camera visual angular velocity in a time domain into a frequency domain according to the synchronization time parameter by fourier transformation;
minimizing the amplitude of the gyroscope angular velocity and the camera visual angular velocity converted into the frequency domain to obtain a minimized amplitude;
and carrying out scale estimation on the synchronous angular velocity parameter according to the minimized amplitude value to obtain the camera scale of the camera.
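Under one common reading of claim 6, minimizing the amplitude difference between the two Fourier spectra reduces to a closed-form least-squares scale factor; the sketch below shows that reading (an assumption, not the patent's exact formulation):

```python
import numpy as np

def estimate_scale(gyro_signal, cam_signal):
    """Estimate a scale factor between two synchronized signals by
    minimizing the amplitude difference of their Fourier spectra.

    Solves min_s sum_f (s*|C(f)| - |G(f)|)^2, whose closed-form
    solution is s = sum(|G|*|C|) / sum(|C|^2).
    """
    G = np.abs(np.fft.rfft(gyro_signal))
    C = np.abs(np.fft.rfft(cam_signal))
    return float(np.sum(G * C) / np.sum(C * C))
```

Because the Fourier transform is linear, a camera signal that is a scaled copy of the gyroscope signal recovers the scale exactly, even in the presence of a phase (time) offset between the two.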
7. The mower ground sensing method of any one of claims 1-6, wherein said performing point cloud estimation on said ground depth image and said camera scale to obtain ground depth information corresponding to said grassland ground image comprises:
performing point cloud processing on the ground depth image to obtain a three-dimensional space voxel corresponding to the ground depth image;
performing coordinate conversion on the three-dimensional space voxels according to the camera scale to obtain image coordinate values;
truncating the image coordinate values by a truncated signed distance field (TSDF) to obtain a distance value corresponding to each three-dimensional space voxel;
and determining the ground depth information corresponding to the grassland ground image according to the distance value.
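The signed-distance truncation step in claim 7 can be sketched as clamping each voxel's signed distance to the nearest surface within a truncation band, as in a standard TSDF; this reading of the claim is an assumption, and all names are illustrative:

```python
import numpy as np

def truncate_sdf(distances, truncation):
    """Truncated signed distance field (TSDF) values for voxel distances.

    Each signed distance (voxel to nearest ground surface, negative
    below/behind the surface) is normalized by the truncation band and
    clamped to [-1, 1], so only voxels near the surface keep gradation.
    """
    return np.clip(np.asarray(distances, dtype=float) / truncation, -1.0, 1.0)
```

Voxels far from the ground surface saturate at ±1, so the subsequent depth-information step only has to reason about the thin band of voxels around the actual terrain.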
8. A mower ground sensing device, said device comprising:
a depth estimation module, configured to perform depth estimation, based on multi-view geometry, on a grassland ground image of the grassland where the mower is located, to obtain a ground depth image corresponding to the grassland ground image;
a time-frequency domain processing module, configured to perform time-frequency domain processing on the inertial sensing data of the mower based on the grassland ground image to obtain the camera scale of the camera of the mower;
a depth correction module, configured to perform point cloud estimation on the ground depth image and the camera scale to obtain ground depth information corresponding to the grassland ground image;
and a three-dimensional surface module, configured to fuse the ground depth information with an environment map corresponding to the grassland to obtain a flatness three-dimensional sensing curved surface of the grassland.
9. A mower ground sensing apparatus, said apparatus comprising: a memory, a processor, and a mower ground sensing program stored on the memory and executable on the processor, the mower ground sensing program being configured to implement the steps of the mower ground sensing method of any one of claims 1 to 7.
10. A storage medium having stored thereon a mower ground sensing program which, when executed by a processor, implements the steps of the mower ground sensing method of any one of claims 1 to 7.
CN202410224591.4A 2024-02-29 2024-02-29 Mower ground sensing method, device, equipment and storage medium Active CN117804449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410224591.4A CN117804449B (en) 2024-02-29 2024-02-29 Mower ground sensing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117804449A true CN117804449A (en) 2024-04-02
CN117804449B CN117804449B (en) 2024-05-28

Family

ID=90430322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410224591.4A Active CN117804449B (en) 2024-02-29 2024-02-29 Mower ground sensing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117804449B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10901431B1 (en) * 2017-01-19 2021-01-26 AI Incorporated System and method for guiding heading of a mobile robotic device
CN113205549A (en) * 2021-05-07 2021-08-03 深圳市商汤科技有限公司 Depth estimation method and device, electronic equipment and storage medium
CN113284240A (en) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
US11241791B1 (en) * 2018-04-17 2022-02-08 AI Incorporated Method for tracking movement of a mobile robotic device
CN115493579A (en) * 2022-09-02 2022-12-20 松灵机器人(深圳)有限公司 Positioning correction method, positioning correction device, mowing robot and storage medium
US11548159B1 (en) * 2018-05-31 2023-01-10 AI Incorporated Modular robot
CN116630403A (en) * 2023-05-25 2023-08-22 浙江三锋实业股份有限公司 Lightweight semantic map construction method and system for mowing robot
CN117274519A (en) * 2023-10-16 2023-12-22 奥比中光科技集团股份有限公司 Map construction method and device and mowing robot

Similar Documents

Publication Publication Date Title
CN207117844U (en) More VR/AR equipment collaborations systems
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN105424006B (en) Unmanned plane hovering accuracy measurement method based on binocular vision
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
US9091755B2 (en) Three dimensional image capture system for imaging building facades using a digital camera, near-infrared camera, and laser range finder
US8737720B2 (en) System and method for detecting and analyzing features in an agricultural field
EP2435984B1 (en) Point cloud assisted photogrammetric rendering method and apparatus
WO2018227576A1 (en) Method and system for detecting ground shape, method for drone landing, and drone
KR102016636B1 (en) Calibration apparatus and method of camera and rader
CN102072706B (en) Multi-camera positioning and tracking method and system
CN113673282A (en) Target detection method and device
CN113899375B (en) Vehicle positioning method and device, storage medium and electronic equipment
CN110501036A (en) The calibration inspection method and device of sensor parameters
CN107607090B (en) Building projection correction method and device
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
CN116625354B (en) High-precision topographic map generation method and system based on multi-source mapping data
CN114111776B (en) Positioning method and related device
CN112862966A (en) Method, device and equipment for constructing three-dimensional model of earth surface and storage medium
CN117804449B (en) Mower ground sensing method, device, equipment and storage medium
CN116957360A (en) Space observation and reconstruction method and system based on unmanned aerial vehicle
CN112405526A (en) Robot positioning method and device, equipment and storage medium
KR102130687B1 (en) System for information fusion among multiple sensor platforms
Li et al. 3D mobile mapping with a low cost uav system
CN114429515A (en) Point cloud map construction method, device and equipment
CN113432595A (en) Equipment state acquisition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant