CN117496464A - Ground detection method and device for foot robot - Google Patents
- Publication number
- CN117496464A (application CN202311372254.1A)
- Authority
- CN
- China
- Prior art keywords
- ground
- image
- foot
- robot
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a ground detection method and device for a foot robot. Projecting the laser point cloud onto the ground below the robot yields a ground grid image, which provides the foot robot with a representation of ground height that can be used for ground modeling and analysis. Fusing the ground images combines the information in the laser point cloud data and the camera image, providing more comprehensive and accurate ground information and improving the foot robot's understanding of the ground environment. Recognizing the fused ground image against a preset perception data set then gives the corresponding ground category. This helps the foot robot identify different ground types, such as grass, cement, and sand, and adapt to the walking characteristics and environmental requirements of each. The ground category can be supplied to modules such as the navigation and decision systems to optimize the robot's behavior strategy and path planning.
Description
Technical Field
The invention relates to the field of environment perception for foot robots, and in particular to a ground detection method and device for a foot robot.
Background
In the motion control of a foot robot, different adjustments are required because physical characteristics of the ground, such as stiffness and friction, differ. If motion control does not account for them adequately, the foot robot may slip, sink, or overturn, or its feet may strike the ground with excessive impact force and noise.
In the prior art, the schemes by which a foot robot acquires ground physical-characteristic information fall mainly into contact-based and non-contact ground attribute detection methods.
Existing non-contact ground attribute detection schemes are generally based on visual perception: by perceiving the texture, color, and other information of the ground ahead, visual detection can judge ground attributes and give motion control predictive capability. However, current visual detection cannot obtain microscopic three-dimensional geometric information of the ground and is easily affected by lighting, shadows, and the like, making the system unstable and unable to accurately identify ground attributes and materials, which in turn affects the path planning and motion control of the foot robot.
Disclosure of Invention
The invention provides a ground detection method and device for a foot robot, to solve the problem that existing non-contact ground attribute detection cannot obtain microscopic three-dimensional geometric information of the ground and therefore cannot accurately identify ground attributes and materials.
In a first aspect, the present application provides a ground detection method for a foot robot, including:
acquiring a camera ground image and a laser point cloud image of the ground ahead of and below the foot robot, and projecting each point cloud coordinate in the laser point cloud image onto the ground below the foot robot to obtain a ground grid image;
densifying the ground grid image according to the pixel size of the camera ground image, and superimposing and fusing the densified ground grid image with the camera ground image to obtain a fused ground image;
and identifying the fused ground image through a preset perception data set to obtain a corresponding ground category.
Thus, projecting each point cloud coordinate in the laser point cloud image onto the ground below the foot robot yields a ground grid image, which provides the foot robot with a representation of ground height that can be used for ground modeling and analysis. Fusing the ground images combines the information in the laser point cloud data and the camera image, providing more comprehensive and accurate ground information and improving the foot robot's understanding of the ground environment. Recognizing the fused ground image against the preset perception data set then gives the corresponding ground category. This helps the foot robot identify different ground types, such as grass, cement, and sand, and adapt to the walking characteristics and environmental requirements of each. The ground category can be supplied to modules such as the navigation and decision systems to optimize the robot's behavior strategy and path planning.
Further, the projecting each point cloud coordinate in the laser point cloud image to the ground below the foot robot to obtain a ground grid image specifically includes:
according to the centroid height and pitch angle of the foot robot, obtaining a projection height value and a position coordinate value of each point cloud coordinate in the laser point cloud image onto the ground;
and writing the projection height value and position coordinate value of each point cloud coordinate into a blank grid image according to a preset grid cell, to obtain the ground grid image.
In this way, the corresponding ground height, ground position, and ground grid images can be obtained for different models of foot robot, providing accurate ground geometric and spatial information. This improves the foot robot's perception of complex ground environments, enables accurate navigation and decision-making, and improves the effectiveness and safety of its motion and operation.
Further, after recognizing the fused ground image through the preset perception data set to obtain the corresponding ground category, the method further includes:
inputting the ground category and the fusion ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
and adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
In this way, inputting the ground category and the fused ground image into a preset semantic segmentation network model yields a semantic segmentation image, from which the semantic information of different regions of the ground can be identified, so that the foot robot can make corresponding decisions and plans for each region. Adjusting the robot's path and gait according to the semantic segmentation image and the densified ground grid image improves the foot robot's motion performance and adaptability in different ground environments.
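One way to feed both the ground category and the fused ground image into a segmentation network is to broadcast the category as a one-hot channel plane and stack it with the image. The patent does not specify the network input format, so the channel layout, category count, and helper name below are illustrative assumptions:

```python
import numpy as np

def build_segmentation_input(fused_image: np.ndarray,
                             ground_category: int,
                             num_categories: int = 4) -> np.ndarray:
    """Stack the fused ground image (H x W x C: camera channels plus the
    densified height channel) with a one-hot ground-category map, a common
    way to condition a segmentation network on a scalar label."""
    h, w, _ = fused_image.shape
    onehot = np.zeros((h, w, num_categories), dtype=fused_image.dtype)
    onehot[:, :, ground_category] = 1.0  # constant plane for the known class
    return np.concatenate([fused_image, onehot], axis=-1)

# toy fused image: 4 x 4 pixels, RGB + height = 4 channels
fused = np.random.rand(4, 4, 4).astype(np.float32)
x = build_segmentation_input(fused, ground_category=2)
```

The resulting tensor `x` would then be passed to whatever semantic segmentation model is used; the patent leaves the model architecture open.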
Further, writing the projection height value and position coordinate value of each point cloud coordinate into a blank grid image according to the preset grid cell to obtain the ground grid image is specifically:
determining the size of the grid cell according to the ground accuracy requirement and the resolution of the laser radar;
setting the dimensions of the blank grid image according to the grid cell size and the ground detection area of the foot robot;
and filling the projection height value and position coordinate value of each point cloud coordinate into the blank grid image to obtain the ground grid image.
Thus, setting an appropriate grid cell size according to the ground accuracy requirement and the lidar resolution ensures that the ground grid image accurately represents the features and details of the ground, and provides the foot robot with ground height and position information that improves the accuracy of its detection, planning, and movement.
Further, the preset perception data set is specifically constructed by:
placing the foot robot on a plurality of different categories of ground, setting a plurality of illumination intensities and directions on each category of ground, and collecting the camera data and laser point cloud data of the foot robot under each illumination condition on each category of ground;
and building the perception data set according to the ground category and illumination intensity corresponding to each set of camera data and laser point cloud data.
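The collection procedure above enumerates every combination of ground category and illumination condition. A minimal sketch of the resulting data-set records, with field names and label values chosen for illustration (the patent does not fix a schema or units):

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class PerceptionSample:
    ground_category: str        # e.g. "grass", "cement", "sand"
    illumination_lux: float     # assumed unit for illumination intensity
    light_direction_deg: float  # assumed unit for illumination direction
    camera_frame: object = None  # placeholder for the camera data
    point_cloud: object = None   # placeholder for the lidar data

def build_perception_dataset(categories, intensities, directions):
    """Enumerate every (ground, intensity, direction) combination, as the
    collection procedure describes; the real sensor data would be captured
    by the robot at each combination and attached to the record."""
    return [PerceptionSample(c, i, d)
            for c, i, d in product(categories, intensities, directions)]

dataset = build_perception_dataset(
    ["grass", "cement", "sand"], [200.0, 1000.0], [0.0, 90.0, 180.0])
```

With three categories, two intensities, and three directions this yields 18 labeled records to which the captured camera and point-cloud data are attached.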
Further, the acquiring of the camera ground image and the laser point cloud image of the front ground below the foot robot specifically comprises:
arranging the camera and the laser radar at the bottom of the body of the foot robot, and determining the separation at which their fields of view do not interfere as the mounting distance between the camera and the laser radar;
wherein, the center line of the view field of the camera and the laser radar forms an included angle between 45 degrees and 90 degrees with the ground;
and projecting the fields of view of the camera and the laser radar to the ground to form an intersection area, and determining a ground attention window for ground detection according to the intersection area.
Thus, determining the distance between the camera and the laser radar and the angle between their field-of-view center lines and the ground prevents their fields of view from interfering, avoiding data conflicts or inaccuracy caused by overlapping or mutually occluded fields of view. Projecting both fields of view onto the ground and taking their intersection area gives the ground attention window for ground detection, which delimits the effective area the foot robot perceives on the ground and reduces unnecessary data processing and computation.
Further, a ground attention window for ground detection is determined according to the intersection area and the device information of the foot robot, specifically:
and determining the length of the ground attention window according to the motion direction of the foot robot and the highest moving speed of the foot robot, and determining the width of the ground attention window according to the width of the body of the foot robot.
Determining the length and width of the ground attention window in this way focuses perception on the ground area ahead, reduces computation, accurately bounds the robot's range of motion, and optimizes path planning and obstacle avoidance strategies, improving the perception and motion performance of the foot robot.
In a second aspect, the present application provides a ground detection device for a foot robot, comprising: a ground image module, a fusion image module, and a category identification module;
the ground image module is used for acquiring a camera ground image and a laser point cloud image of the front ground below the foot-type robot, and projecting each point cloud coordinate in the laser point cloud image to the ground below the foot-type robot to acquire a ground grid image;
the fusion image module is used for densifying the ground grid image according to the pixel size of the camera ground image, and superposing and fusing the densified ground grid image and the camera ground image to obtain a fusion ground image;
the category identification module is used for identifying the fused ground image through a preset perception data set to obtain a corresponding ground category.
Further, the ground image module includes: a projection unit and a ground grid image acquisition unit;
the projection unit is used for obtaining a projection height value and a position coordinate value of each point cloud coordinate in the laser point cloud image to the ground according to the centroid height and the pitching angle of the foot-type robot;
the ground grid image acquisition unit is used for writing the projection height value and position coordinate value of each point cloud coordinate into a blank grid image according to the preset grid cell, to obtain the ground grid image.
Further, the category identification module further includes: the semantic segmentation module and the motion adjustment module;
the semantic segmentation module is used for inputting the ground category and the fusion ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
the motion adjustment module is used for adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
Further, the ground grid image acquisition unit includes: a grid setting unit, a dimension determining unit, and a coordinate filling unit;
the grid setting unit is used for determining the dimension of the grid unit according to the ground precision requirement and the resolution of the laser radar;
the dimension determining unit is used for setting the dimension of the blank grid image according to the dimension size of the grid unit and the detection area of the ground of the foot-type robot;
the coordinate filling unit is used for filling the projection height value and the position coordinate value of each point cloud coordinate to the ground into the blank grid image to obtain the ground grid image.
Further, the category identification module includes: an acquisition unit and a perception data setting unit;
the acquisition unit is used for placing the foot robot on a plurality of different categories of ground, setting a plurality of illumination intensities and directions on each category of ground, and collecting the camera data and laser point cloud data of the foot robot under each illumination condition on each category of ground;
the perception data setting unit is used for building the perception data set according to the ground category and illumination intensity corresponding to each set of camera data and laser point cloud data.
Further, the ground image module includes: an acquisition-and-installation unit and a ground attention unit;
the acquisition and installation unit is used for arranging the camera and the laser radar at the bottom of the robot body of the foot-type robot and determining that the distance between the camera and the laser radar when the fields of view of the camera and the laser radar do not interfere is the distance between the camera and the laser radar;
wherein, the center line of the view field of the camera and the laser radar forms an included angle between 45 degrees and 90 degrees with the ground;
the ground attention unit is used for projecting the fields of view of the camera and the laser radar onto the ground to form an intersection area, and determining the ground attention window for ground detection according to the intersection area.
Further, the ground attention unit includes: a size determining unit;
the size determining unit is used for determining the length of the ground attention window according to the motion direction of the foot robot and the highest moving speed of the foot robot and determining the width of the ground attention window according to the width of the body of the foot robot.
Thus, projecting each point cloud coordinate in the laser point cloud image onto the ground below the foot robot yields a ground grid image, which provides the foot robot with a representation of ground height that can be used for ground modeling and analysis. Fusing the ground images combines the information in the laser point cloud data and the camera image, providing more comprehensive and accurate ground information and improving the foot robot's understanding of the ground environment. Recognizing the fused ground image against the preset perception data set then gives the corresponding ground category. This helps the foot robot identify different ground types, such as grass, cement, and sand, and adapt to the walking characteristics and environmental requirements of each. The ground category can be supplied to modules such as the navigation and decision systems to optimize the robot's behavior strategy and path planning.
Drawings
Fig. 1: a schematic flow chart of an embodiment of a ground detection method of a foot robot is provided by the invention;
fig. 2: the invention provides a module structure diagram of one embodiment of a foot-type robot ground detection device.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
Example 1
Referring to fig. 1, an embodiment of the present invention provides a ground detection method for a foot robot, including steps S1 to S3, as follows:
step S1: acquiring a camera ground image and a laser point cloud image of the front ground below the foot-type robot, and projecting each point cloud coordinate in the laser point cloud image to the ground below the foot-type robot to acquire a ground grid image;
further, the acquiring of the camera ground image and the laser point cloud image of the front ground below the foot robot specifically comprises:
the camera and the laser radar are arranged at the bottom of the body of the foot robot, and the separation at which their fields of view do not interfere is determined as the mounting distance between the camera and the laser radar;
in a specific embodiment, the distance when it is determined that the fields of view of the camera and the lidar do not interfere is the distance between the camera and the lidar, specifically: on the premise of ensuring that field interference does not occur, the installation distance of the camera and the laser radar is as close as possible to the length direction of the lower bottom of the foot-type robot.
Wherein, the center line of the view field of the camera and the laser radar forms an included angle between 45 degrees and 90 degrees with the ground;
in one embodiment, the camera and lidar have a field of view centerline at 60 degrees from the ground.
And projecting the fields of view of the camera and the laser radar to the ground to form an intersection area, and determining a ground attention window for ground detection according to the intersection area.
Thus, determining the distance between the camera and the laser radar and the angle between their field-of-view center lines and the ground prevents their fields of view from interfering, avoiding data conflicts or inaccuracy caused by overlapping or mutually occluded fields of view. Projecting both fields of view onto the ground and taking their intersection area gives the ground attention window for ground detection, which delimits the effective area the foot robot perceives on the ground and reduces unnecessary data processing and computation.
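The ground footprint of each sensor, and the intersection that defines the attention window, can be sketched with a simplified 2-D geometry along the direction of travel. The mounting height, tilt, and field-of-view angles below are assumed values for illustration (roll and horizontal extent are ignored):

```python
import math

def ground_footprint(height_m: float, tilt_deg: float, vfov_deg: float):
    """Ground-distance interval covered by a sensor mounted height_m above
    the ground with its optical axis tilted tilt_deg above the ground plane.
    Simplified 2-D model along the direction of travel."""
    near_angle = tilt_deg + vfov_deg / 2.0  # steepest ray, closest to vertical
    far_angle = tilt_deg - vfov_deg / 2.0   # shallowest ray, farthest out
    near = height_m / math.tan(math.radians(min(near_angle, 90.0)))
    far = height_m / math.tan(math.radians(far_angle))
    return near, far

def intersect(a, b):
    """Intersection of two ground intervals, or None if they do not overlap."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

cam = ground_footprint(0.7, 60.0, 60.0)    # camera: 60° tilt, 60° vertical FOV
lidar = ground_footprint(0.7, 60.0, 50.0)  # lidar: narrower vertical FOV
window = intersect(cam, lidar)             # 1-D extent of the attention window
```

With these assumed numbers, the camera covers the ground out to about 1.21 m ahead and the intersection with the lidar footprint spans roughly 0.06 m to 1.0 m; the actual window in the patent is further constrained by speed and body width, as described below.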
In a specific embodiment, a camera ground image and a laser point cloud image of the front ground below the foot robot are obtained, specifically:
according to the visual images of the ground collected by the camera, downsampling the visual images, and cutting the downsampled visual images according to the ground attention window to obtain the ground images of the camera;
further, a ground attention window for ground detection is determined according to the intersection area and the device information of the foot robot, specifically:
and determining the length of the ground attention window according to the motion direction of the foot robot and the highest moving speed of the foot robot, and determining the width of the ground attention window according to the width of the body of the foot robot.
In a specific embodiment, the length of the ground attention window is the maximum moving speed of the foot robot multiplied by the look-ahead time;
the look-ahead time is determined according to the required path-planning lead and the angular resolution of the laser radar.
If the look-ahead time is too short, the planning lead is insufficient; if it is too long, the limited angular resolution makes the far-end laser point cloud project into a sparse ground grid, so fine ground features cannot be obtained.
An appropriate look-ahead time is therefore determined from the required path-planning lead and the lidar angular resolution, giving the maximum planning lead while still ensuring adequate angular resolution over the field of view.
In a specific embodiment, the width of the ground attention window is determined according to the body width of the foot robot, specifically: the body width multiplied by an amplification factor.
In one embodiment, the vertical field angle of the camera is 100 degrees and that of the laser radar is 90 degrees. The laser radar has a vertical angular resolution of 0.4 degrees and a horizontal angular resolution of 2 degrees; the foot robot body is about 0.7 m high and 0.5 m wide, with a maximum moving speed of 2 m/s. With a look-ahead time of 1.5 s and an amplification factor of 3, the length and width of the ground attention window are 3 m and 1.5 m respectively, and the maximum grid size of the far-end laser point cloud projected onto the ground is about 0.1 m x 0.1 m.
Determining the length and width of the ground attention window in this way focuses perception on the ground area ahead, reduces computation, accurately bounds the robot's range of motion, and optimizes path planning and obstacle avoidance strategies, improving the perception and motion performance of the foot robot.
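The window-sizing rule and the far-end sparsity check can be reproduced numerically with the embodiment's figures (2 m/s, 1.5 s look-ahead, 0.5 m body width, factor 3, 2° horizontal resolution, 0.7 m body height); the function names are illustrative:

```python
import math

def attention_window(v_max_mps, lookahead_s, body_width_m, amplification):
    """Ground attention window size per the rule in the text:
    length = max speed x look-ahead time, width = body width x factor."""
    return v_max_mps * lookahead_s, body_width_m * amplification

def far_end_grid_spacing(window_length_m, mount_height_m, h_res_deg):
    """Approximate lateral spacing between adjacent lidar returns at the far
    edge of the window: range x angular resolution (small-angle approximation)."""
    rng = math.hypot(window_length_m, mount_height_m)  # slant range to far edge
    return rng * math.radians(h_res_deg)

length, width = attention_window(2.0, 1.5, 0.5, 3)
spacing = far_end_grid_spacing(length, 0.7, 2.0)
```

This reproduces the embodiment's 3 m by 1.5 m window, and the far-end return spacing comes out near 0.1 m, matching the quoted maximum grid size.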
Further, the projecting each point cloud coordinate in the laser point cloud image to the ground below the foot robot to obtain a ground grid image specifically includes:
according to the centroid height and pitch angle of the foot robot, obtaining a projection height value and a position coordinate value of each point cloud coordinate in the laser point cloud image onto the ground;
in a specific embodiment, the body of the foot robot is arranged parallel to the ground, and the height of the mass center and the pitch angle of the foot robot are determined as references to obtain a ground reference.
In a specific embodiment, obtaining the projection height value and position coordinate value of each point cloud coordinate in the laser point cloud image onto the ground according to the centroid height and pitch angle of the foot robot is specifically:
obtaining the laser point cloud image from the laser radar, downsampling it, cropping the downsampled point cloud according to the ground attention window, and computing the projection height value and position coordinate value of each point cloud coordinate in the cropped point cloud onto the ground;
and writing the projection height value and position coordinate value of each point cloud coordinate into a blank grid image according to the preset grid cell, to obtain the ground grid image.
In this way, the corresponding ground height, ground position, and ground grid images can be obtained for different models of foot robot, providing accurate ground geometric and spatial information. This improves the foot robot's perception of complex ground environments, enables accurate navigation and decision-making, and improves the effectiveness and safety of its motion and operation.
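The projection step can be sketched as follows: points in the body frame are leveled against the pitch angle and offset by the centroid height to obtain heights above the ground plane. This is a simplification assuming negligible roll; the frame conventions and function name are illustrative, not from the patent:

```python
import numpy as np

def project_to_ground(points_body, centroid_height, pitch_rad):
    """Project lidar points (N x 3, body frame: x forward, z up) onto the
    ground plane using the body centroid height and pitch angle as the
    ground reference. Returns (positions N x 2, heights N)."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    # rotate about the y axis to level the body frame against the pitch
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    leveled = points_body @ rot.T
    heights = leveled[:, 2] + centroid_height  # height above the ground plane
    positions = leveled[:, :2]                 # ground-plane (x, y) coordinates
    return positions, heights

# a point 1 m ahead at ground level, robot level, centroid 0.7 m up
pos, h = project_to_ground(np.array([[1.0, 0.0, -0.7]]), 0.7, 0.0)
```

For a level robot, a point 0.7 m below the centroid projects to height zero, i.e. it lies on the ground datum.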
Further, writing the projection height value and position coordinate value on the ground of each point cloud coordinate into a blank grid image according to the preset grid cell to obtain the ground grid image is specifically:
determining the size of the grid cell according to the required ground accuracy and the resolution of the laser radar;
setting the dimensions of the blank grid image according to the grid cell size and the ground detection area of the foot robot;
and filling the projection height value and position coordinate value on the ground of each point cloud coordinate into the blank grid image to obtain the ground grid image.
Thus, choosing an appropriate grid cell size based on the required ground accuracy and the resolution of the laser radar ensures that the ground grid image accurately represents the features and details of the ground, providing the foot robot with ground height and position information and improving its accuracy in detection, planning, and movement.
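The grid-filling step can be sketched as follows; the NaN-for-empty convention and the last-point-wins policy for cells hit by several points are illustrative choices, not prescribed by the disclosure:

```python
import numpy as np

def fill_ground_grid(positions, heights, cell_size, extent):
    """Write projected heights into a blank grid image.

    extent is (x_min, x_max, y_min, y_max) of the detection area in
    metres; cells that receive no point remain NaN.
    """
    x_min, x_max, y_min, y_max = extent
    rows = int(np.ceil((x_max - x_min) / cell_size))
    cols = int(np.ceil((y_max - y_min) / cell_size))
    grid = np.full((rows, cols), np.nan)      # the blank grid image
    for (x, y), h in zip(positions, heights):
        i = int((x - x_min) / cell_size)
        j = int((y - y_min) / cell_size)
        if 0 <= i < rows and 0 <= j < cols:
            grid[i, j] = h                    # last point wins per cell
    return grid
```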
Step S2: densifying the ground grid image to the pixel size of the camera ground image, and superimposing and fusing the densified ground grid image with the camera ground image to obtain a fused ground image;
in a specific embodiment, superimposing and fusing the densified ground grid image with the camera ground image to obtain a fused ground image specifically includes:
superimposing the densified ground grid image and the camera ground image with a one-to-one correspondence of pixel matrix indices to obtain the fused ground image.
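A minimal sketch of the densify-and-fuse step, assuming the grid and the camera image cover the same ground window and that nearest-neighbour repetition is an acceptable densification (the disclosure does not fix the interpolation scheme):

```python
import numpy as np

def fuse_images(ground_grid, camera_rgb):
    """Densify a coarse height grid to the camera pixel size and fuse.

    The grid is upsampled by nearest-neighbour repetition so its pixel
    matrix matches the camera image, then stacked as an extra channel,
    giving a one-to-one correspondence of pixel matrix indices.
    """
    h, w = camera_rgb.shape[:2]
    ry = h // ground_grid.shape[0]
    rx = w // ground_grid.shape[1]
    dense = np.repeat(np.repeat(ground_grid, ry, axis=0), rx, axis=1)[:h, :w]
    return np.dstack([camera_rgb, dense])     # RGB plus height channel
```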
Step S3: identifying the fused ground image through a preset perception data set to obtain a corresponding ground category.
Further, the preset perception data set is built as follows:
placing the foot robot on a plurality of different categories of ground, setting several illumination intensities and directions on each category of ground, and collecting camera data and laser point cloud data from the foot robot under each illumination condition on each category of ground;
and constructing the perception data set from the ground category and illumination intensity corresponding to each pair of camera data and laser point cloud data.
In a specific embodiment, placing the foot robot on a plurality of different categories of ground and setting several illumination intensities and directions on each category of ground is specifically:
selecting representative grounds as collection targets according to their texture, roughness, reflection characteristics, rigidity, friction characteristics, and the like;
in this embodiment, smooth tile floors, cement roads, grassland, gravel ground, and clay roads were chosen as representative grounds.
This embodiment also considers that the same ground type may appear in different forms, such as smooth tile (without texture, or with regular or irregular texture), cement road (flat road, washboard road, potholes, tree-shadow projection), and clay road (flat and uneven), etc.
In a specific embodiment, the foot robot is placed on each of the selected grounds under three levels of illumination intensity: weak, normal, and strong; meanwhile, ground data for the same scene are collected by the camera and laser radar mounted on the foot robot.
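The collection procedure amounts to enumerating every (ground category, illumination level) pair and storing the matching camera and point cloud captures. A hypothetical record schema is sketched below; the field names and file naming are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class PerceptionSample:
    """One record of the perception data set (illustrative schema)."""
    ground_class: str     # e.g. "tile", "cement", "grass", "gravel", "clay"
    illumination: str     # "weak", "normal", or "strong"
    camera_path: str      # captured camera frame
    pointcloud_path: str  # captured laser point cloud

def build_dataset(classes, levels=("weak", "normal", "strong")):
    """Enumerate every (class, illumination) pair, as in the collection step."""
    return [PerceptionSample(c, l, f"{c}_{l}.png", f"{c}_{l}.pcd")
            for c in classes for l in levels]
```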
Further, after identifying the fused ground image through the preset perception data set to obtain the corresponding ground category, the method further includes:
inputting the ground category and the fused ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
and adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
In this way, inputting the ground category and the fused ground image into a preset semantic segmentation network model yields a semantic segmentation image, from which the semantic information of different regions of the ground can be identified, so that the foot robot can make corresponding decisions and plans. Adjusting the robot's path and gait according to the semantic segmentation image and the densified ground grid image improves the foot robot's locomotion performance and adaptability across different ground environments.
In a specific embodiment, adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image specifically includes:
transforming, in the world coordinate system of the foot robot, the pixel coordinates of the densified ground grid image into coordinates on the ground where the foot robot is located, to obtain an elevation map layer;
transforming, in the world coordinate system of the foot robot, the pixel coordinates of the semantic segmentation image into coordinates on the ground where the foot robot is located, to obtain a semantic map layer;
aligning and superimposing the coordinates of the elevation map layer and the semantic map layer to obtain an elevation-semantic map;
and performing real-time path planning and motion control of the foot robot according to the elevation-semantic map.
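The layer-building steps can be sketched as a shared world-frame grid carrying both an elevation layer and a semantic layer; the origin and cell-size parameters below are assumptions standing in for the full pixel-to-world transform:

```python
import numpy as np

def build_elevation_semantic_map(elevation_px, semantic_px, origin, cell_size):
    """Stack aligned elevation and semantic layers on one world grid.

    origin is the world (x, y) of pixel (0, 0); cell_size is metres per
    pixel. Both input images must share the same pixel grid so that a
    single coordinate transform aligns them.
    """
    rows, cols = elevation_px.shape
    xs = origin[0] + np.arange(rows) * cell_size   # world x of each row
    ys = origin[1] + np.arange(cols) * cell_size   # world y of each column
    layers = np.stack([elevation_px, semantic_px]) # aligned two-layer map
    return xs, ys, layers
```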
In a specific embodiment, the real-time path planning and motion control of the foot robot according to the elevation-semantic map is specifically:
the elevation information provided by the elevation-semantic map is used as input to the path planning and motion control of the foot robot;
whether the foot robot can pass is judged according to the elevation in the map;
if the ground ahead is determined to be too high or too low to traverse, it is bypassed through path planning;
if the ground ahead is determined to be traversable, a suitable gait is selected according to its height;
the semantic information provided by the elevation-semantic map is likewise used as input to motion control, and different gaits and speeds are selected according to the ground semantics.
Thus, ground that is difficult to traverse can be bypassed through path planning.
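The passability logic described here reduces to a height threshold followed by a semantic lookup; the threshold value and terrain classes below are illustrative assumptions, not figures from the disclosure:

```python
def plan_step(height_m, ground_semantic, max_step_m=0.15):
    """Toy decision rule over the elevation-semantic map.

    Detour when the ground ahead is too high or too low to traverse;
    otherwise pick a gait by terrain class (assumed classes/threshold).
    """
    if abs(height_m) > max_step_m:
        return "detour"                       # bypass via path planning
    if ground_semantic in {"sand", "gravel", "grass"}:
        return "slow_gait"                    # soft or loose ground
    return "normal_gait"                      # firm ground, e.g. cement
```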
By projecting each point cloud coordinate in the laser point cloud image onto the ground below the foot robot, a ground grid image is obtained, giving the foot robot a representation of ground height that can be used for ground modeling and analysis. Fusing this with the camera image combines the information of the laser point cloud data and the camera data, providing more comprehensive and accurate ground information and improving the foot robot's understanding of the ground environment. Identifying the fused ground image through the preset perception data set yields the corresponding ground category, which helps the foot robot recognize different types of ground, such as grass, cement, and sand, and adapt to their walking characteristics and environmental requirements. The ground category can be supplied to modules such as the navigation and decision systems to optimize the robot's behavior strategy and path planning.
Embodiment Two
Referring to fig. 2, a block diagram of an embodiment of a ground detection device for a foot robot according to the present invention is shown.
A foot robot ground detection device comprising: a ground image module 210, a fused image module 220, and a category identification module 230;
the ground image module 210 is configured to obtain a camera ground image and a laser point cloud image of the front ground below the foot robot, and to project each point cloud coordinate in the laser point cloud image onto the ground below the foot robot to obtain a ground grid image;
the fused image module 220 is configured to densify the ground grid image to the pixel size of the camera ground image, and to superimpose and fuse the densified ground grid image with the camera ground image to obtain a fused ground image;
the class identification module 230 is configured to identify the fused ground image through a preset perception data set to obtain a corresponding ground class.
Further, the ground image module 210 includes: a projection unit 211 and a ground grid image acquisition unit 212;
the projection unit 211 is configured to obtain, according to the centroid height and pitch angle of the foot robot, a projection height value and a position coordinate value on the ground for each point cloud coordinate in the laser point cloud image;
the ground grid image acquisition unit 212 is configured to write, according to a preset grid cell, the projection height value and position coordinate value on the ground of each point cloud coordinate into a blank grid image to obtain the ground grid image.
Further, the device further includes, after the category identification module 230: a semantic segmentation module 240 and a motion adjustment module 250;
the semantic segmentation module 240 is configured to input the ground category and the fused ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
the motion adjustment module 250 is configured to adjust the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
Further, the ground grid image acquisition unit 212 includes: a grid setting unit 2121, a dimension determining unit 2122, and a coordinate filling unit 2123;
the grid setting unit 2121 is configured to determine the size of the grid cell according to the required ground accuracy and the resolution of the laser radar;
the dimension determining unit 2122 is configured to set the dimensions of the blank grid image according to the grid cell size and the ground detection area of the foot robot;
the coordinate filling unit 2123 is configured to fill the projection height value and position coordinate value on the ground of each point cloud coordinate into the blank grid image to obtain the ground grid image.
Further, the category identification module 230 includes: a collecting unit 231 and a perception data setting unit 232;
the collecting unit 231 is configured to place the foot robot on a plurality of different categories of ground, set several illumination intensities and directions on each category of ground, and collect camera data and laser point cloud data from the foot robot under each illumination condition on each category of ground;
the perception data setting unit 232 is configured to construct the perception data set from the ground category and illumination intensity corresponding to each pair of camera data and laser point cloud data.
Further, the ground image module 210 includes: a collecting and mounting unit 213 and a ground attention unit 214;
the collecting and mounting unit 213 is configured to arrange the camera and the laser radar at the bottom of the body of the foot robot, and to set the spacing between the camera and the laser radar to a distance at which their fields of view do not interfere;
wherein the center lines of the fields of view of the camera and the laser radar each form an angle of between 45 and 90 degrees with the ground;
the ground attention unit 214 is configured to project the fields of view of the camera and the laser radar onto the ground to form an intersection area, and to determine a ground attention window for ground detection according to the intersection area.
Further, the ground attention unit 214 includes: a size determining unit 2141;
the size determining unit 2141 is configured to determine the length of the ground attention window according to the movement direction and maximum moving speed of the foot robot, and the width of the ground attention window according to the body width of the foot robot.
Thus, as with the method embodiment, the device obtains a ground grid image by projecting the laser point cloud onto the ground, fuses it with the camera image for more comprehensive and accurate ground information, and identifies the ground category through the preset perception data set, helping the foot robot recognize and adapt to different grounds such as grass, cement, and sand, and supplying the ground category to modules such as the navigation and decision systems to optimize its behavior strategy and path planning.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present invention, and are not to be construed as limiting the scope of the invention. It should be noted that any modifications, equivalent substitutions, improvements, etc. made by those skilled in the art without departing from the spirit and principles of the present invention are intended to be included in the scope of the present invention.
Claims (14)
1. A foot robot ground detection method, comprising:
acquiring a camera ground image and a laser point cloud image of the front ground below the foot-type robot, and projecting each point cloud coordinate in the laser point cloud image to the ground below the foot-type robot to acquire a ground grid image;
densifying the ground grid image according to the pixel size of the camera ground image, and superimposing and fusing the densified ground grid image with the camera ground image to obtain a fused ground image;
and identifying the fused ground image through a preset perception data set to obtain a corresponding ground category.
2. The foot robot ground detection method according to claim 1, wherein projecting each point cloud coordinate in the laser point cloud image onto the ground below the foot robot to obtain the ground grid image is specifically:
obtaining, according to the centroid height and pitch angle of the foot robot, a projection height value and a position coordinate value on the ground for each point cloud coordinate in the laser point cloud image;
and writing, according to a preset grid cell, the projection height value and position coordinate value on the ground of each point cloud coordinate into a blank grid image to obtain the ground grid image.
3. The foot robot ground detection method according to claim 1, further comprising, after identifying the fused ground image through the preset perception data set to obtain the corresponding ground category:
inputting the ground category and the fused ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
and adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
4. The foot robot ground detection method according to claim 2, wherein writing, according to the preset grid cell, the projection height value and position coordinate value on the ground of each point cloud coordinate into a blank grid image to obtain the ground grid image is specifically:
determining the size of the grid cell according to the required ground accuracy and the resolution of the laser radar;
setting the dimensions of the blank grid image according to the grid cell size and the ground detection area of the foot robot;
and filling the projection height value and position coordinate value on the ground of each point cloud coordinate into the blank grid image to obtain the ground grid image.
5. The foot robot ground detection method according to claim 1, wherein the preset perception data set is built as follows:
placing the foot robot on a plurality of different categories of ground, setting several illumination intensities and directions on each category of ground, and collecting camera data and laser point cloud data from the foot robot under each illumination condition on each category of ground;
and constructing the perception data set from the ground category and illumination intensity corresponding to each pair of camera data and laser point cloud data.
6. The foot robot ground detection method according to claim 1, wherein acquiring the camera ground image and the laser point cloud image of the front ground below the foot robot specifically includes:
arranging the camera and the laser radar at the bottom of the body of the foot robot, and setting the spacing between the camera and the laser radar to a distance at which their fields of view do not interfere;
wherein the center lines of the fields of view of the camera and the laser radar each form an angle of between 45 and 90 degrees with the ground;
and projecting the fields of view of the camera and the laser radar onto the ground to form an intersection area, and determining a ground attention window for ground detection according to the intersection area.
7. The foot robot ground detection method according to claim 6, wherein the ground attention window for ground detection is determined from the intersection area and the device information of the foot robot, specifically:
determining the length of the ground attention window according to the movement direction and maximum moving speed of the foot robot, and the width of the ground attention window according to the body width of the foot robot.
8. A foot robot ground detection device, comprising: a ground image module, a fusion image module, and a category identification module;
the ground image module is used for acquiring a camera ground image and a laser point cloud image of the front ground below the foot-type robot, and projecting each point cloud coordinate in the laser point cloud image to the ground below the foot-type robot to acquire a ground grid image;
the fusion image module is used for densifying the ground grid image to the pixel size of the camera ground image, and superimposing and fusing the densified ground grid image with the camera ground image to obtain a fused ground image;
the category identification module is used for identifying the fused ground image through a preset perception data set to obtain a corresponding ground category.
9. The foot robot ground detection device of claim 8, wherein the ground image module comprises: a projection unit and a ground grid image acquisition unit;
the projection unit is used for obtaining, according to the centroid height and pitch angle of the foot robot, a projection height value and a position coordinate value on the ground for each point cloud coordinate in the laser point cloud image;
the ground grid image acquisition unit is used for writing, according to a preset grid cell, the projection height value and position coordinate value on the ground of each point cloud coordinate into a blank grid image to obtain the ground grid image.
10. The foot robot ground detection device of claim 8, further comprising, after the category identification module: a semantic segmentation module and a motion adjustment module;
the semantic segmentation module is used for inputting the ground category and the fused ground image into a preset semantic segmentation network model to obtain a semantic segmentation image;
the motion adjustment module is used for adjusting the running path and gait of the foot robot according to the semantic segmentation image and the densified ground grid image.
11. The foot robot ground detection device of claim 9, wherein the ground grid image acquisition unit comprises: a grid setting unit, a dimension determining unit, and a coordinate filling unit;
the grid setting unit is used for determining the size of the grid cell according to the required ground accuracy and the resolution of the laser radar;
the dimension determining unit is used for setting the dimensions of the blank grid image according to the grid cell size and the ground detection area of the foot robot;
the coordinate filling unit is used for filling the projection height value and position coordinate value on the ground of each point cloud coordinate into the blank grid image to obtain the ground grid image.
12. The foot robot ground detection device of claim 8, wherein the category identification module comprises: an acquisition unit and a perception data setting unit;
the acquisition unit is used for placing the foot robot on a plurality of different categories of ground, setting several illumination intensities and directions on each category of ground, and collecting camera data and laser point cloud data from the foot robot under each illumination condition on each category of ground;
the perception data setting unit is used for constructing the perception data set from the ground category and illumination intensity corresponding to each pair of camera data and laser point cloud data.
13. The foot robot ground detection device of claim 8, wherein the ground image module comprises: a collecting and mounting unit and a ground attention unit;
the collecting and mounting unit is used for arranging the camera and the laser radar at the bottom of the body of the foot robot, and for setting the spacing between the camera and the laser radar to a distance at which their fields of view do not interfere;
wherein the center lines of the fields of view of the camera and the laser radar each form an angle of between 45 and 90 degrees with the ground;
the ground attention unit is used for projecting the fields of view of the camera and the laser radar onto the ground to form an intersection area, and for determining a ground attention window for ground detection according to the intersection area.
14. The foot robot ground detection device of claim 13, wherein the ground attention unit comprises: a size determining unit;
the size determining unit is used for determining the length of the ground attention window according to the movement direction and maximum moving speed of the foot robot, and the width of the ground attention window according to the body width of the foot robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311372254.1A CN117496464B (en) | 2023-10-23 | 2023-10-23 | Ground detection method and device for foot robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117496464A true CN117496464A (en) | 2024-02-02 |
CN117496464B CN117496464B (en) | 2024-05-24 |
Family
ID=89668115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311372254.1A Active CN117496464B (en) | 2023-10-23 | 2023-10-23 | Ground detection method and device for foot robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117496464B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583337A (en) * | 2020-04-25 | 2020-08-25 | 华南理工大学 | Omnibearing obstacle detection method based on multi-sensor fusion |
CN111915662A (en) * | 2019-05-07 | 2020-11-10 | 北京京东尚科信息技术有限公司 | Three-dimensional laser point cloud data preprocessing method and device |
CN113390411A (en) * | 2021-06-10 | 2021-09-14 | 中国北方车辆研究所 | Foot type robot navigation and positioning method based on variable configuration sensing device |
CN113627353A (en) * | 2021-08-12 | 2021-11-09 | 成都航维智芯科技有限公司 | Method for classifying ground points in point cloud data |
WO2022022694A1 (en) * | 2020-07-31 | 2022-02-03 | 北京智行者科技有限公司 | Method and system for sensing automated driving environment |
CN114842438A (en) * | 2022-05-26 | 2022-08-02 | 重庆长安汽车股份有限公司 | Terrain detection method, system and readable storage medium for autonomous driving vehicle |
CN116540206A (en) * | 2023-05-17 | 2023-08-04 | 成都理工大学 | Foot-type robot elevation estimation method, device and system |
CN116662930A (en) * | 2023-06-02 | 2023-08-29 | 北京数字绿土科技股份有限公司 | Road identification generation method and system based on ground mobile laser radar |
Non-Patent Citations (3)
Title |
---|
VINICIO ROSAS-CERVANTES ET AL.: "Mobile robot 3D trajectory estimation on a multilevel surface with multimodal fusion of 2D camera features and a 3D light detection and ranging point cloud", 《INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS》, 31 December 2022 (2022-12-31), pages 1 - 11 * |
TAN ZHIGUO ET AL.: "Lidar target recognition based on point cloud-model matching", Computer Engineering & Science, vol. 34, no. 4, 31 December 2012 (2012-12-31), pages 32 - 36 *
CHEN YUANXIANG ET AL.: "A LiDAR point cloud compression method with non-uniform sparse sampling", Journal of Fuzhou University (Natural Science Edition), vol. 49, no. 3, 30 June 2021 (2021-06-30), pages 329 - 335 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||