CN114219770A - Ground detection method, ground detection device, electronic equipment and storage medium - Google Patents

Ground detection method, ground detection device, electronic equipment and storage medium

Info

Publication number
CN114219770A
CN114219770A (application number CN202111424074.4A)
Authority
CN
China
Prior art keywords
image
gray
ground
target
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111424074.4A
Other languages
Chinese (zh)
Inventor
宋西来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202111424074.4A
Publication of CN114219770A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the field of computer technology and provides a ground detection method, a ground detection device, an electronic device and a storage medium. The ground detection method comprises the following steps: acquiring first point cloud data corresponding to a preset detection area; normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image; filtering the first grayscale image to obtain a second grayscale image; determining second point cloud data according to the second grayscale image; determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm; and locating the passable ground area according to the target ground image area and the second point cloud data. The embodiments of the present application can realize ground detection efficiently and accurately.

Description

Ground detection method, ground detection device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a ground detection method and apparatus, an electronic device, and a storage medium.
Background
With the development of robot technology, robots can move autonomously by means of computer vision and positioning and navigation technologies. To move autonomously, a robot needs to perceive its surroundings and detect the passable ground area on its own, so as to ensure the safety of its motion. Ground detection is therefore a key link in realizing the autonomous movement of a robot. However, current ground detection methods suffer from high hardware cost and low accuracy.
Disclosure of Invention
In view of this, embodiments of the present application provide a ground detection method, an apparatus, an electronic device, and a storage medium, so as to solve the problem of how to accurately implement ground detection in the prior art.
A first aspect of an embodiment of the present application provides a ground detection method, including:
acquiring first point cloud data corresponding to a preset detection area;
normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image;
filtering the first grayscale image to obtain a second grayscale image;
determining second point cloud data according to the second grayscale image;
determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm;
and locating the passable ground area according to the target ground image area and the second point cloud data.
Optionally, the acquiring first point cloud data corresponding to the preset detection area includes:
acquiring a depth image corresponding to the preset detection area;
determining point cloud data in a camera coordinate system according to the depth image;
and determining point cloud data in a robot coordinate system as the first point cloud data according to the point cloud data in the camera coordinate system and a preset coordinate conversion relation.
Optionally, the normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image includes:
converting the height information of the first point cloud data to a preset height interval to obtain target height data;
and normalizing the target height data to a preset grayscale interval to obtain the first grayscale image.
Optionally, the filtering the first grayscale image to obtain a second grayscale image includes:
performing median filtering processing and dimension-reduction cropping processing on the first grayscale image to obtain the second grayscale image.
Optionally, the determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm includes:
performing gradient calculation on the second grayscale image, and determining a target gradient image corresponding to the second grayscale image;
performing threshold segmentation processing on the target gradient image according to a first gray threshold, and determining a first pixel point set whose gray values are smaller than the first gray threshold in the target gradient image;
performing threshold segmentation processing on the second grayscale image according to a target threshold interval, and determining a second pixel point set whose gray values lie in the target threshold interval in the second grayscale image;
and determining a target ground image area according to the first pixel point set and the second pixel point set.
Optionally, the determining a target ground image area according to the first pixel point set and the second pixel point set includes:
determining a target contour according to the first pixel point set and the second pixel point set;
and if the area of the target contour is larger than a preset area threshold, determining a target ground image area according to the target contour.
Optionally, the determining a target ground image area according to the target contour includes:
determining a center of gravity of the target contour;
determining an optimal seed point according to the center of gravity;
and processing the second grayscale image through a flood fill algorithm according to the optimal seed point to obtain a target ground image area.
A second aspect of an embodiment of the present application provides a ground detection apparatus, including:
a first point cloud data acquisition unit, configured to acquire first point cloud data corresponding to a preset detection area;
a first grayscale image determining unit, configured to normalize the height information of the first point cloud data to an image space to obtain a first grayscale image;
a second grayscale image determining unit, configured to filter the first grayscale image to obtain a second grayscale image;
a second point cloud data determining unit, configured to determine second point cloud data according to the second grayscale image;
a target ground image area determining unit, configured to determine a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm;
and a positioning unit, configured to locate the passable ground area according to the target ground image area and the second point cloud data.
A third aspect of embodiments of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, causes the electronic device to implement the steps of the ground detection method according to the first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes an electronic device to implement the steps of the ground detection method according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the ground detection method of any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following advantages. In the embodiments of the present application, after the first point cloud data corresponding to the preset detection area is obtained, the height information of the first point cloud data is normalized to an image space to obtain a first grayscale image. The first grayscale image is then filtered to obtain a second grayscale image. The filtered second point cloud data can be determined from the second grayscale image, and a target ground image area can be determined in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm. The passable ground area can then be located within the preset detection area according to the target ground image area and the second point cloud data. Because the first grayscale image is obtained from the height information of the first point cloud data and the second grayscale image is obtained by further filtering the first grayscale image, the second grayscale image carries more accurate, denoised height information. Second point cloud data with more accurate height information can therefore be determined from the second grayscale image, and the target ground image area can be determined accurately by applying the preset threshold segmentation algorithm to the second grayscale image, so that the passable ground area can be located accurately from the target ground image area and the second point cloud data. In other words, the point cloud data is converted into the image space for filtering and threshold segmentation, and the passable ground area is then located accurately in the point cloud data according to the filtering result and the segmentation result. Since filtering and threshold segmentation in the image space are both accurate and of low algorithmic complexity, the accuracy and efficiency of ground detection can both be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a schematic flow chart illustrating an implementation of a ground detection method according to an embodiment of the present application;
fig. 2 is an exemplary diagram of a ground detection apparatus provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
At present, ground detection is a key link for realizing an autonomous movement function of a robot. However, the existing ground detection method has the defect of low accuracy.
In order to solve this technical problem, embodiments of the present application provide a ground detection method, an apparatus, an electronic device, and a storage medium. The method includes: acquiring first point cloud data corresponding to a preset detection area; normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image; filtering the first grayscale image to obtain a second grayscale image; determining second point cloud data according to the second grayscale image; determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm; and locating the passable ground area according to the target ground image area and the second point cloud data.
Because the first grayscale image is obtained from the height information of the first point cloud data and the second grayscale image is obtained by further filtering it, the second grayscale image carries more accurate height information after denoising. Second point cloud data with more accurate height information can therefore be determined from the second grayscale image, and the target ground image area can be determined accurately by applying the preset threshold segmentation algorithm to the second grayscale image, so that the passable ground area can be located accurately from the target ground image area and the second point cloud data. That is, the point cloud data is converted into the image space for filtering and threshold segmentation, and the passable ground area is then located accurately in the point cloud data according to the filtering and segmentation results; because filtering and threshold segmentation in the image space are accurate and of low algorithmic complexity, both the accuracy and the efficiency of ground detection are improved.
Example one:
Fig. 1 shows a schematic flowchart of the ground detection method provided in an embodiment of the present application. The ground detection method is executed by an electronic device, for example a robot. The ground detection method shown in Fig. 1 is detailed as follows:
in S101, first point cloud data corresponding to a preset detection area is acquired.
In this embodiment of the application, the preset detection area may be an area in the forward direction of the robot whose distance from the robot is smaller than a preset distance while the robot is moving. In one embodiment, while the robot is running, information about the preset detection area is collected by a detection device that is mounted on the robot and tilted downwards, and first point cloud data is generated; the first point cloud data carries three-dimensional information of the preset detection area. The detection device may be a vision device. The preset detection area may be the maximum area that the detection device can detect.
In S102, the height information of the first point cloud data is normalized to an image space, so as to obtain a first grayscale image.
The first point cloud data acquired in step S101, which contains the three-dimensional information of the preset detection area, specifically includes point cloud x-axis data, point cloud y-axis data and point cloud z-axis data. The point cloud z-axis data represents the height information of the preset detection area. Normalizing the point cloud z-axis data to the image space means mapping the z-axis values to corresponding gray values in a grayscale image, which yields the first grayscale image. In the first grayscale image, the gray value of each pixel corresponds to a z-axis coordinate value in the first point cloud data, so the first grayscale image carries the height information of every point of the first point cloud data.
In S103, the first grayscale image is filtered to obtain a second grayscale image.
After the first grayscale image carrying the height information is obtained, it is filtered to obtain a filtered grayscale image, namely the second grayscale image. The filtering may be mean filtering, median filtering, Gaussian filtering or any other processing capable of removing image noise. Because the second grayscale image is the grayscale image with the image noise filtered out, the height information it carries is more accurate.
In S104, second point cloud data is determined according to the second grayscale image.
After the second grayscale image carrying more accurate height information is obtained, the gray values in the second grayscale image can be mapped back to height information through an inverse normalization, yielding filtered z-axis data. The filtered z-axis data is then combined with the x-axis data and y-axis data of the first point cloud data into filtered point cloud data, namely the second point cloud data. The z-axis data of the second point cloud data therefore represents the height accurately.
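For illustration only (this sketch is not part of the original disclosure), the following Python/NumPy code shows one way the gray values of the filtered second grayscale image could be mapped back to height values and recombined with the original x/y coordinates to form the second point cloud data. The interval bounds H_MIN/H_MAX, the function names and the assumption that the image keeps the same resolution as the point cloud grid are all assumptions of this sketch.

```python
import numpy as np

# Assumed parameters for this sketch: the height and grayscale intervals used
# when the first grayscale image was built in S102.
H_MIN, H_MAX = -0.31, 0.5      # metres, preset height interval
G_MIN, G_MAX = 0, 255          # preset grayscale interval

def gray_to_height(gray_img):
    """Inverse normalization: map gray values of the filtered image back to z (height)."""
    g = gray_img.astype(np.float32)
    return H_MIN + (g - G_MIN) / (G_MAX - G_MIN) * (H_MAX - H_MIN)

def rebuild_point_cloud(xy, gray_img):
    """Combine the original x/y coordinates with the filtered z values
    into the 'second point cloud data'.

    xy       : (H, W, 2) x/y coordinates from the first point cloud
    gray_img : (H, W) second grayscale image (after filtering)
    """
    z = gray_to_height(gray_img)
    return np.dstack([xy, z]).reshape(-1, 3)   # (H*W, 3) filtered point cloud
```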
In S105, a target ground image area in the second gray scale image is determined according to the second gray scale image and a preset threshold segmentation algorithm.
In addition, since the gray values of the second grayscale image represent height information, threshold segmentation is performed on the second grayscale image according to a preset threshold segmentation algorithm, and an image area whose gray values satisfy a preset threshold condition (that is, the gray-value condition corresponding to the ground area) can be segmented out as the target ground image area. For example, if the image area whose gray values lie in the range [148, 188] corresponds to a ground area whose height is approximately 0 m in real space, the area with gray values between 148 and 188 can be segmented from the second grayscale image as the target ground image area.
In S106, a passable ground area is located according to the target ground image area and the second point cloud data.
In the embodiment of the application, the passable ground area is a flat passable ground area in the preset detection area.
After the target ground image area of the second grayscale image is determined, the point cloud data corresponding to the target ground image area can be selected from the second point cloud data as the point cloud data of the passable ground area, according to the mapping relation between the image coordinate system of the second grayscale image and the point cloud coordinate system of the second point cloud data. Finally, the robot can make motion decisions according to the point cloud data of the passable ground area and move on the passable ground area in the actual space.
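As an illustration only (not part of the original disclosure), the following sketch shows how a target ground image mask could be used to pick the passable ground points out of the second point cloud data, assuming the point cloud is organized pixel-by-pixel so that image coordinates map one-to-one onto point cloud indices:

```python
import numpy as np

def locate_passable_ground(ground_mask, points):
    """Select the point cloud of the passable ground area.

    ground_mask : (H, W) mask of the target ground image area (non-zero = ground)
    points      : (H, W, 3) second point cloud data organized pixel-by-pixel
    """
    return points[ground_mask.astype(bool)]    # (N, 3) passable ground points
```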
In the embodiment of the application, the first grayscale image is obtained from the height information of the first point cloud data, and the second grayscale image is obtained by further filtering the first grayscale image, so the second grayscale image carries more accurate, denoised height information. Second point cloud data with more accurate height information can therefore be determined from the second grayscale image, and the target ground image area can be determined accurately by applying the preset threshold segmentation algorithm to the second grayscale image, so that the passable ground area can be located accurately from the target ground image area and the second point cloud data. That is, the point cloud data is converted into the image space for filtering and threshold segmentation, and the passable ground area is then located accurately in the point cloud data according to the filtering and segmentation results; because filtering and threshold segmentation in the image space are accurate and of low algorithmic complexity, both the accuracy and the efficiency of ground detection can be improved.
Optionally, the acquiring first point cloud data corresponding to the preset detection area includes:
acquiring a depth image corresponding to the preset detection area;
determining point cloud data in a camera coordinate system according to the depth image;
and determining point cloud data in a robot coordinate system as the first point cloud data according to the point cloud data in the camera coordinate system and a preset coordinate conversion relation.
In this embodiment, the detection device may be a depth camera. To cope with complicated and changing environments, a variety of sensors are usually installed on a robot to improve its perception capability; these may include infrared sensors, ultrasonic sensors, laser radars and the like. Because different sensors work on different principles, their usable scenarios are limited in different ways, and some sensors suit only certain tasks: for example, a single-line laser radar is suitable for obstacle detection and positioning and navigation, while a multi-line laser radar can be used for many tasks, including navigation and positioning, object detection and ground detection. Compared with such sensors, a depth camera has a large detection range, a high spatial detection resolution and a relatively low price, which makes it suitable for detecting the passable ground area for a robot.
The depth camera is mounted on the robot and tilted downwards so that the preset detection area includes a ground area. The preset detection area is photographed by the depth camera to obtain the depth image corresponding to the preset detection area.
After the depth image is obtained, dimension-reduction processing (for example, down-sampling) may be performed on it to reduce the amount of data in subsequent processing and improve efficiency. The depth image data is then converted into point cloud data in the camera coordinate system of the depth camera according to a preset conversion formula.
After the point cloud data in the camera coordinate system is determined, it is mapped into the robot coordinate system according to the preset coordinate conversion relation between the camera coordinate system and the robot coordinate system, which yields the first point cloud data.
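For illustration only, the following sketch outlines one possible implementation of these steps with a pinhole camera model; the intrinsic parameters, the camera-to-robot transform and the function names are assumptions and would in practice come from camera calibration:

```python
import numpy as np

# Illustrative intrinsics/extrinsics (assumed values, not from the patent).
FX, FY, CX, CY = 460.0, 460.0, 320.0, 240.0    # pinhole intrinsics
T_ROBOT_CAM = np.eye(4)                         # camera-to-robot transform

def depth_to_camera_points(depth, stride=2):
    """Back-project a depth image (metres) into a point cloud in the camera frame.
    'stride' sketches the optional down-sampling mentioned in the text."""
    d = depth[::stride, ::stride]
    v, u = np.mgrid[0:depth.shape[0]:stride, 0:depth.shape[1]:stride]
    x = (u - CX) / FX * d
    y = (v - CY) / FY * d
    return np.dstack([x, y, d]).reshape(-1, 3)

def camera_to_robot(points_cam):
    """Apply the preset coordinate conversion (camera frame to robot frame)."""
    homog = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_ROBOT_CAM @ homog.T).T[:, :3]
```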
In the embodiment of the application, the depth camera collects the depth image, so the first point cloud data corresponding to the preset detection area can be determined efficiently and accurately at a controlled cost; hardware cost is therefore reduced while the efficiency and accuracy of ground detection are improved.
Optionally, normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image includes:
converting the height information of the first point cloud data to a preset height interval to obtain target height data;
and normalizing the target height data to a preset grayscale interval to obtain the first grayscale image.
In the embodiment of the application, the preset height interval is a height range whose difference from the passable ground area is smaller than a preset value determined by the height of a conventional obstacle. For example, assuming that the height of the original flat, obstacle-free ground is 0 m, the preset height interval may be [-0.31, 0.5] m, [-0.32, 0.32] m or [-0.42, 0.22] m, where negative values are below the original flat ground and positive values are above it. Within the preset detection area, height information outside the preset height interval cannot correspond to the passable ground area, so it can be saturated: height values larger than the maximum of the preset height interval are converted to that maximum, and height values smaller than the minimum are converted to that minimum, which yields target height data lying entirely within the preset height interval. For example, for a preset height interval of [-0.31, 0.5] m, a height of 1 m in the first point cloud data is converted to a height of 0.5 m.
After the target height data is obtained, it is normalized, i.e. converted from the original preset height interval to the interval [0, 1]. For example, for target height data in the preset height interval [-0.42, 0.22] m, 0.42 is added to every value, which converts the data into the interval [0, 0.64]; every value in [0, 0.64] is then multiplied by 1.5625 to obtain normalized height data in the interval [0, 1]. The normalized height data is then mapped into the preset grayscale interval [0, 255], so that each height value is accurately represented by a gray value within that interval, which yields the first grayscale image.
In the embodiment of the application, the height information of the first point cloud data is converted into the preset height interval to obtain the target height data, and the target height data is then normalized to the preset grayscale interval, so that a first grayscale image carrying exactly the height information relevant to ground detection is obtained accurately, which improves the efficiency and accuracy of subsequent ground detection.
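A minimal sketch of the clipping and normalization described above is given below; it is not part of the original disclosure, and the interval bounds are merely the example values quoted in the text:

```python
import numpy as np

H_MIN, H_MAX = -0.31, 0.5    # preset height interval in metres (example values)

def height_to_gray(z):
    """Clip z (height) to the preset interval and normalize it to [0, 255]."""
    z_clipped = np.clip(z, H_MIN, H_MAX)                  # target height data
    normalized = (z_clipped - H_MIN) / (H_MAX - H_MIN)    # [0, 1]
    return np.round(normalized * 255).astype(np.uint8)    # first grayscale image
```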
Optionally, the filtering the first grayscale image to obtain a second grayscale image includes:
performing median filtering processing and dimension-reduction cropping processing on the first grayscale image to obtain the second grayscale image.
In this embodiment, the filtering applied to the first grayscale image may specifically be median filtering. Median filtering preserves image boundaries well, so the boundaries between points of different heights in space are retained while scattered, inaccurate data in the image is filtered out, which improves the accuracy of ground detection.
After the median-filtered grayscale image is obtained, dimension-reduction cropping is further performed on it to obtain a grayscale image with a smaller amount of data as the second grayscale image, which further reduces the computational complexity and improves ground detection efficiency. Specifically, the dimension-reduction cropping includes a dimension-reduction (down-sampling) step and a cropping step.
As for the dimension reduction: because the image has already been filtered, a relatively aggressive down-sampling does not affect the final ground detection result, so this step can sample more sparsely than the first dimension reduction applied to the depth image, reducing the amount of data as much as possible and improving ground detection efficiency.
As for the cropping: the nearby region that matters for the robot's motion can be determined from the preset detection area, the part of the grayscale image corresponding to that region is kept, and the regions outside it are cropped away, which gives the second grayscale image. For example, some scenes do not require detection over the whole camera field of view but only over part of it; when the robot moves at a speed of 1 metre per second, for instance, only data within 10 metres needs to be detected, so only the part of the grayscale image within 10 metres is kept by cropping to obtain the second grayscale image. Cropping ensures that the ground detection result still meets the robot's motion requirements while saving processor resources, speeding up the computation and improving ground detection efficiency.
In the embodiment of the application, median filtering and dimension-reduction cropping of the first grayscale image accurately retain the height information needed for ground detection while producing a second grayscale image with a reduced amount of data, which ensures the accuracy and efficiency of subsequent ground detection.
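The following OpenCV sketch illustrates one way the median filtering and dimension-reduction cropping could be realized; the kernel size, stride and cropping parameter are assumptions for illustration only:

```python
import cv2

def filter_and_crop(gray, ksize=5, stride=2, keep_rows=None):
    """Median-filter the first grayscale image, then down-sample and crop it.

    ksize     : median filter aperture (odd)
    stride    : down-sampling step
    keep_rows : number of image rows to keep (e.g. the rows covering the
                nearest 10 m in front of the robot); None keeps everything
    """
    filtered = cv2.medianBlur(gray, ksize)        # preserves height boundaries
    reduced = filtered[::stride, ::stride]        # dimension reduction
    if keep_rows is not None:
        reduced = reduced[:keep_rows, :]          # crop to the region of interest
    return reduced                                # second grayscale image
```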
Optionally, the determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm includes:
performing gradient calculation on the second grayscale image, and determining a target gradient image corresponding to the second grayscale image;
performing threshold segmentation processing on the target gradient image according to a first gray threshold, and determining a first pixel point set whose gray values are smaller than the first gray threshold in the target gradient image;
performing threshold segmentation processing on the second grayscale image according to a target threshold interval, and determining a second pixel point set whose gray values lie in the target threshold interval in the second grayscale image;
and determining a target ground image area according to the first pixel point set and the second pixel point set.
In the embodiment of the application, after the second grayscale image is obtained, gradient calculation may be performed on it, for example with the Scharr or Sobel operator, and the target gradient image corresponding to the second grayscale image is determined. In one embodiment, a first-order gradient map may be computed from the second grayscale image, and gradient calculation is then applied to the first-order gradient map to obtain a second-order target gradient image. In the target gradient image, the gray value of each pixel represents the gradient of the original second grayscale image at that position: a gradient of 0 means that the corresponding position in the second grayscale image belongs to a planar area, and the larger the gradient, the more uneven that position is. A planar area may be any area with a relatively flat surface, such as a horizontal plane, a gentle slope or a steep slope.
After the target gradient image is obtained, threshold segmentation is performed on it according to a preset first gray threshold and the gray values of its pixels, and all pixels of the target gradient image whose gray values are smaller than the first gray threshold are determined; the set formed by these pixels is the first pixel point set. The first gray threshold is the gray value, in the gradient map, that corresponds to the maximum acceptable unevenness of the passable ground area, determined from the actual height fluctuation of the ground. After this threshold segmentation, each pixel in the first pixel point set corresponds to a position on the actual ground whose height fluctuation is smaller than the preset range. For example, during threshold segmentation the gray value of every pixel of the target gradient image smaller than the first gray threshold may be set to 255 and the gray value of every pixel greater than or equal to the first gray threshold set to 0; the set formed by the pixels with gray value 255 in the target gradient image is then the first pixel point set.
In addition, threshold segmentation is performed on the second grayscale image according to a target threshold interval and the gray values of its pixels, and the set formed by all pixels of the second grayscale image whose gray values lie in the target threshold interval is determined as the second pixel point set. The target threshold interval is the gray-value interval corresponding to the actual passable ground height range, determined from the mapping relation between height information and gray values. In the second grayscale image, the second pixel point set, whose gray values lie in the target threshold interval, corresponds to positions whose height in the actual space is close to the ground height of 0. For example, during threshold segmentation the gray value of every pixel of the second grayscale image whose gray value lies in the target threshold interval may be set to 255 and the gray value of every pixel outside the interval set to 0; the set formed by the pixels with gray value 255 in the second grayscale image is then the second pixel point set.
After the first pixel point set and the second pixel point set are determined, their intersection is taken in the second grayscale image to obtain a third pixel point set. The image area determined by the third pixel point set is the target ground image area. The target ground image area corresponds to the part of the actual ground whose height fluctuation is smaller than the preset fluctuation and whose height is close to the ground height of 0.
In the embodiment of the application, the first pixel point set, with small height fluctuation, is obtained by threshold segmentation of the target gradient image, and the second pixel point set, with height close to the ground height of 0, is obtained by threshold segmentation of the second grayscale image. The image area corresponding to ground that is both relatively flat and near the ground height of 0 can therefore be determined as the target ground image area from the two pixel point sets, which improves the accuracy of ground detection.
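For illustration only, the sketch below combines a first-order Scharr gradient with the two threshold segmentations described above; the three threshold values are assumed example values (the gray interval [148, 188] echoes the earlier example), and the optional second-order gradient step is omitted for brevity:

```python
import cv2
import numpy as np

def ground_mask(gray, grad_thresh=8, ground_lo=148, ground_hi=188):
    """Combine a 'flat enough' mask from the gradient image with a
    'near ground height' mask from the second grayscale image."""
    gx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))        # target gradient image

    flat = (grad < grad_thresh).astype(np.uint8) * 255        # first pixel point set
    near_ground = cv2.inRange(gray, ground_lo, ground_hi)     # second pixel point set
    return cv2.bitwise_and(flat, near_ground)                 # intersection of the two sets
```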
Optionally, the determining a target ground image area according to the first pixel point set and the second pixel point set includes:
determining a target contour according to the first pixel point set and the second pixel point set;
and if the area of the target contour is larger than a preset area threshold, determining a target ground image area according to the target contour.
In the embodiment of the application, after the first pixel point set and the second pixel point set are obtained, their intersection is taken to obtain a third pixel point set. Erosion and dilation are then applied to the image area formed by the third pixel point set to obtain an image to be processed, and contour detection is performed on the image to be processed to obtain the target contour.
It is then judged whether the area of the target contour is larger than a preset area threshold. If so, the target ground image area is determined from the target contour; it may be the area enclosed by the target contour itself, or an image area comprising the target contour and a preset surrounding region. If not, the current flat area is too small for the robot to pass through, and the area where the target contour is located is determined to be an image area corresponding to a non-ground area.
In the embodiment of the application, the target contour is determined from the first pixel point set and the second pixel point set, and the step of determining the target ground image area is carried out only when the area of the target contour is larger than the preset area threshold, so that the image area corresponding to the passable ground area can be screened out accurately.
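A possible (illustrative, non-authoritative) realization of the erosion/dilation, contour detection and area check is sketched below; the kernel size and minimum area are assumed values, and the sketch assumes OpenCV 4.x:

```python
import cv2
import numpy as np

def candidate_ground_contour(mask, min_area=2000):
    """Clean the intersection mask, find its contours and keep only those
    whose area exceeds a preset area threshold."""
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.dilate(cv2.erode(mask, kernel), kernel)       # erosion then dilation
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    large = [c for c in contours if cv2.contourArea(c) > min_area]
    return max(large, key=cv2.contourArea) if large else None   # target contour
```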
Optionally, the determining a target ground image area according to the target contour includes:
determining a center of gravity of the target contour;
determining an optimal seed point according to the center of gravity;
and processing the second grayscale image through a flood fill algorithm according to the optimal seed point and a preset fill threshold to obtain a target ground image area.
In the embodiment of the application, for a target contour with an area larger than a preset area threshold, the center of gravity of the target contour can be determined through a preset image center of gravity algorithm.
Then, whether the center of gravity lies inside the target contour is judged from its coordinates. If it does, the center of gravity of the target contour is taken as the optimal seed point. If it does not, a pixel of the second grayscale image that lies in the same row or column as the center of gravity, is inside the target contour and is closest to the center of gravity is determined as the optimal seed point.
After the optimal seed point is determined, it is used as the seed position of a flood fill algorithm on the second grayscale image; combined with a preset fill threshold, the flood fill determines a larger image area around the optimal seed point, so that an image area that is expanded from the target contour and still satisfies the conditions is obtained as the target ground image area.
In the embodiment of the application, by accurately determining the optimal seed point and using the flood fill algorithm, an expanded area based on the target contour can be determined as the target ground image area; the image area of the ground retained by the strict threshold segmentation is thus grown into the full target ground image area, and the corresponding passable ground area can be located accurately from it.
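The sketch below illustrates the seed selection and flood fill described above using OpenCV; the fill threshold, the simplified fallback seed search and the function names are assumptions of this illustration:

```python
import cv2
import numpy as np

def flood_fill_ground(gray, contour, fill_diff=4):
    """From the centre of gravity of the target contour, flood-fill the second
    grayscale image to grow the target ground image area."""
    m = cv2.moments(contour)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])   # centre of gravity
    if cv2.pointPolygonTest(contour, (float(cx), float(cy)), False) < 0:
        # Centre of gravity lies outside the contour: fall back to the contour
        # point closest to it (a simplified version of the seed search in the text).
        pts = contour.reshape(-1, 2)
        cx, cy = min(pts, key=lambda p: abs(p[0] - cx) + abs(p[1] - cy))
    mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    cv2.floodFill(gray.copy(), mask, (int(cx), int(cy)), 255,
                  loDiff=fill_diff, upDiff=fill_diff,
                  flags=cv2.FLOODFILL_MASK_ONLY | 8)
    return mask[1:-1, 1:-1]                                       # target ground image area
```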
In some embodiments, after the target ground image area is determined, the image area of the second grayscale image other than the target ground image area is determined as the non-ground image area, which corresponds to the non-ground region in the actual space. The non-ground region may correspond to obstacles, cliffs, steep slopes, steps and the like.
According to the non-ground image area and the second point cloud data, the point cloud data corresponding to the non-ground region in the actual space can be selected from the second point cloud data. The robot then makes a corresponding motion decision based on this point cloud data, for example by going around the non-ground region while passing.
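As a trivial illustrative complement to the earlier ground-selection sketch (not part of the original disclosure), the non-ground points can be selected by inverting the ground mask:

```python
import numpy as np

def non_ground_points(ground_area, points):
    """Select point cloud data for the non-ground image area (the complement of
    the target ground image area), e.g. obstacles, steps or cliffs.

    ground_area : (H, W) mask of the target ground image area (non-zero = ground)
    points      : (H, W, 3) second point cloud data organized pixel-by-pixel
    """
    return points[ground_area == 0]
```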
With the ground detection method described above, the passable ground area and the non-ground area can be identified accurately while keeping hardware cost under control, which improves the robot's navigation and path-planning capability, increases its adaptability to different scenes and allows it to move more intelligently and safely.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
fig. 2 shows a schematic structural diagram of a ground detection apparatus provided in an embodiment of the present application, and for convenience of description, only parts related to the embodiment of the present application are shown:
this ground detection device includes: a first point cloud data acquisition unit 21, a first gray scale image determination unit 22, a second gray scale image determination unit 23, a second point cloud data determination unit 24, a target ground image area determination unit 25, and a positioning unit 26. Wherein:
the first point cloud data acquiring unit 21 is configured to acquire first point cloud data corresponding to a preset detection area.
The first grayscale image determining unit 22 is configured to normalize the height information of the first point cloud data to an image space, and obtain a first grayscale image.
The second grayscale image determining unit 23 is configured to perform filtering processing on the first grayscale image to obtain a second grayscale image.
And a second point cloud data determining unit 24, configured to determine second point cloud data according to the second grayscale image.
And a target ground image area determining unit 25, configured to determine a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm.
A positioning unit 26, configured to locate a passable ground area according to the target ground image area and the second point cloud data.
Optionally, the first point cloud data obtaining unit 21 is specifically configured to acquire a depth image corresponding to the preset detection area; determine point cloud data in a camera coordinate system according to the depth image; and determine point cloud data in a robot coordinate system as the first point cloud data according to the point cloud data in the camera coordinate system and a preset coordinate conversion relation.
Optionally, the first grayscale image determining unit 22 is specifically configured to convert the height information of the first point cloud data to a preset height interval to obtain target height data, and normalize the target height data to a preset grayscale interval to obtain the first grayscale image.
Optionally, the second grayscale image determining unit 23 is specifically configured to perform median filtering processing and dimension-reduction cropping processing on the first grayscale image to obtain the second grayscale image.
Optionally, the target ground image area determination unit 25 includes:
the target gradient image determining module, configured to perform gradient calculation on the second grayscale image and determine a target gradient image corresponding to the second grayscale image;
the first pixel point set determining module, configured to perform threshold segmentation processing on the target gradient image according to a first gray threshold and determine a first pixel point set whose gray values are smaller than the first gray threshold in the target gradient image;
the second pixel point set determining module, configured to perform threshold segmentation processing on the second grayscale image according to a target threshold interval and determine a second pixel point set whose gray values lie in the target threshold interval in the second grayscale image;
and the target ground image area determining module, configured to determine a target ground image area according to the first pixel point set and the second pixel point set.
Optionally, the target ground image area determining module is specifically configured to: determine a target contour according to the first pixel point set and the second pixel point set; and if the area of the target contour is larger than a preset area threshold, determine a target ground image area according to the target contour.
Optionally, in the target ground image area determining module, the determining a target ground image area according to the target contour includes: determining a center of gravity of the target contour; determining an optimal seed point according to the center of gravity; and processing the second grayscale image through a flood fill algorithm according to the optimal seed point to obtain a target ground image area.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example three:
fig. 3 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic apparatus 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32, such as a ground detection program, stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, implements the steps in the various embodiments of the ground detection method described above, such as the steps S101 to S106 shown in fig. 1. Alternatively, the processor 30 executes the computer program 32 to implement the functions of the modules/units in the above-mentioned device embodiments, for example, the functions of the first point cloud data obtaining unit 21 to the positioning unit 26 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 32 in the electronic device 3.
The electronic device 3 may be a computing device such as a robot, a desktop computer, a notebook, a palm computer, and a cloud server. The electronic device may include, but is not limited to, a processor 30, a memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the electronic device 3, and does not constitute a limitation of the electronic device 3, and may include more or less components than those shown, or combine certain components, or different components, for example, the electronic device may also include input output devices, network access devices, buses, etc.
The Processor 30 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the electronic device 3, such as a hard disk or a memory of the electronic device 3. The memory 31 may also be an external storage device of the electronic device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the electronic device 3. The memory 31 is used for storing the computer program and other programs and data required by the electronic device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer-readable medium may be added to or removed from as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the protection scope of the present application.

Claims (10)

1. A ground detection method, comprising:
acquiring first point cloud data corresponding to a preset detection area;
normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image;
filtering the first grayscale image to obtain a second grayscale image;
determining second point cloud data according to the second grayscale image;
determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm;
and positioning a passable ground area according to the target ground image area and the second point cloud data.
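Read as an image-processing pipeline, claim 1 rasterizes point-cloud heights into a grayscale image, filters it, segments a flat, floor-level region by thresholding, and maps that region back to 3D. The following sketch, which is illustrative only and not part of the claims, walks through that flow in Python with NumPy and OpenCV on a synthetic point cloud; every numeric value (height interval, gradient threshold, grayscale interval, cell size) is an assumed placeholder, not a value taken from the application.

```python
import numpy as np
import cv2

# Synthetic "first point cloud" rasterized onto a grid over the detection area
# (robot frame, z = height above the floor).
rng = np.random.default_rng(0)
H, W = 120, 160
z = rng.normal(0.0, 0.005, (H, W))            # near-flat floor plus sensor noise
z[30:60, 40:80] += 0.15                       # a raised obstacle

# Normalize height information to image space -> first grayscale image.
z_min, z_max = -0.05, 0.30                    # assumed "preset height interval" (metres)
gray1 = ((np.clip(z, z_min, z_max) - z_min) / (z_max - z_min) * 255).astype(np.uint8)

# Filtering -> second grayscale image.
gray2 = cv2.medianBlur(gray1, 5)

# Threshold segmentation -> candidate ground pixels (flat AND near floor height).
gx = cv2.Sobel(gray2, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray2, cv2.CV_32F, 0, 1, ksize=3)
flat = cv2.magnitude(gx, gy) < 40             # assumed gradient threshold
near_floor = cv2.inRange(gray2, 20, 60) > 0   # assumed grayscale interval
ground_mask = flat & near_floor

# Map the ground pixels back to 3D (playing the role of the second point cloud)
# and treat them as the passable ground area.
ys, xs = np.nonzero(ground_mask)
cell = 0.02                                   # assumed grid resolution (metres per pixel)
passable = np.column_stack([xs * cell, ys * cell, z[ys, xs]])
print("passable ground points:", len(passable))
```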
2. The ground detection method according to claim 1, wherein the obtaining of the first point cloud data corresponding to the preset detection area includes:
acquiring a depth image corresponding to the preset detection area;
determining point cloud data in a camera coordinate system according to the depth image;
and determining point cloud data in a robot coordinate system as the first point cloud data according to the point cloud data in the camera coordinate system and a preset coordinate transformation relationship.
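Claim 2 derives the first point cloud from a depth image and then moves it from the camera frame into the robot frame. One common realization, assumed here purely for illustration, uses a pinhole back-projection and a 4x4 homogeneous extrinsic matrix; neither the camera model nor the numeric values below come from the application.

```python
import numpy as np

def depth_to_camera_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a camera-frame point cloud (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                     # drop pixels with no depth reading

def camera_to_robot(points_cam, T_robot_cam):
    """Apply the preset coordinate transformation (a 4x4 homogeneous extrinsic matrix)."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homo @ T_robot_cam.T)[:, :3]

# Placeholder intrinsics and extrinsics: not calibration values from the application.
depth = np.full((480, 640), 1.2, dtype=np.float32)
T_robot_cam = np.eye(4)
T_robot_cam[:3, 3] = [0.10, 0.0, 0.45]            # e.g. camera mounted 0.45 m above the robot origin
first_point_cloud = camera_to_robot(
    depth_to_camera_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0), T_robot_cam)
print(first_point_cloud.shape)                    # (307200, 3)
```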
3. The ground detection method of claim 1, wherein the normalizing the height information of the first point cloud data to an image space to obtain a first grayscale image comprises:
converting the height information of the first point cloud data to a preset height interval to obtain target height data;
and normalizing the target height data to a preset grayscale interval to obtain the first grayscale image.
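The two-stage mapping of claim 3 (clamp heights to a preset interval, then rescale linearly into a preset grayscale interval) can be written in a few lines; the interval endpoints used below are assumptions for illustration.

```python
import numpy as np

def height_to_gray(z, h_min=-0.05, h_max=0.30, g_min=0, g_max=255):
    """Clamp heights to the preset interval, then map them linearly to the gray interval."""
    target_height = np.clip(z, h_min, h_max)                # target height data
    gray = (target_height - h_min) * (g_max - g_min) / (h_max - h_min) + g_min
    return np.rint(gray).astype(np.uint8)                   # first grayscale image

heights = np.array([[-0.20, 0.00, 0.10, 0.50]])             # metres above the floor
print(height_to_gray(heights))                              # -> [[  0  36 109 255]]
```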
4. The ground detection method of claim 1, wherein the filtering the first grayscale image to obtain a second grayscale image comprises:
and performing median filtering and dimension-reduction cutting on the first grayscale image to obtain the second grayscale image.
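Claim 4 pairs median filtering with "dimension-reduction cutting". A plausible reading, assumed here, is a median blur followed by cropping the image down to the rows and columns covering the detection area; the crop bounds and kernel size below are illustrative only.

```python
import numpy as np
import cv2

def filter_and_crop(gray1, rows=(10, 110), cols=(20, 140), ksize=5):
    """Median-filter the first grayscale image, then cut it down to the detection window."""
    denoised = cv2.medianBlur(gray1, ksize)       # suppresses isolated depth-noise pixels
    return denoised[rows[0]:rows[1], cols[0]:cols[1]]

gray1 = (np.random.default_rng(1).random((120, 160)) * 255).astype(np.uint8)
gray2 = filter_and_crop(gray1)
print(gray2.shape)                                # (100, 120)
```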
5. The ground detection method according to claim 1, wherein the determining a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm comprises:
performing gradient calculation on the second grayscale image, and determining a target gradient image corresponding to the second grayscale image;
performing threshold segmentation on the target gradient image according to a first gray threshold, and determining a first pixel point set in the target gradient image whose gray values are smaller than the first gray threshold;
performing threshold segmentation on the second grayscale image according to a target threshold interval, and determining a second pixel point set in the second grayscale image whose gray values fall within the target threshold interval;
and determining a target ground image area according to the first pixel point set and the second pixel point set.
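Claim 5 intersects two pixel sets: pixels whose gradient (local height change) is below a first gray threshold, and pixels whose gray value lies inside a target interval. A sketch with assumed threshold values, using Sobel derivatives to build the gradient image, follows.

```python
import numpy as np
import cv2

def segment_ground_candidates(gray2, grad_thresh=40.0, gray_lo=20, gray_hi=60):
    """Intersect the two pixel sets of claim 5 into one binary ground-candidate mask."""
    # Target gradient image: magnitude of the Sobel derivatives of the height-coded image.
    gx = cv2.Sobel(gray2, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray2, cv2.CV_32F, 0, 1, ksize=3)
    gradient = cv2.magnitude(gx, gy)

    first_set = gradient < grad_thresh                      # small height change -> flat
    second_set = cv2.inRange(gray2, gray_lo, gray_hi) > 0   # gray value inside the target interval
    return (first_set & second_set).astype(np.uint8) * 255

gray2 = np.full((100, 120), 40, dtype=np.uint8)             # uniform floor height
gray2[20:50, 30:70] = 200                                   # a raised obstacle
mask = segment_ground_candidates(gray2)
print(int(mask.sum() // 255), "candidate ground pixels")
```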
6. The ground detection method of claim 5, wherein the determining a target ground image area according to the first pixel point set and the second pixel point set comprises:
determining a target contour according to the first pixel point set and the second pixel point set;
and if the area of the target contour is larger than a preset area threshold value, determining a target ground image area according to the target contour.
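Claim 6 accepts a candidate contour as ground only if its area exceeds a preset threshold. With OpenCV this is typically findContours plus contourArea, as sketched below; the minimum area is an assumed value, and the OpenCV 4 return signature of findContours is assumed.

```python
import numpy as np
import cv2

def accept_ground_contour(ground_mask, min_area=500.0):
    """Return the largest candidate contour if its area exceeds the preset area threshold."""
    contours, _ = cv2.findContours(ground_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)
    return target if cv2.contourArea(target) > min_area else None

mask = np.zeros((100, 120), dtype=np.uint8)
cv2.rectangle(mask, (10, 10), (90, 80), 255, thickness=-1)   # one solid candidate region
contour = accept_ground_contour(mask)
print("accepted" if contour is not None else "rejected")     # accepted (area well above 500 px)
```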
7. The ground detection method of claim 6, wherein the determining a target ground image area according to the target contour comprises:
determining a center of gravity of the target contour;
determining an optimal seed point according to the center of gravity;
and processing the second grayscale image through a flood fill algorithm according to the optimal seed point to obtain the target ground image area.
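Claim 7 seeds a flood fill at (or near) the contour's center of gravity. The sketch below takes the centroid from image moments and uses it directly as the seed (the claim's "optimal seed point" selection is simplified away), then runs cv2.floodFill with an assumed gray-value tolerance.

```python
import numpy as np
import cv2

def ground_region_from_contour(gray2, contour, tol=10):
    """Flood-fill the second grayscale image from the contour's center of gravity."""
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    seed = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))   # (x, y) center of gravity

    h, w = gray2.shape
    mask = np.zeros((h + 2, w + 2), dtype=np.uint8)               # floodFill needs a padded mask
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)              # 4-connected, write 255 into mask
    cv2.floodFill(gray2.copy(), mask, seed, 0, loDiff=tol, upDiff=tol, flags=flags)
    return mask[1:-1, 1:-1]                                       # target ground image area

gray2 = np.full((100, 120), 40, dtype=np.uint8)
gray2[20:50, 30:70] = 200                                          # obstacle excluded from the fill
contour = np.array([[[5, 5]], [[115, 5]], [[115, 95]], [[5, 95]]], dtype=np.int32)
region = ground_region_from_contour(gray2, contour)
print(int(region.sum() // 255), "pixels in the ground region")
```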
8. A ground detection device, comprising:
a first point cloud data acquisition unit, configured to acquire first point cloud data corresponding to a preset detection area;
a first grayscale image determining unit, configured to normalize height information of the first point cloud data to an image space to obtain a first grayscale image;
a second grayscale image determining unit, configured to filter the first grayscale image to obtain a second grayscale image;
a second point cloud data determining unit, configured to determine second point cloud data according to the second grayscale image;
a target ground image area determining unit, configured to determine a target ground image area in the second grayscale image according to the second grayscale image and a preset threshold segmentation algorithm;
and a positioning unit, configured to position a passable ground area according to the target ground image area and the second point cloud data.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, causes the electronic device to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, causes an electronic device to carry out the steps of the method according to any one of claims 1 to 7.
CN202111424074.4A 2021-11-26 2021-11-26 Ground detection method, ground detection device, electronic equipment and storage medium Pending CN114219770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111424074.4A CN114219770A (en) 2021-11-26 2021-11-26 Ground detection method, ground detection device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111424074.4A CN114219770A (en) 2021-11-26 2021-11-26 Ground detection method, ground detection device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114219770A (en) 2022-03-22

Family

ID=80698527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111424074.4A Pending CN114219770A (en) 2021-11-26 2021-11-26 Ground detection method, ground detection device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114219770A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638818A (en) * 2022-03-29 2022-06-17 广东利元亨智能装备股份有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114638818B (en) * 2022-03-29 2023-11-03 广东利元亨智能装备股份有限公司 Image processing method, device, electronic equipment and storage medium
CN115508807A (en) * 2022-11-16 2022-12-23 苏州一径科技有限公司 Point cloud data processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10964054B2 (en) Method and device for positioning
CN110866449A (en) Method and device for identifying target object in road
CN111582054B (en) Point cloud data processing method and device and obstacle detection method and device
CN112513679B (en) Target identification method and device
CN114219770A (en) Ground detection method, ground detection device, electronic equipment and storage medium
CN112733812A (en) Three-dimensional lane line detection method, device and storage medium
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN111142514B (en) Robot and obstacle avoidance method and device thereof
CN115240149A (en) Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
CN114445404A (en) Automatic structural vibration response identification method and system based on sub-pixel edge detection
CN112683228A (en) Monocular camera ranging method and device
CN113112491A (en) Cliff detection method and device, robot and storage medium
CN113970734A (en) Method, device and equipment for removing snowing noise of roadside multiline laser radar
CN114170596A (en) Posture recognition method and device, electronic equipment, engineering machinery and storage medium
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN107767366B (en) A kind of transmission line of electricity approximating method and device
CN117518189A (en) Laser radar-based camera processing method and device, electronic equipment and medium
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN109839645B (en) Speed detection method, system, electronic device and computer readable medium
CN115937817A (en) Target detection method and system and excavator
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium
CN114859938A (en) Robot, dynamic obstacle state estimation method and device and computer equipment
CN116679315A (en) Method and device for detecting operation terrain and engineering equipment for detecting operation terrain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination