CN112991550A - Obstacle position detection method and device based on pseudo-point cloud and electronic equipment - Google Patents


Info

Publication number
CN112991550A
CN112991550A
Authority
CN
China
Prior art keywords
dimensional structure
point cloud
pseudo
information
target obstacle
Prior art date
Legal status
Granted
Application number
CN202110353738.6A
Other languages
Chinese (zh)
Other versions
CN112991550B (en)
Inventor
苏英菲
Current Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110353738.6A priority Critical patent/CN112991550B/en
Publication of CN112991550A publication Critical patent/CN112991550A/en
Application granted granted Critical
Publication of CN112991550B publication Critical patent/CN112991550B/en

Classifications

    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06F 18/23 Clustering techniques
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10028 Range image; depth image; 3D point clouds


Abstract

The invention provides a pseudo-point-cloud-based obstacle position detection method and device, and an electronic device, relating to the technical field of vehicle driving. The method comprises: acquiring pseudo point cloud information for a target obstacle, together with a sparse radar point cloud from a low-beam radar, both for the same obstacle; clustering the sparse radar point cloud to obtain a first three-dimensional structure and its centroid position, the first three-dimensional structure being the minimum three-dimensional structure that can enclose the clustering result; determining a second three-dimensional structure from the pseudo point cloud information by a three-dimensional bounding-box detection method, the second three-dimensional structure being the minimum three-dimensional structure that can enclose the target obstacle; and matching the first and second three-dimensional structures. If the matching result is greater than a preset volume threshold, the position of the target obstacle is determined from the centroid position of the first three-dimensional structure. By combining the pseudo point cloud with the low-beam radar point cloud, the accuracy of pseudo-point-cloud position detection is improved, and more accurate obstacle position information is obtained at lower cost.

Description

Obstacle position detection method and device based on pseudo-point cloud and electronic equipment
Technical Field
The invention relates to the technical field of vehicle driving, in particular to a method and a device for detecting an obstacle position based on a pseudo-point cloud and electronic equipment.
Background
With the development of vehicle technology, accurate detection of obstacle positions has become critically important, as it directly affects the safety and reliability of automated driving.
At present, lidar is generally used to detect obstacle positions accurately, but its relatively high cost hinders wide adoption in the field of vehicle driving.
Disclosure of Invention
In view of the above, the present invention is directed to a pseudo-point-cloud-based obstacle position detection method and device, and an electronic device, in which the pseudo point cloud and a low-beam radar point cloud are combined to improve the accuracy of pseudo-point-cloud position detection, so that accurate obstacle position information is obtained at low cost.
In a first aspect, an embodiment provides a method for detecting an obstacle position based on a pseudo point cloud, including:
acquiring pseudo point cloud information for the same target obstacle, together with a sparse radar point cloud from a low-beam radar;
clustering the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, wherein the first three-dimensional structure is a minimum three-dimensional structure capable of including a clustering result;
determining a second three-dimensional structure from the pseudo point cloud information through a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
and matching the first three-dimensional structure with the second three-dimensional structure, and determining the position of the target obstacle according to the position of the center of mass of the first three-dimensional structure if the matching result is greater than a preset volume threshold value.
In an alternative embodiment, the method further comprises:
and if the matching result is smaller than or equal to a preset volume threshold, determining the position of the target obstacle according to the centroid position of the second three-dimensional structure.
In an alternative embodiment, the method further comprises:
and if the matching result is smaller than or equal to the preset volume threshold, re-executing the step of acquiring the pseudo point cloud information of the same target obstacle and the sparse radar point cloud from the low-beam radar.
In an alternative embodiment, the method further comprises:
determining width information and height information of a second three-dimensional structure from the pseudo-point cloud information by a three-dimensional bounding box detection method;
and determining the contour information of the target obstacle according to the width information and the height information.
In an optional embodiment, the step of performing a clustering operation on the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure includes:
clustering the sparse radar point cloud to obtain a clustering result;
adding a bounding box to the clustering result to obtain a first three-dimensional structure;
and calculating the centroid position of the first three-dimensional structure according to the first three-dimensional structure.
In an alternative embodiment, the step of obtaining pseudo point cloud information for the same target obstacle includes:
acquiring an image to be detected of a target obstacle from a binocular camera;
calculating the depth information of each pixel point in the image to be detected based on binocular parallax of a binocular distance measuring method;
and converting the depth information into pseudo point cloud information according to a camera conversion principle and a geometric relation.
In an alternative embodiment, the step of matching the first three-dimensional structure with the second three-dimensional structure comprises:
computing a three-dimensional intersection-over-union of the volumes of the first three-dimensional structure and the second three-dimensional structure.
In a second aspect, an embodiment provides an obstacle position detection apparatus based on a pseudo point cloud, the apparatus including:
an acquisition module, configured to acquire pseudo point cloud information for the same target obstacle and a sparse radar point cloud from a low-beam radar;
the clustering module is used for clustering the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, wherein the first three-dimensional structure is the minimum three-dimensional structure capable of including a clustering result;
a detection module for determining a second three-dimensional structure from the pseudo point cloud information by a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
and the determining module is used for matching the first three-dimensional structure with the second three-dimensional structure, and determining the position of the target obstacle according to the position of the mass center of the first three-dimensional structure if the matching result is greater than a preset volume threshold value.
In a third aspect, an embodiment provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the method described in any one of the foregoing embodiments when executing the computer program.
In a fourth aspect, embodiments provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to carry out the steps of the method of any preceding embodiment.
According to the pseudo-point-cloud-based obstacle position detection method, device, and electronic device, the low-beam radar point cloud is clustered to obtain a first three-dimensional structure, and a second three-dimensional structure is determined from the pseudo point cloud by bounding-box detection. The two structures are then matched; if the matching result exceeds the preset volume threshold, they can be combined, and the position the pseudo point cloud gives for the target obstacle is updated with the centroid of the first three-dimensional structure derived from the low-beam radar point cloud. This improves the detection precision of the pseudo point cloud for the target obstacle, so that accurate obstacle position information is obtained at low cost.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for detecting a position of an obstacle based on a pseudo-point cloud according to an embodiment of the present invention;
fig. 2 is a functional block diagram of an obstacle position detecting apparatus based on pseudo point cloud according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the field of vehicle driving at present, in order to guarantee the safety of vehicle driving, a laser radar is generally adopted to accurately detect the position of an obstacle, but the laser radar is high in cost and is not beneficial to the wide application of vehicle driving.
To save cost, pseudo point cloud information acquired for the obstacle position, similar in form to lidar point cloud coordinates (x, y, z), can be used in place of lidar measurements. However, this approach has a drawback: the pseudo point cloud is less accurate than a lidar point cloud, so the obstacle position cannot be identified precisely, and the safety of the user during vehicle driving cannot be fully ensured.
Based on the above, the obstacle position detection method, device and electronic device based on the pseudo point cloud provided by the embodiment of the invention combine the pseudo point cloud and the low-beam radar point cloud to improve the accuracy of pseudo point cloud position detection, and obtain more accurate obstacle position information under the condition of lower cost.
To facilitate understanding of the embodiments, the pseudo-point-cloud-based obstacle position detection method disclosed herein is first described in detail. The method is applicable to control devices such as a personal computer (PC), a controller, a server, or an intelligent terminal, and can be installed independently or integrated into a vehicle control terminal or roadside infrastructure.
It should be noted that low-beam lidar is relatively inexpensive, but its point cloud is not as dense as that of a high-beam lidar, so it cannot capture the shape information of an obstacle. The embodiments of the invention therefore combine the low-cost low-beam lidar with the pseudo point cloud, improving the detection precision of the pseudo point cloud and detecting the accurate obstacle position at low cost.
As an optional embodiment, the vehicle control terminal achieves accurate detection of the target obstacle position by combining the pseudo point cloud information with the sparse point cloud from the low-beam radar. That is, the control terminal communicates with a binocular camera and a low-beam radar to obtain the pseudo point cloud information and the sparse point cloud information of the target obstacle. The binocular camera and the low-beam radar can be installed on each vehicle, for example at the front or on the roof, to collect obstacles appearing in the traveling direction; or in roadside infrastructure, to collect obstacles that vehicles passing through the current road may encounter. As another optional embodiment, the roadside infrastructure may communicate with vehicles traveling on the road: it acquires the pseudo point cloud information and the sparse point cloud information of a target obstacle in the field of view of a traveling vehicle, determines the accurate position of the obstacle, and sends the determined position to the vehicle so that the vehicle can take the corresponding avoidance action, ensuring driving safety at low cost. A target obstacle is understood to be any obstacle that may affect the safety of vehicles traveling on the road, such as a large stone, an obstructing vehicle traveling on the current road, or a pedestrian walking or standing on it.
Fig. 1 is a flowchart of a method for detecting a position of an obstacle based on a pseudo-point cloud according to an embodiment of the present invention.
As shown in fig. 1, the method comprises the steps of:
step S102, acquiring pseudo point cloud information aiming at the same target obstacle and sparse radar point cloud from a low-beam radar;
the vehicle running on the road may encounter multiple obstacles, the binocular camera and the low-beam radar at the time acquire the same obstacle, and as an optional embodiment, all the obstacles can be acquired one by one according to a preset sequence to determine the position of each obstacle.
Illustratively, the step of acquiring the pseudo point cloud information for the same target obstacle may include acquiring an image to be detected of the target obstacle from a binocular camera; calculating the depth information of each pixel point in the image to be detected based on binocular parallax of a binocular distance measuring method; and converting the depth information into pseudo point cloud information according to a camera conversion principle and a geometric relation.
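The three acquisition steps above (stereo image, per-pixel depth from binocular disparity, conversion to a pseudo point cloud) can be sketched as follows. This is a minimal illustration of standard pinhole back-projection; the function name and the parameters `fx`, `fy`, `cx`, `cy` (focal lengths and principal point, in pixels) and `baseline` (camera separation, in meters) are assumptions, since the patent does not fix an API.

```python
def disparity_to_pseudo_points(disparity, fx, fy, cx, cy, baseline):
    """Back-project a per-pixel disparity map into a pseudo point cloud.

    disparity: 2-D list of disparities in pixels (0 or less = no match).
    Returns a list of (x, y, z) points in the left-camera frame, in meters.
    """
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:                  # skip pixels with no valid stereo match
                continue
            z = fx * baseline / d       # depth from binocular disparity
            x = (u - cx) * z / fx       # pinhole back-projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

For example, a disparity of 2 px with fx = 1000 px and a 0.5 m baseline yields a depth of 250 m, which illustrates why small disparity errors translate into large depth errors at range and why the pseudo point cloud alone is less accurate than lidar.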
As an optional embodiment, in the embodiment of the invention, point cloud information corresponding to a target obstacle in the traveling direction of the current running vehicle is acquired through two devices, namely a binocular camera and a low-beam radar.
Step S104, clustering the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, wherein the first three-dimensional structure is the minimum three-dimensional structure capable of including a clustering result;
here, by performing clustering operation on the sparse radar point cloud, a three-dimensional structure wrapping the clustering result can be generated, and the centroid position of the three-dimensional structure can be determined.
Step S106, determining a second three-dimensional structure from the pseudo-point cloud information through a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
and detecting the pseudo point cloud information by adopting a three-dimensional bounding box, and framing a second three-dimensional structure. The three-dimensional structure refers to a three-dimensional modeling under a three-dimensional space coordinate system, such as a cube, a cuboid, a cylinder and the like. The specific shape of the three-dimensional structure is not limited herein, and depends on the pseudo point cloud information and the clustering result.
And S108, matching the first three-dimensional structure with the second three-dimensional structure, and determining the position of the target obstacle according to the position of the mass center of the first three-dimensional structure if the matching result is greater than a preset volume threshold value.
As an optional embodiment, the step of matching the first and second three-dimensional structures may be performed by computing the Intersection-over-Union (IoU) of their volumes, i.e., intersecting the volume of the first structure with that of the second. The matching result is then the volume of the region where the two structures intersect.
The preset volume threshold may be, for example, one third or one half of the total volume of the first or second three-dimensional structure. For instance, if the intersection of the two volumes amounts to one half of the total volume of the first three-dimensional structure, the pseudo point cloud information acquired by the binocular camera system is considered matched with the sparse point cloud acquired by the low-beam lidar, and the more accurate position of the target obstacle can then be determined from the centroid position of the first three-dimensional structure.
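For axis-aligned boxes, the volumetric matching described above can be computed as below. Representing a box by its min and max corners is an illustrative assumption; the patent's structures need not be axis-aligned.

```python
def box_iou_3d(box_a, box_b):
    """Intersection-over-Union of two axis-aligned 3-D boxes.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    (amin, amax), (bmin, bmax) = box_a, box_b
    inter = 1.0
    for i in range(3):
        overlap = min(amax[i], bmax[i]) - max(amin[i], bmin[i])
        if overlap <= 0:        # disjoint along this axis: no intersection
            return 0.0
        inter *= overlap

    def volume(lo, hi):
        return (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])

    union = volume(amin, amax) + volume(bmin, bmax) - inter
    return inter / union
```

Comparing the raw intersection volume against a fraction of one structure's volume, as the embodiment describes, is an equivalent way of thresholding this ratio.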
In a preferred embodiment for practical application, the low-beam radar point cloud is clustered to obtain the first three-dimensional structure, and the second three-dimensional structure is determined from the pseudo point cloud by bounding-box detection. The two structures are matched; if the matching result exceeds the preset volume threshold, they can be combined, and the position the pseudo point cloud gives for the target obstacle is updated with the centroid of the first three-dimensional structure from the low-beam radar point cloud. This improves the detection precision of the pseudo point cloud for the target obstacle, and more accurate obstacle position information is obtained at low cost.
Since the low-beam radar and the binocular camera acquire the same target obstacle, the matching degree of the first and second three-dimensional structures will in general satisfy the preset volume threshold. If an abnormal condition occurs, however, the matching result may be small, and the two point clouds cannot be combined. In an alternative embodiment, the method further comprises:
step 1.1), if the matching result is smaller than or equal to the preset volume threshold, re-executing the step of acquiring the pseudo point cloud information of the same target obstacle and the sparse radar point cloud from the low-beam radar.
In this case, the low-beam radar or the binocular camera may have failed for some reason, so that the two sensors are not acquiring the same target obstacle.
In an alternative embodiment, the method in the previous embodiment further comprises:
and 2.1) if the matching result is smaller than or equal to a preset volume threshold, determining the position of the target obstacle according to the centroid position of the second three-dimensional structure.
On the basis of the foregoing embodiment, as another optional embodiment, if the matching result is still less than or equal to the preset volume threshold after re-matching, the cause may be low acquisition accuracy of the low-beam radar or the binocular camera itself, or some other abnormality. To keep vehicles on the road traveling normally, the position of the target obstacle is then determined from the centroid of the second three-dimensional structure of the pseudo point cloud, so that the obstacle position can still be determined quickly and safety hazards during travel are avoided.
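The branching across these embodiments (use the radar centroid on a good match, re-acquire once on a poor one, then fall back to the pseudo-point-cloud centroid) might be organized as follows; the threshold value and the single-retry policy are assumptions, not fixed by the patent.

```python
def fuse_obstacle_position(iou, radar_centroid, pseudo_centroid,
                           threshold=0.5, retried=False):
    """Return the obstacle position, or None to request re-acquisition.

    iou: matching result of the first and second three-dimensional structures.
    """
    if iou > threshold:
        return radar_centroid    # good match: trust the sparse-radar centroid
    if not retried:
        return None              # poor match: re-acquire both point clouds once
    return pseudo_centroid       # still poor: fall back to the pseudo point cloud
```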
Since road conditions are complex and changeable and the types and sizes of vehicles on the road differ, the size of the target obstacle is obtained after its position is known, so that the current vehicle can perform finer-grained maneuvers and driving safety is ensured. In an optional embodiment, the method further comprises the following steps:
step 3.1), determining width information and height information of a second three-dimensional structure from the pseudo-point cloud information by a three-dimensional bounding box detection method;
and 3.2) determining the contour information of the target obstacle according to the width information and the height information.
The finer-grained maneuvers of the current vehicle include deciding what speed to adopt, what steering operation to perform, and what safe distance to keep from the target obstacle, based on the current vehicle's position, the position and size of the target obstacle, the speed and size of the current vehicle, and the distance between them.
As an alternative embodiment, the more accurate centroid position determined from the sparse point cloud of the low-beam radar is used to improve the accuracy of pseudo-point-cloud position detection. Step S104 may be implemented by the following steps:
step 4.1), carrying out clustering operation on the sparse radar point cloud to obtain a clustering result;
step 4.2), adding a bounding box to the clustering result to obtain a first three-dimensional structure; so that the first three-dimensional structure can wrap the clustering result point cloud as much as possible.
And 4.3) calculating the centroid position of the first three-dimensional structure according to the first three-dimensional structure.
As an alternative embodiment, in the three-dimensional coordinate system, the centroid position is computed as the per-axis mean of the coordinates of the structure's points: the average of the X, Y, and Z coordinates over all points of the three-dimensional structure gives the coordinates of the centroid.
The embodiment of the invention thus provides a dense point cloud at low cost and addresses the insufficient accuracy of low-cost pseudo point clouds for obstacle position detection.
As shown in fig. 2, an embodiment of the present invention further provides an obstacle position detecting apparatus 200 based on a pseudo point cloud, the apparatus including:
the acquisition module 201 acquires pseudo point cloud information of the same target obstacle and sparse radar point cloud from a low-beam radar;
a clustering module 202, configured to perform clustering operation on the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, where the first three-dimensional structure is a minimum three-dimensional structure that can include a clustering result;
a detection module 203, which determines a second three-dimensional structure from the pseudo point cloud information by a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
the determining module 204 matches the first three-dimensional structure with the second three-dimensional structure, and determines the position of the target obstacle according to the centroid position of the first three-dimensional structure if the matching result is greater than a preset volume threshold.
In a preferred embodiment for practical application, the pseudo point cloud and the low-beam lidar are combined: the first three-dimensional structure corresponding to the low-beam lidar clustering result and the second three-dimensional structure obtained from the pseudo point cloud after bounding-box target detection are compared by intersection-over-union, and according to the matching result, the obstacle position obtained from the pseudo point cloud is updated to the centroid position of the corresponding first three-dimensional structure of the low-beam lidar.
In an optional embodiment, the determining module is further configured to determine the position of the target obstacle according to the position of the center of mass of the second three-dimensional structure if the matching result is smaller than or equal to a preset volume threshold.
In an optional embodiment, the obtaining module is further configured to, if the matching result is less than or equal to the preset volume threshold, re-perform the step of obtaining the pseudo point cloud information for the same target obstacle and the sparse radar point cloud from the low beam radar.
In an optional embodiment, the detection module is further configured to determine width information and height information of the second three-dimensional structure from the pseudo point cloud information by a three-dimensional bounding box detection method; and determining the contour information of the target obstacle according to the width information and the height information.
In an optional embodiment, the clustering module is further configured to perform clustering operation on the sparse radar point cloud to obtain a clustering result; adding a bounding box to the clustering result to obtain a first three-dimensional structure; and calculating the centroid position of the first three-dimensional structure according to the first three-dimensional structure.
In an optional embodiment, the acquisition module is further configured to acquire an image to be detected of the target obstacle from a binocular camera; calculating the depth information of each pixel point in the image to be detected based on binocular parallax of a binocular distance measuring method; and converting the depth information into pseudo point cloud information according to a camera conversion principle and a geometric relation.
In an optional embodiment, the determining module is further configured to perform a three-dimensional intersection-over-union calculation on the volumes of the first three-dimensional structure and the second three-dimensional structure.
Fig. 3 is a schematic hardware architecture diagram of an electronic device 300 according to an embodiment of the present invention. Referring to fig. 3, the electronic device 300 includes: a machine-readable storage medium 301 and a processor 302, and may further include a non-volatile storage medium 303, a communication interface 304, and a bus 305; the machine-readable storage medium 301, the processor 302, the non-volatile storage medium 303, and the communication interface 304 communicate with each other via the bus 305. The processor 302 may implement the pseudo-point-cloud-based obstacle position detection method described in the above embodiments by reading and executing the machine-executable instructions for obstacle position detection stored in the machine-readable storage medium 301.
A machine-readable storage medium as referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk or DVD), or a similar storage medium, or a combination thereof.
The non-volatile medium may be a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk or DVD), or a similar non-volatile storage medium, or a combination thereof.
It can be understood that, for the specific operation method of each functional module in this embodiment, reference may be made to the detailed description of the corresponding step in the foregoing method embodiment, and no repeated description is provided herein.
The computer-readable storage medium provided in the embodiments of the present invention stores a computer program which, when executed, may implement the method for detecting the position of an obstacle based on a pseudo point cloud according to any of the embodiments described above; for specific implementation, reference may be made to the method embodiments, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or equivalently substitute some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as being included therein.

Claims (10)

1. A method for detecting the position of an obstacle based on a pseudo-point cloud is characterized by comprising the following steps:
acquiring pseudo point cloud information aiming at the same target obstacle and sparse radar point cloud from a low-beam radar;
clustering the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, wherein the first three-dimensional structure is a minimum three-dimensional structure capable of including a clustering result;
determining a second three-dimensional structure from the pseudo point cloud information through a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
and matching the first three-dimensional structure with the second three-dimensional structure, and determining the position of the target obstacle according to the position of the center of mass of the first three-dimensional structure if the matching result is greater than a preset volume threshold value.
2. The method of claim 1, further comprising:
and if the matching result is smaller than or equal to a preset volume threshold, determining the position of the target obstacle according to the centroid position of the second three-dimensional structure.
3. The method of claim 2, further comprising:
and if the matching result is smaller than or equal to the preset volume threshold, re-executing the step of acquiring the pseudo point cloud information of the same target obstacle and the sparse radar point cloud from the low-beam radar.
4. The method of claim 1, further comprising:
determining width information and height information of a second three-dimensional structure from the pseudo-point cloud information by a three-dimensional bounding box detection method;
and determining the contour information of the target obstacle according to the width information and the height information.
5. The method of claim 1, wherein clustering the sparse radar point cloud to obtain a first volumetric structure and a centroid location of the first volumetric structure comprises:
clustering the sparse radar point cloud to obtain a clustering result;
adding a bounding box to the clustering result to obtain a first three-dimensional structure;
and calculating the centroid position of the first three-dimensional structure according to the first three-dimensional structure.
6. The method of claim 1, wherein the step of obtaining pseudo-point cloud information for the same target obstacle comprises:
acquiring an image to be detected of a target obstacle from a binocular camera;
calculating the depth information of each pixel point in the image to be detected based on binocular parallax of a binocular distance measuring method;
and converting the depth information into pseudo point cloud information according to a camera conversion principle and a geometric relation.
7. The method of claim 1, wherein the step of matching the first three-dimensional structure with the second three-dimensional structure comprises:
and carrying out a three-dimensional intersection-over-union calculation on the volumes of the first three-dimensional structure and the second three-dimensional structure.
8. An obstacle position detection apparatus based on a pseudo point cloud, the apparatus comprising:
the acquisition module is used for acquiring pseudo point cloud information aiming at the same target obstacle and sparse radar point cloud from a low-beam radar;
the clustering module is used for clustering the sparse radar point cloud to obtain a first three-dimensional structure and a centroid position of the first three-dimensional structure, wherein the first three-dimensional structure is the minimum three-dimensional structure capable of including a clustering result;
a detection module for determining a second three-dimensional structure from the pseudo point cloud information by a three-dimensional bounding box detection method, wherein the second three-dimensional structure is a minimum three-dimensional structure capable of including the target obstacle;
and the determining module is used for matching the first three-dimensional structure with the second three-dimensional structure, and determining the position of the target obstacle according to the position of the mass center of the first three-dimensional structure if the matching result is greater than a preset volume threshold value.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to carry out the steps of the method of any one of claims 1 to 7.
CN202110353738.6A 2021-03-31 2021-03-31 Obstacle position detection method and device based on pseudo point cloud and electronic equipment Active CN112991550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110353738.6A CN112991550B (en) 2021-03-31 2021-03-31 Obstacle position detection method and device based on pseudo point cloud and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110353738.6A CN112991550B (en) 2021-03-31 2021-03-31 Obstacle position detection method and device based on pseudo point cloud and electronic equipment

Publications (2)

Publication Number Publication Date
CN112991550A true CN112991550A (en) 2021-06-18
CN112991550B CN112991550B (en) 2024-06-18

Family

ID=76338795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110353738.6A Active CN112991550B (en) 2021-03-31 2021-03-31 Obstacle position detection method and device based on pseudo point cloud and electronic equipment

Country Status (1)

Country Link
CN (1) CN112991550B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN115308771A (en) * 2022-10-12 2022-11-08 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612728A (en) * 2020-05-25 2020-09-01 北京交通大学 3D point cloud densification method and device based on binocular RGB image
CN111797745A (en) * 2020-06-28 2020-10-20 北京百度网讯科技有限公司 Training and predicting method, device, equipment and medium of object detection model
CN112101222A (en) * 2020-09-16 2020-12-18 中国海洋大学 Sea surface three-dimensional target detection method based on unmanned ship multi-mode sensor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612728A (en) * 2020-05-25 2020-09-01 北京交通大学 3D point cloud densification method and device based on binocular RGB image
CN111797745A (en) * 2020-06-28 2020-10-20 北京百度网讯科技有限公司 Training and predicting method, device, equipment and medium of object detection model
CN112101222A (en) * 2020-09-16 2020-12-18 中国海洋大学 Sea surface three-dimensional target detection method based on unmanned ship multi-mode sensor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIAMING SUN et al.: "Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation", arXiv, pages 1-10 *
傅骏伟: "Research on Fault-Tolerant Perception Methods for Intelligent Vehicles Oriented to Multi-Source Information Fusion", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II), no. 1, pages 035-29 *
康国华; 张琪; 张晗; 徐伟证; 张文豪: "Research on Joint Calibration Method of Lidar and Camera Based on Point Cloud Center", Chinese Journal of Scientific Instrument, no. 12, pages 121-129 *
沈琦; 陈逸伦; 刘枢; 刘利刚: "Three-Dimensional Object Detection Algorithm Based on Two-Stage Network", Computer Science, no. 10, pages 153-158 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN115308771A (en) * 2022-10-12 2022-11-08 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device
CN115308771B (en) * 2022-10-12 2023-03-14 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device

Also Published As

Publication number Publication date
CN112991550B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
US10435074B2 (en) Parking support method and device
JP6227318B2 (en) Detection device for detecting obstacles and steep slopes, and vehicle equipped with the same
KR101900228B1 (en) Method and apparatus for parking assistance
CN104833370B (en) System and method for mapping, positioning and pose correction
US20150036887A1 (en) Method of determining a ground plane on the basis of a depth image
CN111612728B (en) 3D point cloud densification method and device based on binocular RGB image
CN112991550B (en) Obstacle position detection method and device based on pseudo point cloud and electronic equipment
CN104204726A (en) Moving-object position/attitude estimation apparatus and method for estimating position/attitude of moving object
CN110341621B (en) Obstacle detection method and device
CN112509126B (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN112927309A (en) Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN114119729A (en) Obstacle identification method and device
CN110426714B (en) Obstacle identification method
CN117831002A (en) Obstacle key point detection method and device, electronic equipment and storage medium
CN103714528B (en) Object segmentation device and method
JPWO2019123582A1 (en) Object information generator and object information generator
US10783350B2 (en) Method and device for controlling a driver assistance system by using a stereo camera system including a first and a second camera
CN113436309A (en) Scene reconstruction method, system and device and sweeping robot
CN116358528A (en) Map updating method, map updating device, self-mobile device and storage medium
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN110015288B (en) Method and device for detecting library position and electronic equipment
CN115432007A (en) Collision detection method and device for vehicle automatic driving system and electronic equipment
CN112883909B (en) Obstacle position detection method and device based on bounding box and electronic equipment
CN115841514A (en) Automatic parking method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant