CN117372995A - Vehicle drivable region detection method, related device and storage medium - Google Patents

Vehicle drivable region detection method, related device and storage medium

Info

Publication number
CN117372995A
CN117372995A (application CN202311381057.6A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
grid
frame
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311381057.6A
Other languages
Chinese (zh)
Inventor
陈承文
周珂
潘云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cheng Tech Co ltd
Original Assignee
Shenzhen Cheng Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cheng Tech Co ltd filed Critical Shenzhen Cheng Tech Co ltd
Priority to CN202311381057.6A
Publication of CN117372995A
Legal status: Pending

Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads (context of the image exterior to a vehicle, by using sensors mounted on the vehicle)
    • G06V 20/586: Recognition of parking space
    • G06V 20/64: Three-dimensional objects
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis, e.g. of connected components
    • G06V 10/762: Pattern recognition or machine learning using clustering
    • G06V 2201/07: Target detection (indexing scheme relating to image or video recognition or understanding)


Abstract

The application relates to the technical field of computer vision, and provides a vehicle drivable region detection method, a related device and a storage medium. The method comprises the following steps: acquiring target point cloud data, the target point cloud data comprising first point cloud data of a current frame and second point cloud data of historical frames within a preset time window; accumulating the first point cloud data and the second point cloud data to fill the first point cloud data, obtaining third point cloud data; clustering the filled third point cloud data to obtain at least one obstacle in the current frame; performing grid binarization on the extracted outline of each obstacle to obtain a candidate grid map; and removing the non-drivable area from the candidate grid map to obtain and output the effective drivable area of the target vehicle in the current frame. The method and the device can improve detection precision and effectiveness in automatic parking, reduce the deviation of the detection result from the real drivable area, and avoid misjudging an actually non-drivable area as drivable.

Description

Vehicle drivable region detection method, related device and storage medium
Technical Field
The embodiment of the application relates to the technical field of computer vision, in particular to a vehicle drivable area detection method, a related device and a storage medium.
Background
In an automatic parking scene, high-precision detection of the drivable area is required. The current mainstream detection approaches include ultrasonic-based detection, detection based on the fusion of vision and ultrasonic radar, and radar-based detection (e.g., millimeter wave radar or laser radar detection).
The detection mode based on the fusion of vision and ultrasonic radar adopts a deep learning algorithm: obstacles in the training data usually have to be labelled manually, the characteristics of drivable and non-drivable areas are trained on the manually labelled data, and the road surface and obstacles are then separated according to characteristic information such as color, texture and edges. Although this mode offers high accuracy and a long detection distance, a large number of manually labelled samples must be acquired in advance, the chip faces high computing-power demands during inference so that real-time performance is difficult to guarantee, and in addition the ambient-illumination requirements are strict, so its scene robustness is limited.
Disclosure of Invention
The embodiment of the application provides a vehicle drivable region detection method, a related device and a storage medium, which can improve the detection precision and the detection effectiveness of drivable regions and non-drivable regions in an automatic parking scene, effectively reduce the deviation of detection results from a real drivable region, and avoid the situation that the actual non-drivable region is misjudged as the drivable region.
In a first aspect, embodiments of the present application provide a vehicle drivable region detection method from an in-vehicle device perspective, the method including:
acquiring target point cloud data, wherein the target point cloud data comprises first point cloud data of a current frame and second point cloud data of a history frame in a preset time window, the current frame is an image of a travelable area which is detected by a radar installed on a target vehicle, and the target point cloud data comprises a point cloud set detected by the radar;
accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data;
clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud;
extracting the outline of each obstacle;
performing grid binarization on the outline of each obstacle to obtain a candidate grid map, wherein the candidate grid map comprises a drivable area and an undrivable area corresponding to the target vehicle in the current frame;
and removing the non-drivable area from the candidate grid map, obtaining and outputting an effective drivable area of the target vehicle in the current frame.
In some embodiments, the history frame includes a first frame and a second frame within the preset time window, where a time stamp of the first frame is earlier than that of the second frame, and the first frame is a frame with an earliest time stamp in the point cloud accumulation of the second frame;
the accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data, including:
acquiring the real-time speed and frame period of a target vehicle corresponding to the current frame;
acquiring a point cloud accumulation result of the second frame from the second point cloud data, and acquiring point cloud data of a first frame with an earliest timestamp in the point cloud accumulation result of the second frame;
and performing motion compensation on the current frame according to the real-time vehicle speed and the frame period, and filling the first point cloud data of the current frame to obtain the third point cloud data.
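The accumulation step above can be sketched in a few lines of Python. This is an illustrative, non-limiting sketch: it assumes straight-line ego motion along the +y axis at a constant speed over one frame period (a simplification of the patent's motion compensation), points are (x, y) tuples, and the function name is invented for illustration.

```python
def accumulate_with_motion_compensation(current_pts, history_pts, speed, frame_period):
    """Shift historical-frame points into the current ego frame and
    merge them with the current-frame points. Assumes straight-line
    ego motion along +y at constant `speed` (m/s) over one
    `frame_period` (s)."""
    displacement = speed * frame_period              # metres the ego vehicle advanced
    # historical points slide backwards in the ego frame as the vehicle moves forward
    compensated = [(x, y - displacement) for (x, y) in history_pts]
    return list(current_pts) + compensated
```

For example, at 2 m/s with a 0.5 s frame period, a historical point at y = 5.0 m is compensated to y = 4.0 m before being merged with the current frame.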
In some embodiments, the performing grid binarization according to the outline of each obstacle to obtain a candidate grid map includes:
calculating a line equation of the edge point clouds corresponding to the outline of each obstacle, wherein each connecting line L_i between the edge point clouds is represented as a combination of slope k_i, intercept b_i, minimum longitudinal coordinate y_imin and maximum longitudinal coordinate y_imax;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity, calculating the transverse coordinate x_ni of the intersection of the line y = y_n with each connecting line L_i, and assigning to each grid-center longitudinal coordinate y_n its set of intersection transverse coordinates x_nj, where j = 1, 2, …;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity and scanning the grids of the obstacle from left to right: if the center point of the leftmost grid is determined to lie outside the outline area of the obstacle, it is assigned 1, and if it is determined to lie inside the outline area, it is assigned 0; the scan then continues to the right, repeating the assignment and performing one inversion of the value at each intersection transverse coordinate x_nj encountered, until all grids are assigned, obtaining the drivable area and the non-drivable area.
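The scanline assignment just described can be sketched as an even-odd fill. In this sketch the function name, the unit cell size, and the use of polygon vertices (rather than precomputed slope/intercept line equations) are illustrative assumptions; the value convention follows the text: 1 for a grid center outside the obstacle outline, 0 inside, inverted at each intersection crossing.

```python
def grid_binarize(contour, grid_w, grid_h, cell=1.0):
    """Assign each grid cell 1 if its center lies outside the obstacle
    contour and 0 if inside, toggling at every scanline intersection,
    as in the left-to-right scan described above."""
    grid = [[1] * grid_w for _ in range(grid_h)]
    n = len(contour)
    for row in range(grid_h):
        y = (row + 0.5) * cell                      # grid-center longitudinal coordinate y_n
        xs = []
        for i in range(n):                          # intersections x_ni with each edge line L_i
            (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
            if (y1 <= y < y2) or (y2 <= y < y1):    # edge spans this scanline
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        inside, k = False, 0
        for col in range(grid_w):
            x = (col + 0.5) * cell                  # grid-center transverse coordinate
            while k < len(xs) and xs[k] <= x:       # invert once per crossed intersection x_nj
                inside = not inside
                k += 1
            grid[row][col] = 0 if inside else 1
    return grid
```

For a square obstacle with corners (1, 1) and (4, 4) on a 5x5 grid of 1 m cells, the nine interior cells receive 0 and the sixteen remaining cells receive 1.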
In some embodiments, the removing the non-drivable area from the candidate grid map to obtain the effective drivable area of the target vehicle in the current frame includes:
scanning each row of grids G_yi in the candidate grid map in turn, and finding all connected first continuous grids;
scanning each column of grids G_xi in the candidate grid map in turn, and finding the connected second continuous grids;
and obtaining the effective travelable region according to the first continuous grid and the second continuous grid.
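One common way to realize this connectivity pruning is a flood fill from the ego cell; this interpretation (breadth-first search with 4-connectivity, and the function name) is an assumption for illustration only, since the patent text itself specifies the row-wise and column-wise continuous-grid scans.

```python
from collections import deque

def reachable_free_cells(grid, start):
    """Breadth-first flood fill over free cells (value 1 = drivable in
    the convention above), keeping only cells 4-connected to the ego
    cell `start`; free cells trapped in slits between obstacles are
    dropped because no chain of adjacent free cells reaches them."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

A free column separated from the ego cell by a wall of occupied cells is correctly excluded from the effective drivable area.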
In some embodiments, after clustering the filled third point cloud data, before extracting the outline of each obstacle, the method further comprises:
selecting, from the third point cloud, the point clouds whose height lies between the chassis of the target vehicle and the total height of the target vehicle, and deleting invalid point clouds, to obtain the effective point clouds forming actual obstacles.
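A minimal sketch of this height filter follows; the parameter names are illustrative, heights are in metres, and points are (x, y, z) tuples.

```python
def filter_by_height(points, chassis_height, vehicle_height):
    """Keep only points whose height lies between the chassis clearance
    and the total vehicle height, i.e. points an actual obstacle would
    present to the vehicle body; everything else is treated as invalid."""
    return [p for p in points if chassis_height <= p[2] <= vehicle_height]
```

With a 0.15 m chassis clearance and a 1.6 m vehicle height, ground clutter at 0.05 m and an overhead structure at 3.0 m are both discarded.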
In some embodiments, a point cloud coordinate of the third point cloud or the effective point cloud projected on the ground is denoted P_i(x_i, y_i); after clustering the filled third point cloud data and before extracting the contour of each obstacle, the method further comprises:
setting a safety distance δ;
performing plane expansion on the third point cloud or the effective point cloud based on the safety distance, so that each point cloud coordinate P_i(x_i, y_i) projected on the ground is replaced by a target point cloud comprising at least one of:
P_i1(x_i + δ, y_i + δ), P_i2(x_i − δ, y_i + δ), P_i3(x_i − δ, y_i − δ) and P_i4(x_i + δ, y_i − δ).
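The four-corner expansion enumerated above maps directly to code. This sketch returns all four target points for one ground-projected point (the text allows any subset of them); the function name is illustrative.

```python
def expand_point(point, delta):
    """Replace a ground-projected point P_i(x_i, y_i) with its four
    corner offsets P_i1..P_i4 at safety distance delta, in the order
    enumerated above."""
    x, y = point
    return [(x + delta, y + delta),   # P_i1
            (x - delta, y + delta),   # P_i2
            (x - delta, y - delta),   # P_i3
            (x + delta, y - delta)]   # P_i4
```

Applying the expansion to every projected point before convex hull extraction grows each obstacle footprint by the safety margin, offsetting measurement and rasterization errors at the drivable-area boundary.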
In some embodiments, before acquiring the point cloud data of the target vehicle, the method further comprises:
Determining the detection distance and grid granularity of the radar;
and establishing a grid map of the target vehicle according to the detection distance and the grid granularity, wherein the detection range of the radar covers the grid map.
In a second aspect, an embodiment of the present application provides an in-vehicle apparatus having a function of implementing the vehicle drivable region detection method provided corresponding to the first aspect described above. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above, and the modules may be software and/or hardware.
In some embodiments, the in-vehicle apparatus includes:
the input/output module is used for acquiring target point cloud data, wherein the target point cloud data comprise first point cloud data of a current frame and second point cloud data of a history frame in a preset time window, the current frame is an image of a travelable area which is detected by a radar installed on a target vehicle, and the target point cloud data comprise a point cloud set detected by the radar;
the processing module is used for accumulating the first point cloud data and the second point cloud data to fill the first point cloud data so as to obtain third point cloud data;
Clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud;
extracting the outline of each obstacle;
performing grid binarization on the outline of each obstacle to obtain a candidate grid map, wherein the candidate grid map comprises a drivable area and an undrivable area corresponding to the target vehicle in the current frame;
and removing the non-drivable area from the candidate grid map to obtain an effective drivable area of the target vehicle in the current frame and outputting the effective drivable area through the input/output module.
In some embodiments, the history frame includes a first frame and a second frame within the preset time window, where a time stamp of the first frame is earlier than that of the second frame, and the first frame is a frame with an earliest time stamp in the point cloud accumulation of the second frame;
the processing module is specifically configured to:
acquiring the real-time speed and frame period of a target vehicle corresponding to the current frame;
acquiring a point cloud accumulation result of the second frame from the second point cloud data, and acquiring point cloud data of a first frame with an earliest timestamp in the point cloud accumulation result of the second frame;
And performing motion compensation on the current frame according to the real-time vehicle speed and the frame period, and filling the first point cloud data of the current frame to obtain the third point cloud data.
In some embodiments, the processing module is specifically configured to:
calculating a line equation of the edge point clouds corresponding to the outline of each obstacle, wherein each connecting line L_i between the edge point clouds is represented as a combination of slope k_i, intercept b_i, minimum longitudinal coordinate y_imin and maximum longitudinal coordinate y_imax;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity, calculating the transverse coordinate x_ni of the intersection of the line y = y_n with each connecting line L_i, and assigning to each grid-center longitudinal coordinate y_n its set of intersection transverse coordinates x_nj, where j = 1, 2, …;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity and scanning the grids of the obstacle from left to right: if the center point of the leftmost grid is determined to lie outside the outline area of the obstacle, it is assigned 1, and if it is determined to lie inside the outline area, it is assigned 0; the scan then continues to the right, repeating the assignment and performing one inversion of the value at each intersection transverse coordinate x_nj encountered, until all grids are assigned, obtaining the drivable area and the non-drivable area.
In some embodiments, the processing module is specifically configured to:
scanning each row of grids G_yi in the candidate grid map in turn, and finding all connected first continuous grids;
scanning each column of grids G_xi in the candidate grid map in turn, and finding the connected second continuous grids;
and obtaining the effective travelable region according to the first continuous grid and the second continuous grid.
In some embodiments, after clustering the filled third point cloud data, the processing module is further configured to, before extracting the outline of each obstacle:
selecting, from the third point cloud, the point clouds whose height lies between the chassis of the target vehicle and the total height of the target vehicle, and deleting invalid point clouds, to obtain the effective point clouds forming actual obstacles.
In some embodiments, a point cloud coordinate of the third point cloud or the effective point cloud projected on the ground is denoted P_i(x_i, y_i); after clustering the filled third point cloud data and before extracting the contour of each obstacle, the processing module is further configured to:
set a safety distance δ;
perform plane expansion on the third point cloud or the effective point cloud based on the safety distance, so that each point cloud coordinate P_i(x_i, y_i) projected on the ground is replaced by a target point cloud comprising at least one of:
P_i1(x_i + δ, y_i + δ), P_i2(x_i − δ, y_i + δ), P_i3(x_i − δ, y_i − δ) and P_i4(x_i + δ, y_i − δ).
In some embodiments, the processing module is further configured to, prior to acquiring the point cloud data of the target vehicle:
determining the detection distance and grid granularity of the radar;
and establishing a grid map of the target vehicle according to the detection distance and the grid granularity, wherein the detection range of the radar covers the grid map.
In a third aspect, an embodiment of the present application provides an in-vehicle apparatus, including: at least one processor and a memory; wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program stored in the memory to perform the steps of the first aspect, any implementation manner of the first aspect, or the second aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having a function of implementing the vehicle drivable region detection method corresponding to the first aspect described above. The functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above, which may be software and/or hardware. In particular, the computer readable storage medium stores a plurality of instructions adapted to be loaded by a processor to perform the steps of the first aspect, any implementation of the first aspect in the embodiments of the present application.
Compared with the prior art, in the scheme provided by the embodiment of the application, target point cloud data is first acquired, the target point cloud data comprising first point cloud data of a current frame and second point cloud data of historical frames within a preset time window; the first point cloud data and the second point cloud data are accumulated to fill the first point cloud data, obtaining third point cloud data; the filled third point cloud data is clustered to obtain at least one obstacle in the current frame; grid binarization is performed on the extracted outline of each obstacle to obtain a candidate grid map; and the non-drivable area is removed from the candidate grid map to obtain and output the effective drivable area of the target vehicle in the current frame. Because edge detection adopts target-level detection on the accumulated point cloud, the accuracy is higher: detection precision and effectiveness for drivable and non-drivable areas are improved in the automatic parking scene, the deviation of the detection result from the real drivable area is effectively reduced, the situation in which an actually non-drivable area is misjudged as drivable is avoided, and adverse effects of low-precision detection results on subsequent parking space detection and parking path planning are effectively avoided.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting a vehicle-drivable region in an embodiment of the present application;
fig. 2a is a schematic diagram of 5 4D millimeter wave radar installation and blind areas (gray areas) in the embodiment of the present application;
fig. 2b is a schematic flow chart of point cloud extension in the embodiment of the present application;
FIG. 3 is a flowchart of an algorithm of a method for detecting a vehicle drivable region according to an embodiment of the present application;
fig. 4a is a live-action view of a current frame (top foreground and bottom background) in an embodiment of the present application;
FIG. 4b is a projection view of a current frame cumulative point cloud (with invalid point clouds removed) in an embodiment of the present application;
FIG. 4c is a grid map of a travelable region obtained by a conventional algorithm in an embodiment of the present application;
FIG. 4d is a grid map of a travelable region obtained based on the algorithm flow shown in FIG. 3 in an embodiment of the present application;
fig. 5 is a schematic structural view of an in-vehicle apparatus that implements a vehicle drivable region detection method in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a physical device for implementing a vehicle drivable region detection method in an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and the claims of the embodiments of the present application and in the foregoing drawings are used for distinguishing similar objects (e.g., the first point cloud data and the second point cloud data in the embodiments of the present application represent different point cloud data with the same attributes), and are not necessarily used for describing a specific order or sequence. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or modules is not necessarily limited to those explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article or apparatus. The partitioning of modules in the embodiments of the application is only one logical partitioning; in an actual implementation, a plurality of modules may be combined or integrated into another system, some features may be omitted or not implemented, and the coupling, direct coupling or communication connection between modules shown or discussed may be through some interfaces, while indirect coupling or communication connection between modules may be electrical or of another like form; none of this limits the embodiments of the application. The modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed across a plurality of circuit modules; some or all of the modules may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
The embodiment of the application provides a vehicle drivable region detection method, a related device and a storage medium, which can be used for a server or terminal equipment, and particularly can be used for effectively detecting a drivable region and an undrivable region with high precision under an automatic parking scene, so that the deviation of a detection result from a real drivable region is effectively reduced, the situation that the actual undrivable region is misjudged as the drivable region is avoided, and adverse effects on post parking space detection and parking path planning caused by lower detection result precision can be effectively avoided.
The scheme provided by the embodiment of the application relates to technologies such as Artificial Intelligence (AI), Natural Language Processing (NLP) and Machine Learning (ML).
It should be specifically noted that, the server (for example, a business server and a search engine) related to the embodiment of the present application may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and an artificial intelligence platform. The vehicle-mounted device according to the embodiment of the present application may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a personal digital assistant, and the like. The vehicle-mounted device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
As noted above, the related-art detection mode based on the fusion of vision and ultrasonic radar adopts a deep learning algorithm: obstacles in the training data usually have to be labelled manually, the characteristics of drivable and non-drivable areas are trained on the manually labelled data, and the road surface and obstacles are then separated according to characteristic information such as color, texture and edges. Although this mode offers high accuracy and a long detection distance, a large number of manually labelled samples must be acquired in advance, the chip faces high computing-power demands during inference so that real-time performance is difficult to guarantee, and in addition the ambient-illumination requirements are strict, so its scene robustness is limited.
In this regard, the embodiment of the application performs algorithm improvement on the basis of the traditional 4D millimeter wave radar point cloud detection scheme, improves the accuracy of detection of the drivable area, and mainly adopts the following technical scheme:
1. Edge detection adopts target-level detection on the accumulated point cloud in place of point-cloud-level detection, giving higher accuracy.
2. The point cloud forming a target is expanded outwards, and convex hull detection is then performed to obtain an edge that serves as the boundary of the drivable area. The point cloud expansion offsets measurement errors and the drivable-area boundary estimation errors caused by rasterization, and in particular prevents a non-drivable boundary region from being detected as drivable. Convex hull detection replaces the traditional rectangular-frame detection, so the outline description of the boundary is more detailed and accurate.
3. A reachability check of the drivable area is added to the detection. It eliminates the apparent drivability of areas that the actual vehicle cannot reach, such as slits and small gaps between obstacles, and prevents adverse effects on subsequent parking space detection and parking path planning.
The following is an exemplary description of the technical solution of the embodiment of the present application with reference to fig. 1 to 6.
In some embodiments, before the detection of the drivable area, the vehicle-mounted device needs to construct a grid map in advance, specifically as follows: and determining the detection distance and the grid granularity of the radar, and establishing a grid map of the target vehicle according to the detection distance and the grid granularity, wherein the detection range of the radar covers the grid map. For example, the grid map is centered on the vehicle, and the size of the map depends on the detection distance of the 4D millimeter wave radar and the actual application requirement. For a parking scene, a typical map size is 25×50m. The granularity of the grid depends on the accuracy requirement of a later decision algorithm, and is usually 5-20 cm.
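The grid map sizing above can be worked through numerically; the helper name is illustrative, and the 0.1 m cell used in the example is one choice within the stated 5-20 cm granularity range.

```python
def grid_dimensions(map_width_m, map_height_m, granularity_m):
    """Number of grid cells (columns, rows) needed so the ego-centred
    map spans the given metric extent at the chosen cell granularity.
    round() absorbs floating-point error in the metric division."""
    cols = int(round(map_width_m / granularity_m))
    rows = int(round(map_height_m / granularity_m))
    return cols, rows
```

The typical 25 m x 50 m parking map at 0.1 m granularity is thus a 250 x 500 grid; coarsening to 0.2 m quarters the cell count at the cost of decision-algorithm accuracy.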
As shown in fig. 1, fig. 1 is a schematic flow chart of a method for detecting a vehicle drivable area in an embodiment of the present application, which may be used for an in-vehicle device (may also be referred to as an in-vehicle terminal, and is not limited thereto). The method comprises the following steps:
101. And acquiring target point cloud data.
The target point cloud data comprise first point cloud data of a current frame and second point cloud data of a historical frame in a preset time window, the current frame is an image of a travelable area detected by a radar installed on a target vehicle, and the target point cloud data comprise a point cloud set detected by the radar.
Specifically, the global point cloud coordinates of the current frame are input. The global point cloud is the point cloud set P_i detected by all of the 4D millimeter wave radars installed on the vehicle body; the coordinates are three-dimensional Euclidean coordinates (x_i, y_i, z_i), with the ground projection point of the center of the rear axle of the target vehicle as the coordinate origin, the transverse direction as the x-axis, the longitudinal direction as the y-axis, and the height direction as the z-axis.
It should be noted that, in the embodiment of the present application, when computing on the 4D millimeter wave point cloud, the radar detection range needs to cover the grid map domain, although a narrow blind area is allowed outside the radar FOV in areas that are not critical to decision making. A typical installation scheme uses 5 radars: 1 forward radar plus front-left, front-right, rear-left, and rear-right radars; the schematic diagram of radar installation and blind areas is shown in fig. 2a.
102. And accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data.
In some embodiments, the history frame includes a first frame and a second frame within the preset time window, where a time stamp of the first frame is earlier than that of the second frame, and the first frame is a frame with an earliest time stamp in the point cloud accumulation of the second frame;
the accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data, including:
acquiring the real-time speed and frame period of a target vehicle corresponding to the current frame;
acquiring a point cloud accumulation result of the second frame from the second point cloud data, and acquiring point cloud data of a first frame with an earliest timestamp in the point cloud accumulation result of the second frame;
and performing motion compensation on the current frame according to the real-time vehicle speed and the frame period, and filling the first point cloud data of the current frame to obtain the third point cloud data.
For example, a fixed-length sliding window may be used to accumulate the point cloud information of N frames in total, namely the current frame and the most recent historical frames, where N is typically 3-10. Accumulation proceeds as follows: remove the earliest frame from the point cloud accumulation result of the previous frame, apply motion compensation according to the vehicle speed and frame period, and finally add the point cloud information of the current frame.
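The sliding-window accumulation above can be sketched as follows (the function name is hypothetical, and straight-line longitudinal ego motion is assumed for the motion-compensation step, whereas a real system would use the full ego pose):

```python
from collections import deque

def accumulate(window, new_frame, vy, dt, n=5):
    """Fixed-length sliding-window accumulation with ego-motion compensation.

    window    -- deque of past frames, each a list of (x, y) points
    new_frame -- point list of the current frame
    vy, dt    -- ego speed along y (m/s) and frame period (s); straight-line
                 motion is assumed here as a simplification
    """
    shift = vy * dt
    # Shift every stored point so all frames share the current vehicle
    # coordinate system; the deque's maxlen drops the earliest frame.
    compensated = deque(
        ([(x, y - shift) for (x, y) in frame] for frame in window), maxlen=n)
    compensated.append(list(new_frame))
    return compensated

win = deque(maxlen=5)
win = accumulate(win, [(0.0, 10.0)], vy=2.0, dt=0.1)
win = accumulate(win, [(0.0, 10.0)], vy=2.0, dt=0.1)
print(win[0][0])  # the earlier frame's point, shifted toward the vehicle
```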
103. Clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud.
In some embodiments, clustering can be performed with the DBSCAN algorithm. The clustering distance between points is the three-dimensional Euclidean distance, with a threshold of 0.5-1.0 m depending on the density of the current radar's output point cloud; the minimum cluster size is likewise tuned to the actual effect and is typically 5-10. Point cloud clustering groups the points into obstacle targets (clusters), each of which contains a number of points. With proper parameter tuning, each real target is clustered as far as possible while discrete points (false targets) are left unclustered.
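A compact, pure-Python DBSCAN sketch matching the parameters named above (eps for the Euclidean distance threshold, min_pts for the minimum cluster size); it is illustrative only, and a production system would typically use an optimized library implementation:

```python
import math

def dbscan(points, eps=0.7, min_pts=5):
    """Minimal DBSCAN over 3-D points; returns one label per point (-1 = noise)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # discrete point cloud: false target
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(seeds)
        while queue:                  # grow the cluster from core points
            j = queue.pop()
            if labels[j] is None:
                nbr = neighbours(j)
                if len(nbr) >= min_pts:
                    queue.extend(nbr)
                labels[j] = cluster
            elif labels[j] == -1:     # border point previously marked noise
                labels[j] = cluster
    return labels

blob = [(x * 0.1, y * 0.1, 0.0) for x in range(3) for y in range(3)]
labels = dbscan(blob + [(50.0, 50.0, 0.0)])
print(labels)  # nine clustered points and one noise point
```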
104. The outline of each obstacle is extracted.
In some embodiments, the contour of each obstacle may be extracted with a conventional convex hull detection algorithm, computed using the Graham scan. After contour extraction, only the edge points of each target are retained and the remaining points are removed. The lines connecting the edge points constitute the target contour. Edge detection thus uses target-level detection on the accumulated point cloud instead of point-cloud-level detection, giving higher accuracy.
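For illustration, a compact convex hull routine; Andrew's monotone chain is used here as an equivalent alternative to the Graham scan named in the text (the function name is hypothetical):

```python
def convex_hull(points):
    """Convex hull via Andrew's monotone chain; returns hull vertices in
    counter-clockwise order with interior points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # edge points only

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
print(hull)  # the interior point (1, 1) is removed
```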
105. And carrying out grid binarization on the outline of each obstacle to obtain a candidate grid map.
Wherein the candidate grid map includes a drivable region and a non-drivable region corresponding to the target vehicle in the current frame.
In some embodiments, the performing grid binarization according to the outline of each obstacle to obtain a candidate grid map includes:
calculating a connection equation for the edge point cloud of each obstacle contour, where each connecting line L_i between edge points is represented as the combination of slope k_i, intercept b_i, minimum longitudinal coordinate y_imin, and maximum longitudinal coordinate y_imax;

traversing each grid center longitudinal coordinate y_n according to the grid granularity, and computing the transverse coordinate x_ni of the intersection of the horizontal line at y_n with each connecting line L_i, so that each grid center longitudinal coordinate y_n is assigned a set of intersection transverse coordinates x_nj, where j = 1, 2, …;

traversing each grid center longitudinal coordinate y_n according to the grid granularity, scanning from left to right: the leftmost grid is assigned 1 if its center lies outside the obstacle contour and 0 if it lies inside; the scan then proceeds rightward, repeating the assignment and inverting the value once at each intersection transverse coordinate x_nj encountered, until all grids have been assigned, yielding the drivable area and the non-drivable area.
In this way, the transverse intersection scanning scheme effectively divides grids into exterior grids (drivable, set to 1) and interior grids (non-drivable, set to 0) based on the overall target contour information, while significantly simplifying the algorithm, so detection efficiency is improved without sacrificing detection accuracy.
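The transverse intersection scan described above can be sketched as follows (hypothetical names; for simplicity the grid origin is placed at the map corner rather than at the vehicle, and edge intersections are computed directly from the polygon segments instead of stored slope/intercept pairs):

```python
def rasterise(contour, rows, cols, cell):
    """Scan-line binarization: cells whose centre lies inside the closed
    contour polygon become 0 (non-drivable); all other cells stay 1."""
    grid = [[1] * cols for _ in range(rows)]
    for r in range(rows):
        yc = (r + 0.5) * cell                     # row centre line
        xs = []
        for (x1, y1), (x2, y2) in zip(contour, contour[1:] + contour[:1]):
            if min(y1, y2) <= yc < max(y1, y2):   # segment crosses this row
                xs.append(x1 + (yc - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        inside, k = False, 0
        for c in range(cols):
            xc = (c + 0.5) * cell
            while k < len(xs) and xs[k] <= xc:    # invert at each crossing
                inside = not inside
                k += 1
            if inside:
                grid[r][c] = 0
    return grid

g = rasterise([(1, 1), (4, 1), (4, 4), (1, 4)], rows=5, cols=5, cell=1.0)
print(g[2])  # middle row: drivable cells flank the obstacle interior
```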
107. And removing the non-drivable area from the candidate grid map, obtaining and outputting an effective drivable area of the target vehicle in the current frame.
The non-drivable area here refers to grid regions within the initial drivable area where driving is actually impossible. The most common unreachable areas are slits and small gaps between obstacles that the target vehicle cannot possibly enter.
Misjudging an unreachable area as drivable would affect the subsequent parking space detection and parking path planning, so it must be removed. An ideal unreachable-region detection algorithm has high complexity; in some implementations, the embodiment of the application therefore proposes an algorithm that scans the widths of horizontally and vertically connected regions. Specifically, removing the non-drivable area from the candidate grid map to obtain an effective drivable area of the target vehicle in the current frame includes:
(1) Laterally scan each row of grids G_yi in the candidate grid map and find all connected first continuous grids.

For example, each row of grids G_yi is scanned in turn to find all connected segments, i.e. continuous runs of grids set to 1 (the first continuous grids). If a segment's length is not less than w, it is retained; otherwise all grids of the segment are set to 0 and treated as non-drivable area.
(2) Longitudinally scan each column of grids G_xi in the candidate grid map and find the connected second continuous grids.

Each column of grids G_xi is scanned in turn to find all connected segments, i.e. continuous runs of grids set to 1. If a segment's length is not less than w, it is retained; otherwise all grids of the segment are set to 0 and treated as non-drivable area.
(3) And obtaining the effective travelable region according to the first continuous grid and the second continuous grid.
Through steps (1) to (3), the drivable-area grid map of the current frame is computed and output.
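Steps (1) to (3) can be sketched as a run-length filter applied first along rows and then along columns (hypothetical names; w is the minimum run width in grid cells):

```python
def prune_narrow(grid, w):
    """Remove runs of drivable (1) cells shorter than w cells, first along
    rows, then along columns: slots a real vehicle cannot enter become 0."""
    def prune_rows(g):
        for row in g:
            c = 0
            while c < len(row):
                if row[c] == 1:
                    start = c
                    while c < len(row) and row[c] == 1:
                        c += 1
                    if c - start < w:             # too narrow to drive into
                        for k in range(start, c):
                            row[k] = 0
                else:
                    c += 1
        return g

    def transpose(g):
        return [list(col) for col in zip(*g)]

    # rows first (step 1), then columns via transpose (step 2)
    return transpose(prune_rows(transpose(prune_rows(grid))))

g = [[1, 0, 1, 1, 1],
     [1, 0, 1, 1, 1],
     [1, 0, 1, 1, 1]]
pruned = prune_narrow(g, w=2)
print(pruned)  # the width-1 strip on the left is removed
```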
Optionally, in some embodiments of the present application, after clustering the filled third point cloud data and before extracting the contour of each obstacle, the method may further add reachability detection for the drivable area. Specifically, the method further includes:
selecting, from the third point cloud, the points whose height lies between the chassis of the target vehicle and the total height of the target vehicle, and deleting the invalid points, to obtain the effective point cloud forming actual obstacles.

Specifically, this is effective point cloud extraction. The effective point cloud refers to those points, among all points forming a cluster, whose height z_i lies between the vehicle chassis and the total vehicle height; a certain margin may be kept. The effective point cloud forms the actual obstacles, and the remaining points are invalid and are removed. After extraction, the height z_i of the effective point cloud is no longer used; all subsequent steps operate on the ground projection coordinates (x_i, y_i).
In this way, adding reachability detection for the drivable area effectively removes drivable regions that the actual vehicle cannot reach, such as slits and small gaps between obstacles, preventing adverse effects on the subsequent parking space detection and parking path planning.
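The effective point cloud extraction described above amounts to a height band-pass filter; a minimal sketch follows (the chassis height, vehicle height, and margin values are illustrative assumptions, not values from the text):

```python
def effective_points(points, chassis_h=0.15, veh_h=1.6, margin=0.1):
    """Keep only points whose height could collide with the vehicle body,
    then drop the z coordinate, since later steps use ground projections."""
    lo, hi = chassis_h - margin, veh_h + margin
    return [(x, y) for (x, y, z) in points if lo <= z <= hi]

pts = [(1.0, 2.0, 0.02),   # ground clutter below the chassis
       (1.0, 2.0, 0.50),   # obstacle at body height
       (1.0, 2.0, 3.00)]   # overhead structure above the vehicle
print(effective_points(pts))  # only the body-height point survives
```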
Optionally, in some embodiments of the present application, to make the contour description of the boundary more detailed and accurate, a point cloud expansion method may also be used. Point cloud expansion refers to expanding the whole effective point cloud in the plane, so as to offset measurement errors and the drivable-area boundary estimation errors caused by rasterization, and in particular to prevent a non-drivable boundary region from being detected as drivable. The expansion also provides a configured safety distance as a buffer, further preventing scraping accidents.
Specifically, let the ground-projected coordinates of the third point cloud or the effective point cloud be P_i(x_i, y_i). After clustering the filled third point cloud data and before extracting the contour of each obstacle, the method further includes:
setting a safety distance delta;
performing plane expansion on the third point cloud or the effective point cloud based on the safety distance, so that each ground-projected point cloud coordinate P_i(x_i, y_i) is replaced by a target point cloud comprising at least one of:

P_i1(x_i+δ, y_i+δ), P_i2(x_i−δ, y_i+δ), P_i3(x_i−δ, y_i−δ) and P_i4(x_i+δ, y_i−δ).
In this way, by additionally applying the point cloud expansion method to the whole effective point cloud, the points forming each target are expanded before convex hull detection obtains the edge used as the demarcation boundary of the drivable area. The expansion offsets measurement errors and the boundary estimation errors caused by rasterization, and in particular prevents a non-drivable boundary region from being detected as drivable. Convex hull detection replaces conventional rectangular-frame detection and describes the boundary contour in more detail and with greater accuracy.
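The four-corner replacement P_i1..P_i4 described above can be sketched directly (hypothetical function name; the safety distance value is illustrative):

```python
def expand(points, delta=0.2):
    """Replace each ground-projected point with four corner points offset by
    the safety distance delta, matching the P_i1..P_i4 substitution."""
    out = []
    for (x, y) in points:
        out.extend([(x + delta, y + delta), (x - delta, y + delta),
                    (x - delta, y - delta), (x + delta, y - delta)])
    return out

corners = expand([(1.0, 1.0)], delta=0.5)
print(corners)  # four corners around the original point
```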
In the embodiment of the present application, target point cloud data is first acquired, comprising first point cloud data of the current frame and second point cloud data of historical frames within a preset time window. The first point cloud data and the second point cloud data are accumulated to fill the first point cloud data, obtaining third point cloud data; the filled third point cloud data is clustered to obtain at least one obstacle in the current frame; the extracted contour of each obstacle is grid-binarized to obtain a candidate grid map; and the non-drivable area is removed from the candidate grid map to obtain and output the effective drivable area of the target vehicle in the current frame. Because edge detection uses target-level detection on the accumulated point cloud, accuracy is higher: in an automatic parking scene, the detection precision and validity of the drivable and non-drivable areas are improved, and the deviation of the detection result from the real drivable area is effectively reduced. This avoids misjudging an actually non-drivable area as drivable, and prevents the adverse effects that a low-accuracy detection result would otherwise have on subsequent parking space detection and parking path planning.
For easy understanding, the embodiment of the present application uses a parking scene as an example to illustrate the method for detecting the vehicle drivable area in the embodiment of the present application, and reference may be made to fig. 3 to fig. 4d. Fig. 3 is a flowchart of an algorithm for implementing a method for detecting a vehicle drivable area in an embodiment of the present application, and fig. 4a is a live-action photo of a current frame (top is foreground, bottom is background); FIG. 4b is a projection view of the cumulative point cloud (with invalid point clouds removed) of the current frame; FIG. 4c is a grid map of a travelable region obtained by a conventional algorithm; fig. 4d is a grid map of a travelable region obtained by the algorithm of the present invention.
The obstacles are cars, posts and pedestrians as shown in fig. 4 a; the cumulative point cloud (null point cloud removed) projection is shown in fig. 4 b; the detection result of the drivable area of the traditional algorithm in the industry is shown in fig. 4 c; the result of the detection of the drivable region of the present invention is shown in fig. 4d (dark portion is the drivable region).
The test results show that conventional drivable-area detection algorithms in the industry depend heavily on point cloud density. When the density is insufficient, for example when an object is far away or weakly reflective, large sparse gaps appear inside the object and are easily misjudged as drivable area. Conventional algorithms can alleviate this by enlarging the radiation range of each point, but then struggle to balance misjudgment against over-shrinking the drivable area. Moreover, the results of conventional algorithms contain many fragmented, incoherent drivable regions, most of which are unreachable; feeding them to the subsequent parking space detection and parking path planning algorithms tends to have adverse effects. By comparison, the algorithm of the present invention adopts a target-level edge detection scheme that better outlines obstacle targets and delimits the drivable area. It also effectively eliminates unreachable regions, making the drivable area more coherent and realistic.
In conclusion, the algorithm provided by the embodiment of the application greatly improves the accuracy of drivable-area detection and has good application prospects. Compared with conventional algorithms, its complexity does not increase much; implemented on a domain controller chip common in the industry, it fully meets the real-time requirements of the system and achieves ideal performance.
Any technical features mentioned in the embodiments corresponding to any one of fig. 1 to fig. 6 are also applicable to the embodiments corresponding to fig. 3 to fig. 6 in the embodiments of the present application, and the following similar parts will not be repeated.
The vehicle-mounted device that performs the vehicle-running area detection method described above will be described below.
Referring to fig. 6, fig. 6 shows a schematic structural diagram of an in-vehicle apparatus 40 that can be applied in an automatic parking scene to accurately detect the drivable area. The in-vehicle device 40 in the embodiment of the present application can implement the steps of the vehicle drivable region detection method performed by the in-vehicle device 40 in the embodiment corresponding to any one of fig. 1 to 2 described above. The functions realized by the in-vehicle device 40 may be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions described above, and the modules may be software and/or hardware. The in-vehicle device 40 may include an input-output module 401 and a processing module 402. For the functional implementation of the input/output module 401 and the processing module 402, reference may be made to the operations performed in any of the embodiments corresponding to fig. 1 to 6, which are not repeated here.
In some embodiments, the input/output module 401 may be configured to obtain target point cloud data, where the target point cloud data includes first point cloud data of a current frame, and second point cloud data of a history frame within a preset time window, where the current frame is an image of a travelable area detected by a radar installed on a target vehicle, and the target point cloud data includes a set of point clouds detected by the radar;
the processing module 402 is configured to accumulate the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data;
clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud;
extracting the outline of each obstacle;
performing grid binarization on the outline of each obstacle to obtain a candidate grid map, wherein the candidate grid map comprises a drivable area and an undrivable area corresponding to the target vehicle in the current frame;
and removing the non-drivable area from the candidate grid map to obtain an effective drivable area of the target vehicle in the current frame and outputting the effective drivable area through the input/output module.
In some embodiments, the history frame includes a first frame and a second frame within the preset time window, where a time stamp of the first frame is earlier than that of the second frame, and the first frame is a frame with an earliest time stamp in the point cloud accumulation of the second frame;
the processing module 402 is specifically configured to:
acquiring the real-time speed and frame period of a target vehicle corresponding to the current frame;
acquiring a point cloud accumulation result of the second frame from the second point cloud data, and acquiring point cloud data of a first frame with an earliest timestamp in the point cloud accumulation result of the second frame;
and performing motion compensation on the current frame according to the real-time vehicle speed and the frame period, and filling the first point cloud data of the current frame to obtain the third point cloud data.
In some embodiments, the processing module 402 is specifically configured to:
calculating a connection equation for the edge point cloud of each obstacle contour, where each connecting line L_i between edge points is represented as the combination of slope k_i, intercept b_i, minimum longitudinal coordinate y_imin, and maximum longitudinal coordinate y_imax;

traversing each grid center longitudinal coordinate y_n according to the grid granularity, and computing the transverse coordinate x_ni of the intersection of the horizontal line at y_n with each connecting line L_i, so that each grid center longitudinal coordinate y_n is assigned a set of intersection transverse coordinates x_nj, where j = 1, 2, …;

traversing each grid center longitudinal coordinate y_n according to the grid granularity, scanning from left to right: the leftmost grid is assigned 1 if its center lies outside the obstacle contour and 0 if it lies inside; the scan then proceeds rightward, repeating the assignment and inverting the value once at each intersection transverse coordinate x_nj encountered, until all grids have been assigned, yielding the drivable area and the non-drivable area.
In some embodiments, the processing module 402 is specifically configured to:
scanning each row of grids G_yi in the candidate grid map in turn to find all connected first continuous grids;

scanning each column of grids G_xi in the candidate grid map in turn to find the connected second continuous grids;
and obtaining the effective travelable region according to the first continuous grid and the second continuous grid.
In some embodiments, after clustering the filled third point cloud data, the processing module is further configured to, before extracting the outline of each obstacle:
And selecting a point cloud with a height between the chassis of the target vehicle and the total height of the target vehicle from the third point cloud, and deleting an invalid point cloud to obtain an effective point cloud forming an actual obstacle.
In some embodiments, let the ground-projected coordinates of the third point cloud or the effective point cloud be P_i(x_i, y_i). After clustering the filled third point cloud data and before extracting the contour of each obstacle, the processing module 402 is further configured to:

set a safety distance δ;

perform plane expansion on the third point cloud or the effective point cloud based on the safety distance, so that each ground-projected point cloud coordinate P_i(x_i, y_i) is replaced by a target point cloud comprising at least one of:

P_i1(x_i+δ, y_i+δ), P_i2(x_i−δ, y_i+δ), P_i3(x_i−δ, y_i−δ) and P_i4(x_i+δ, y_i−δ).
In some embodiments, the processing module 402 is further configured to, prior to acquiring the point cloud data of the target vehicle:
determining the detection distance and grid granularity of the radar;
and establishing a grid map of the target vehicle according to the detection distance and the grid granularity, wherein the detection range of the radar covers the grid map.
The in-vehicle apparatus 40 that performs the vehicle drivable region detection method in the embodiment of the present application is described above from the standpoint of the modularized functional entity, and the in-vehicle apparatus 40 that performs the vehicle drivable region detection method in the embodiment of the present application is described below from the standpoint of hardware processing, respectively. It should be noted that, in the embodiment shown in fig. 5 of the present application, the physical device corresponding to the input/output module 401 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an output interface, etc., and the physical device corresponding to the processing module 402 may be a processor. The in-vehicle apparatus 40 shown in fig. 5 may have a structure as shown in fig. 6, and when the in-vehicle apparatus 40 shown in fig. 5 has a structure as shown in fig. 6, the processor and the transceiver in fig. 6 can realize the same or similar functions as the input-output module 401 and the processing module 402 provided for the apparatus embodiment of the in-vehicle apparatus 40 described above, and the memory in fig. 6 stores a computer program to be called when the processor performs the above-described vehicle drivable region detection method.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program is loaded and executed on a computer, the flows or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
The foregoing describes in detail the technical solution provided by the embodiments of the present application, in which specific examples are applied to illustrate the principles and implementations of the embodiments of the present application, where the foregoing description of the embodiments is only used to help understand the methods and core ideas of the embodiments of the present application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope according to the ideas of the embodiments of the present application, the present disclosure should not be construed as limiting the embodiments of the present application in view of the above.

Claims (10)

1. A vehicle drivable region detection method, characterized in that the method comprises:
acquiring target point cloud data, wherein the target point cloud data comprises first point cloud data of a current frame and second point cloud data of a history frame in a preset time window, the current frame is an image of a travelable area which is detected by a radar installed on a target vehicle, and the target point cloud data comprises a point cloud set detected by the radar;
accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data;
clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud;
Extracting the outline of each obstacle;
performing grid binarization on the outline of each obstacle to obtain a candidate grid map, wherein the candidate grid map comprises a drivable area and an undrivable area corresponding to the target vehicle in the current frame;
and removing the non-drivable area from the candidate grid map, obtaining and outputting an effective drivable area of the target vehicle in the current frame.
2. The method of claim 1, wherein the history frame comprises a first frame and a second frame within the predetermined time window, the first frame having a time stamp earlier than the second frame, the first frame being a frame having an earliest time stamp in the point cloud accumulation for the second frame;
the accumulating the first point cloud data and the second point cloud data to fill the first point cloud data to obtain third point cloud data, including:
acquiring the real-time speed and frame period of a target vehicle corresponding to the current frame;
acquiring a point cloud accumulation result of the second frame from the second point cloud data, and acquiring point cloud data of a first frame with an earliest timestamp from the point cloud accumulation result of the second frame;
and performing motion compensation on the current frame according to the real-time vehicle speed and the frame period, and filling the first point cloud data of the current frame to obtain the third point cloud data.
3. The method of claim 2, wherein the performing grid binarization on the outline of each obstacle to obtain a candidate grid map comprises:
calculating a connection equation for the edge point clouds corresponding to the outline of each obstacle, wherein each connecting line L_i between edge point clouds is represented as a combination of slope k_i, intercept b_i, minimum longitudinal coordinate y_imin and maximum longitudinal coordinate y_imax;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity, and calculating the transverse coordinate x_ni of the intersection of the line y = y_n with each connecting line L_i, so that each grid center point longitudinal coordinate y_n is assigned a plurality of intersection transverse coordinates x_nj, where j = 1, 2, ...;
traversing the longitudinal coordinate y_n of each grid center point according to the grid granularity, and assigning values to all grids under the obstacle in a left-to-right scanning order: the leftmost grid is assigned 1 if its center point is determined to lie outside the outline area of the obstacle, and 0 if its center point is determined to lie inside the outline area; the scan then continues to the right, repeating the assignment and performing one inversion operation at each intersection transverse coordinate x_nj encountered, until all grids are assigned, thereby obtaining the drivable area and the non-drivable area.
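The scan-line binarization of claim 3 is essentially the parity (even-odd) polygon fill rule: collect the intersections x_nj of each grid-row center line y = y_n with the contour edges, then scan left to right, inverting the cell value at every crossing. A minimal sketch, with vertical edges encoded by a slope of `None`, horizontal edges omitted (they produce no single intersection), and the leftmost cell assumed to start outside the contour:

```python
def scanline_binarize(edges, y_min, y_max, x_min, x_max, cell=1.0):
    """Parity-fill an obstacle contour into a binary grid.

    Each edge is (k, b, y_lo, y_hi): x = (y - b) / k on a sloped edge,
    or x = b when k is None (vertical edge). Outside the contour -> 1,
    inside -> 0, following the claim's stated convention."""
    rows = []
    y = y_min + cell / 2              # grid-row center ordinate y_n
    while y < y_max:
        # intersection transverse coordinates x_nj for this row
        xs = sorted(b if k is None else (y - b) / k
                    for k, b, y_lo, y_hi in edges if y_lo <= y <= y_hi)
        row, value, hits = [], 1, 0   # leftmost cell assumed outside -> 1
        x = x_min + cell / 2
        while x < x_max:
            while hits < len(xs) and xs[hits] <= x:
                value ^= 1            # one inversion per crossing x_nj
                hits += 1
            row.append(value)
            x += cell
        rows.append(row)
        y += cell
    return rows
```

For an axis-aligned square from (1, 1) to (4, 4) only the two vertical edges matter, and interior cells come out 0 while exterior cells stay 1.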
4. The method according to any one of claims 1-3, wherein the removing the non-drivable area from the candidate grid map to obtain the effective drivable area of the target vehicle in the current frame comprises:
scanning each grid row G_yi in the candidate grid map in turn, and finding all the connected first continuous grids;
scanning each grid column G_xi in the candidate grid map in turn, and finding the connected second continuous grids;
and obtaining the effective drivable area according to the first continuous grids and the second continuous grids.
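Claim 4 leaves open how the row-wise and column-wise continuous grids are combined. One plausible reading, sketched below with assumed names (`run_through`, `effective_region`) and an assumed anchoring of the runs at the ego cell, intersects the horizontal and vertical drivable runs (value 1 taken as drivable):

```python
def run_through(cells, seed):
    """Maximal run of 1-valued cells containing index `seed`
    (empty if the seed cell itself is not drivable)."""
    if cells[seed] != 1:
        return set()
    lo = seed
    while lo > 0 and cells[lo - 1] == 1:
        lo -= 1
    hi = seed
    while hi < len(cells) - 1 and cells[hi + 1] == 1:
        hi += 1
    return set(range(lo, hi + 1))

def effective_region(grid, ego_row, ego_col):
    """Cells in both a row-wise and a column-wise continuous
    drivable run anchored at the ego cell."""
    rows, cols = len(grid), len(grid[0])
    row_ok = set()
    for r in range(rows):
        for c in run_through(grid[r], ego_col):
            row_ok.add((r, c))
    col_ok = set()
    for c in range(cols):
        column = [grid[r][c] for r in range(rows)]
        for r in run_through(column, ego_row):
            col_ok.add((r, c))
    return row_ok & col_ok
```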
5. The method of claim 4, wherein after clustering the filled third point cloud data and before extracting the outline of each obstacle, the method further comprises:
selecting, from the third point cloud, the point clouds whose height lies between the chassis of the target vehicle and the total height of the target vehicle, and deleting the invalid point clouds, to obtain the effective point clouds that form actual obstacles.
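The height gating of claim 5 reduces to a band-pass filter on the z coordinate: points below the chassis clearance can be driven over and points above the total vehicle height can be driven under, so only the band in between forms actual obstacles. A sketch:

```python
def height_filter(points, chassis_height, vehicle_height):
    """Keep only points whose height lies between the chassis
    clearance and the total vehicle height (claim 5); everything
    outside that band is treated as an invalid point cloud."""
    return [(x, y, z) for (x, y, z) in points
            if chassis_height <= z <= vehicle_height]
```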
6. The method of claim 4, wherein a point cloud coordinate of the third point cloud or the effective point cloud projected on the ground is denoted P_i(x_i, y_i); after clustering the filled third point cloud data and before extracting the outline of each obstacle, the method further comprises:
setting a safety distance δ;
performing plane expansion on the third point cloud or the effective point cloud based on the safety distance δ, so that each point cloud coordinate P_i(x_i, y_i) projected on the ground is replaced by a target point cloud comprising at least one of:
P_i1(x_i+δ, y_i+δ), P_i2(x_i-δ, y_i+δ), P_i3(x_i-δ, y_i-δ) and P_i4(x_i+δ, y_i-δ).
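The plane expansion of claim 6 amounts to replacing each ground-projected point with its corner offsets at the safety distance δ. A direct sketch that emits all four target points P_i1 through P_i4 (the claim itself requires only at least one of them):

```python
def dilate(points, delta):
    """Replace each ground-projected point P_i(x_i, y_i) with its
    four corner offsets at safety distance delta, in the order
    P_i1, P_i2, P_i3, P_i4 listed in claim 6."""
    out = []
    for x, y in points:
        out.extend([(x + delta, y + delta), (x - delta, y + delta),
                    (x - delta, y - delta), (x + delta, y - delta)])
    return out
```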
7. The method of claim 1, wherein prior to acquiring the point cloud data of the target vehicle, the method further comprises:
determining the detection distance and grid granularity of the radar;
and establishing a grid map of the target vehicle according to the detection distance and the grid granularity, wherein the detection range of the radar covers the grid map.
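A sketch of the grid-map precondition in claim 7: given the radar detection distance and the grid granularity, allocate a square grid centred on the vehicle that the detection range fully covers. The function name and the zero initialization are assumptions:

```python
import math

def make_grid_map(detection_distance, granularity):
    """Allocate an empty square occupancy grid centred on the
    vehicle; the map spans [-detection_distance, +detection_distance]
    on both axes, so the radar detection range covers the whole map."""
    cells = math.ceil(2 * detection_distance / granularity)
    return [[0] * cells for _ in range(cells)]
```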
8. An in-vehicle apparatus, characterized by comprising:
the input/output module is used for acquiring target point cloud data, wherein the target point cloud data comprises first point cloud data of a current frame and second point cloud data of history frames within a preset time window, the current frame is an image for which the drivable area is currently detected by a radar installed on a target vehicle, and the target point cloud data comprises a point cloud set detected by the radar;
the processing module is used for accumulating the first point cloud data and the second point cloud data to fill the first point cloud data so as to obtain third point cloud data;
clustering the filled third point cloud data to obtain at least one obstacle in the current frame, wherein each obstacle covers at least one point cloud;
extracting the outline of each obstacle;
performing grid binarization on the outline of each obstacle to obtain a candidate grid map, wherein the candidate grid map comprises a drivable area and a non-drivable area corresponding to the target vehicle in the current frame;
and removing the non-drivable area from the candidate grid map to obtain an effective drivable area of the target vehicle in the current frame and output the effective drivable area through the input/output module.
9. A computer device, the computer device comprising:
at least one processor and memory;
wherein the memory is for storing a computer program and the processor is for invoking the computer program stored in the memory to perform the method of any of claims 1-7.
10. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-7.
CN202311381057.6A 2023-10-24 2023-10-24 Vehicle drivable region detection method, related device and storage medium Pending CN117372995A (en)

Publications (1)

Publication Number Publication Date
CN117372995A true CN117372995A (en) 2024-01-09

Family

ID=89390623


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183381A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Method and device for detecting driving area of vehicle
CN115331189A (en) * 2022-09-01 2022-11-11 赛恩领动(上海)智能科技有限公司 Vehicle passable area detection method, system and storage medium
CN116309316A (en) * 2023-01-16 2023-06-23 重庆长安汽车股份有限公司 Method and device for detecting passable area of vehicle
CN116620294A (en) * 2023-04-21 2023-08-22 重庆长安汽车股份有限公司 Determination method, device, equipment, storage medium and vehicle for drivable area

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NING Xiaojuan et al.: "Road passable area detection method based on laser point cloud", Computer Engineering, vol. 48, no. 4, 30 April 2022 (2022-04-30), p. 22 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination