CN115018879A - Target detection method, computer-readable storage medium, and driving apparatus - Google Patents
- Publication number
- CN115018879A (application CN202210540475.4A)
- Authority
- CN
- China
- Prior art keywords
- initial detection
- point cloud
- cloud data
- frames
- detection frame
- Prior art date
- Legal status (assumed, not a legal conclusion)
- Pending
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tessellation
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06V2201/07—Target detection
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention relates to the technical field of automatic driving, and in particular to a target detection method, a computer-readable storage medium, and driving equipment, aiming to solve the prior-art problem that objects with light-reflective characteristics are easily falsely detected as targets when target detection is performed based on point cloud data. To this end, the target detection method of the invention comprises: acquiring point cloud data of an area to be detected; performing target detection based on the point cloud data to obtain an initial detection result, the initial detection result comprising at least one initial detection frame corresponding to a target to be detected; and correcting the initial detection result according to the tracking speed of the initial detection frame and the reflection intensity of the point cloud in the initial detection frame, thereby obtaining the final target detection result.
Description
Technical Field
The invention relates to the technical field of automatic driving, and particularly provides a target detection method, a computer-readable storage medium and driving equipment.
Background
In the prior art, target detection is generally performed by acquiring point cloud data of a region of interest containing a target, identifying a point cloud whose geometric shape is similar to that of the target, and outputting a target detection frame based on the outline of that point cloud. Taking vehicle recognition during driving as an example, point cloud data of the region of interest is collected, a point cloud geometrically similar to a vehicle is identified from the data, and a vehicle detection frame is output. However, with this existing method, some traffic markers covered with reflective material are falsely detected as vehicles, so the target detection accuracy is low.
Disclosure of Invention
The invention aims to solve the technical problem that, in the prior art, objects with light-reflective characteristics are easily falsely detected as targets when target detection is performed based on point cloud data.
In a first aspect, the present invention provides a method of target detection, comprising:
acquiring point cloud data of an area to be detected, wherein the point cloud data comprises the reflection intensity of point cloud;
performing target detection based on the point cloud data to obtain an initial detection result, wherein the initial detection result comprises at least one initial detection frame corresponding to a target to be detected;
determining a tracking speed of the initial detection frame;
and correcting the initial detection result according to the tracking speed and the reflection intensity of the point cloud in the initial detection frame to obtain a target detection result.
In some embodiments, the modifying the initial detection result according to the tracking speed and the reflection intensity of the point cloud in the initial detection frame to obtain the target detection result includes:
determining the proportion of high reflection intensity points of the initial detection frame according to the number of points with the reflection intensity greater than the reflection intensity threshold value in the initial detection frame and the total number of points of the point cloud in the initial detection frame;
and correcting the initial detection result according to the tracking speed, the speed threshold, the high reflection intensity point proportion and the high reflection intensity point proportion threshold to obtain the target detection result.
In some embodiments, the modifying the initial detection result according to the tracking speed, the speed threshold, the high reflection intensity point ratio, and the high reflection intensity point ratio threshold to obtain the target detection result includes:
removing, according to the tracking speed, the speed threshold, the high-reflection-intensity point proportion, and the high-reflection-intensity point proportion threshold, any initial detection frame whose tracking speed is smaller than the speed threshold and whose high-reflection-intensity point proportion is larger than the proportion threshold.
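The removal rule above can be sketched as follows; this is an illustrative simplification rather than the patent's implementation, and the threshold values, function name, and input layout are all hypothetical:

```python
import numpy as np

def correct_detections(boxes, tracking_speeds, intensities_per_box,
                       speed_threshold=0.5, intensity_threshold=200.0,
                       high_ratio_threshold=0.6):
    """Drop initial detection boxes that are both (nearly) static and
    dominated by high-reflection-intensity points, i.e. likely reflective
    traffic markers rather than vehicles."""
    kept = []
    for box, speed, intensities in zip(boxes, tracking_speeds, intensities_per_box):
        intensities = np.asarray(intensities, dtype=float)
        # proportion of points whose reflection intensity exceeds the threshold
        high_ratio = np.mean(intensities > intensity_threshold)
        is_pseudo = speed < speed_threshold and high_ratio > high_ratio_threshold
        if not is_pseudo:
            kept.append(box)
    return kept

# A moving vehicle (low reflectivity) is kept; a static reflective sign is removed.
boxes = ["vehicle_box", "sign_box"]
speeds = [8.0, 0.1]
intensities = [[30, 40, 50, 220], [250, 240, 230, 60]]
print(correct_detections(boxes, speeds, intensities))  # ['vehicle_box']
```

A box is filtered out only when both conditions hold at once, so a slow-moving but weakly reflective vehicle, or a fast-moving reflective truck, is still retained.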
In some embodiments, the performing target detection based on the point cloud data to obtain an initial detection result, where the initial detection result includes at least one initial detection frame corresponding to a target to be detected, and includes:
and performing target detection by using a three-dimensional target detection network model based on the point cloud data to obtain at least one initial detection frame corresponding to the target to be detected, wherein the initial detection frame is a three-dimensional initial detection frame.
In some embodiments, the determining the tracking speed of the initial detection frame comprises:
tracking the initial detection frame, and acquiring, for two frames of point cloud data separated by a preset number of frames, the displacement of the initial detection frame and the time interval between the two frames of point cloud data;
and determining the tracking speed of the initial detection frame according to the displacement and the time interval.
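Steps S131 and S132 above can be sketched as the following minimal computation (an illustration with hypothetical names, assuming the box centres of the two frames have already been converted into a common coordinate system):

```python
import numpy as np

def tracking_speed(center_a, center_b, timestamp_a, timestamp_b):
    """Tracking speed of an initial detection box: the displacement
    between its positions in two frames of point cloud data divided
    by the time interval between those frames."""
    displacement = np.linalg.norm(np.asarray(center_b) - np.asarray(center_a))
    return displacement / (timestamp_b - timestamp_a)

# Box centre moves 1 m between two frames captured 0.1 s apart -> 10 m/s.
print(tracking_speed([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.0, 0.1))
```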
In some embodiments, the tracking the initial detection frame to obtain the displacement of the initial detection frame in the two frames of point cloud data separated by the preset number of frames includes:
acquiring position information of the initial detection frame in two frames of point cloud data with a preset frame number at intervals, and converting the position information of the initial detection frame in the two frames of point cloud data to the same coordinate system to obtain converted position information of the initial detection frame in the two frames of point cloud data;
calculating the intersection-over-union (IoU) of any two initial detection frames in the two frames of point cloud data according to the converted position information of the initial detection frames, and determining two associated initial detection frames in the two frames of point cloud data according to the calculation result;
and determining the displacement of the initial detection frame according to the converted position information of the two associated initial detection frames.
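The association step can be illustrated with a simplified sketch: here the boxes are axis-aligned bird's-eye-view rectangles (x_min, y_min, x_max, y_max) rather than the patent's three-dimensional frames, which would need an oriented-box IoU, and the threshold value is a placeholder:

```python
def iou_2d(a, b):
    """Axis-aligned IoU of two rectangles (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(boxes_prev, boxes_curr, iou_threshold=0.3):
    """Pair each previous-frame box with the current-frame box of highest
    IoU, keeping only pairs whose IoU exceeds the threshold."""
    pairs = []
    for i, a in enumerate(boxes_prev):
        best_j, best_iou = -1, iou_threshold
        for j, b in enumerate(boxes_curr):
            v = iou_2d(a, b)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            pairs.append((i, best_j))
    return pairs

prev = [(0.0, 0.0, 4.0, 2.0)]
curr = [(0.5, 0.0, 4.5, 2.0), (10.0, 10.0, 14.0, 12.0)]
print(associate(prev, curr))  # [(0, 0)]
```

The associated pair gives the same physical target in the two frames; its displacement is then the distance between the two converted box positions.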
In some embodiments, the acquiring point cloud data of the area to be detected includes:
and acquiring point cloud data of the area to be detected based on the laser radar.
In a second aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an object detection method as described in any one of the above.
In a third aspect, the invention provides an electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, implements the object detection method of any one of the above.
In a fourth aspect, the invention provides a driving device comprising a driving device body, a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, implements the object detection method of any one of the above.
With the above technical solution, the method can acquire point cloud data of an area to be detected and perform target detection based on that data to obtain an initial detection result, the initial detection result comprising at least one initial detection frame corresponding to the target to be detected. The initial detection result is then corrected according to the tracking speed of the initial detection frame and the reflection intensity of the point cloud in the initial detection frame, yielding the final target detection result. This effectively avoids the prior-art problem that objects with light-reflective characteristics are easily falsely detected as targets when target detection is performed based on point cloud data, and effectively improves the accuracy of target detection.
Drawings
Preferred embodiments of the present invention are described below with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a tracking speed determining method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for correcting an initial detection result according to an embodiment of the present invention;
FIG. 4 is a schematic view of a driving environment provided by the present invention;
FIG. 5 is a schematic diagram of a vehicle test result based on an initial test result provided by the present invention;
fig. 6 is a schematic diagram of a vehicle detection result of a driving environment after correction according to the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
In the prior art, target detection is generally performed by acquiring point cloud data of a region of interest containing a target, identifying a point cloud whose geometric shape is similar to that of the target, and outputting a target detection frame based on the outline of that point cloud. Taking vehicle recognition during driving as an example, point cloud data of the region of interest is collected, a point cloud geometrically similar to a vehicle is identified, and a vehicle detection frame is output. However, when this existing method is used, the point cloud outlines of some traffic markers covered with reflective material spread outward, forming a point cloud geometry larger than the real object; because this geometry resembles a vehicle, such traffic markers are easily falsely detected as vehicles.
In view of this, the present invention provides a target detection method that acquires point cloud data of an area to be detected and performs target detection on that data to obtain an initial detection result, the initial detection result comprising at least one initial detection frame corresponding to a target to be detected. The method then determines the tracking speed of the initial detection frame and, combining it with the reflection intensity of the point cloud in the initial detection frame, corrects the initial detection result to obtain the final target detection result. The method can effectively avoid the prior-art problem that objects with light-reflective characteristics are easily falsely detected as targets on the basis of point cloud geometry alone, and effectively improves the accuracy of target detection.
Referring to fig. 1, fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present invention, which may include:
step S11: acquiring point cloud data of a to-be-detected area, wherein the point cloud data comprises the reflection intensity of point cloud;
step S12: carrying out target detection based on the point cloud data to obtain an initial detection result, wherein the initial detection result comprises at least one initial detection frame corresponding to a target to be detected;
step S13: determining the tracking speed of the initial detection frame;
step S14: and correcting the initial detection result according to the tracking speed and the reflection intensity of the point cloud in the initial detection frame to obtain a target detection result.
In some embodiments, the step S11 may be embodied to obtain the point cloud data of the area to be detected based on a laser radar, which is beneficial to obtain the point cloud data with high precision.
In other embodiments, the point cloud data of the area to be detected can be acquired based on a millimeter wave radar.
In some embodiments, the point cloud data may include at least reflection intensities of the point cloud, with one reflection intensity for each point in the point cloud.
In other embodiments, the point cloud data may also include three-dimensional coordinate data and/or color data for each point.
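As an illustrative sketch (not part of the patent), a frame of point cloud data carrying per-point reflection intensity can be held as an N x 4 array whose columns are x, y, z, and intensity; the values below are made up:

```python
import numpy as np

# Hypothetical frame of point cloud data: one row per point,
# columns are x, y, z (metres) and reflection intensity.
points = np.array([
    [1.0, 0.5, 0.2,  30.0],
    [1.1, 0.4, 0.3, 210.0],   # highly reflective return
    [0.9, 0.6, 0.1,  25.0],
], dtype=np.float64)

xyz = points[:, :3]        # three-dimensional coordinates of each point
intensity = points[:, 3]   # per-point reflection intensity

print(intensity.max())     # strongest return in this frame
```

The intensity column is exactly what the later correction step inspects when counting high-reflection-intensity points inside a detection frame.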
In some embodiments, the step S12 may specifically be to perform target detection by using a three-dimensional target detection network model based on the point cloud data, to obtain at least one initial detection frame corresponding to the target to be detected, where the initial detection frame is a three-dimensional initial detection frame. The target detection is carried out by adopting the three-dimensional target detection network, so that the information of the original data can be prevented from being lost, and the detection accuracy can be improved.
By way of example, the three-dimensional object detection network model may include a PointPillars model, a VoxelNet model, or a CenterPoint model.
In other embodiments, the initial detection result may also be obtained by clustering point cloud data first and then classifying the point cloud data by using a classification model.
In some embodiments, referring to fig. 2, fig. 2 is a schematic flow chart of the tracking speed determining method provided by the embodiment of the present invention, and step S13 may specifically be:
step S131: tracking the initial detection frame, and acquiring, for two frames of point cloud data separated by a preset number of frames, the displacement of the initial detection frame and the time interval between the two frames of point cloud data;
step S132: and determining the tracking speed of the initial detection frame according to the displacement and the time interval.
In some embodiments, the tracking of the initial detection frame in step S131, and the obtaining of the displacement of the initial detection frame in the two-frame point cloud data with the interval of the preset number of frames may specifically be:
acquiring position information of an initial detection frame in two frames of point cloud data with a preset frame number at intervals, and converting the position information of the initial detection frame in the two frames of point cloud data to the same coordinate system to obtain converted position information of the initial detection frame in the two frames of point cloud data;
calculating the intersection-over-union (IoU) of any two initial detection frames in the two frames of point cloud data according to the converted position information of the initial detection frames, and determining two associated initial detection frames in the two frames of point cloud data according to the calculation result;
and determining the displacement of the initial detection frame according to the converted position information of the two associated initial detection frames.
The preset frame number may be set as required, and as an example, the interval preset frame number may be zero, that is, two adjacent frames of point cloud data are used for determining the tracking speed, which will be described below based on two adjacent frames of point cloud data as an example.
In some embodiments, the position information of the initial detection frame corresponding to the target to be detected in each frame of point cloud data may be obtained by performing target detection on two adjacent frames of point cloud data.
The position information of an initial detection frame is its position in the vehicle-mounted coordinate system, and the vehicle-mounted coordinate systems corresponding to initial detection frames in different frames of point cloud data differ; therefore, before the intersection-over-union of any two initial detection frames in the two frames of point cloud data is calculated, the position information of the initial detection frames must be converted into the same coordinate system.
In some embodiments, converting the position information of the initial detection box in the two frames of point cloud data to the same coordinate system may be:
constructing a three-dimensional coordinate system with the vehicle's position at start-up as the origin, in which the vehicle's forward direction is the positive direction of the x-axis, the direction perpendicular to the x-axis and pointing to the left side of the vehicle is the y-axis, and the direction perpendicular to the plane of the x-axis and y-axis and pointing toward the roof is the z-axis;
acquiring first position information of a vehicle in a three-dimensional coordinate system when first frame point cloud data are acquired and acquiring second position information of the vehicle in the three-dimensional coordinate system when second frame point cloud data adjacent to the first frame are acquired;
calculating the position information of the initial detection frame in the first frame of point cloud data after the initial detection frame is converted into a three-dimensional coordinate system according to the first position information and the position information of the initial detection frame in the first frame of point cloud data under the corresponding vehicle-mounted coordinate system when the first frame of point cloud data is collected;
calculating the position information of the initial detection frame in the second frame of point cloud data after conversion into the three-dimensional coordinate system, according to the second position information and the position information of that initial detection frame in the vehicle-mounted coordinate system at the moment the second frame of point cloud data is collected. In this way, the position information of the initial detection frames in the two frames of point cloud data is converted to the same coordinate system, so that the intersection-over-union of two initial detection frames can be calculated from their converted position information in the two frames, and the displacement of the initial detection frame can be determined.
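The conversion into the common coordinate system can be illustrated in two dimensions; this is a hedged simplification in which the vehicle pose is a hypothetical (x, y, yaw) tuple in the global frame built at the starting position, and the z-axis is omitted:

```python
import math

def ego_to_global(box_center_ego, ego_pose):
    """Convert a box centre from the vehicle-mounted (ego) coordinate
    system into the fixed frame built at the vehicle's starting position.
    ego_pose = (x, y, yaw) of the vehicle in that fixed frame."""
    ex, ey, yaw = ego_pose
    bx, by = box_center_ego
    # rotate by the vehicle heading, then translate by the vehicle position
    gx = ex + bx * math.cos(yaw) - by * math.sin(yaw)
    gy = ey + bx * math.sin(yaw) + by * math.cos(yaw)
    return (gx, gy)

# Vehicle at (10, 0) heading along +x: a box 5 m ahead sits at (15, 0) globally.
print(ego_to_global((5.0, 0.0), (10.0, 0.0, 0.0)))  # (15.0, 0.0)
```

Applying this to the boxes of both frames places them in one frame of reference, after which the IoU association and displacement computation are well defined.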
By calculating the intersection-over-union of any two initial detection frames in the two adjacent frames of point cloud data, two associated initial detection frames can be determined whenever the calculation result meets a preset condition; the two associated initial detection frames correspond to the same target to be detected at different moments. In some embodiments, the preset condition may be that the intersection-over-union of the two initial detection frames is greater than an IoU threshold.
In other embodiments, the initial detection frame may be tracked based on a kalman filtering algorithm to obtain the position information of the initial detection frame in two adjacent frames of point cloud data.
In some embodiments, referring to fig. 3, fig. 3 is a schematic flowchart of a method for correcting an initial detection result according to an embodiment of the present invention, and step S14 may specifically be:
step S141: determining the proportion of high reflection intensity points of the initial detection frame according to the number of points with the reflection intensity greater than the reflection intensity threshold value in the initial detection frame and the total number of points of the point cloud in the initial detection frame;
step S142: and correcting the initial detection result according to the tracking speed, the speed threshold, the high reflection intensity point proportion and the high reflection intensity point proportion threshold to obtain a target detection result.
The speed threshold and the high reflection intensity point proportion threshold can be flexibly set according to the actual application requirement.
In some embodiments, step S142 may specifically be: removing the initial detection frames whose tracking speed is smaller than the speed threshold and whose high-reflection-intensity point proportion is larger than the proportion threshold. By combining the tracking speed with the reflection intensity of the point cloud in the initial detection frame, real targets and pseudo targets can be distinguished in terms of both motion characteristics and reflection-intensity distribution characteristics, and the initial detection result can then be corrected to obtain the final target detection result.
The target detection method provided by the embodiment of the invention acquires point cloud data of an area to be detected and performs target detection based on the point cloud data to obtain an initial detection result, the initial detection result comprising at least one initial detection frame corresponding to the target to be detected; the initial detection result is then corrected according to the tracking speed of the initial detection frame and the reflection intensity of the point cloud in the initial detection frame, so as to obtain the final target detection result.
As an example, the object detection method provided by the embodiment of the invention can be applied to vehicle detection in automatic driving.
Referring to fig. 4, fig. 4 is a schematic view of a driving environment provided by the present invention, where the driving environment includes a vehicle and a traffic marker located in the middle of a road and pasted with a reflective material, and applying the target detection method provided by the embodiment of the present invention to perform vehicle detection on the driving environment shown in fig. 4 may include:
acquiring point cloud data of a driving environment, wherein the point cloud data comprises the reflection intensity of a point cloud;
detecting the acquired point cloud data by using a vehicle detection model and obtaining an initial detection result;
if output were produced directly from the initial detection result, as shown in fig. 5, the traffic marker would also be falsely detected as a vehicle and a corresponding initial detection frame output, because the acquired point cloud contour of the traffic marker is expanded relative to its real contour and the expanded point cloud resembles a vehicle in shape;
determining the tracking speed of the initial detection frame;
counting the number of points with the reflection intensity larger than the threshold value of the reflection intensity in the initial detection frame, and determining the proportion of the high-reflection-intensity points of the initial detection frame according to the number of the points and the total number of the point clouds in the initial detection frame;
removing, according to the tracking speed, the set speed threshold, the high-reflection-intensity point proportion, and the set proportion threshold, the initial detection frames whose tracking speed is smaller than the speed threshold and whose high-reflection-intensity point proportion is larger than the proportion threshold, that is, filtering out the initial detection frame corresponding to the traffic marker; the corrected target detection result is then obtained and output, as shown in fig. 6, which is a schematic diagram of the corrected vehicle detection result for the driving environment provided by the invention.
By applying the target detection method provided by the embodiment of the invention to vehicle detection in the driving environment shown in fig. 4, the static traffic marker covered with reflective material can be effectively distinguished from vehicles by combining the tracking speed with the reflection intensity of the point cloud in the initial detection frame, and the initial detection result can be corrected accordingly. This solves the prior-art problem that traffic markers with reflective characteristics are easily falsely detected as vehicles on the basis of point cloud geometry alone, and effectively improves target detection accuracy.
Another aspect of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, can implement the object detection method in any of the above embodiments. The computer readable storage medium may be a storage device formed by including various electronic devices, and optionally, the computer readable storage medium is a non-transitory computer readable storage medium in the embodiment of the present invention.
In another aspect of the present invention, an electronic device is further provided, which includes: a memory and a processor, wherein the memory stores a computer program, and the computer program is executed by the processor to implement the object detection method according to any of the above embodiments.
In another aspect of the present invention, a driving device is further provided, which includes a driving device body, a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the object detection method described in any of the above embodiments is implemented.
In some embodiments, the driving device may further include a laser radar for acquiring point cloud data of the area to be detected.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
Claims (10)
1. A method of object detection, comprising:
acquiring point cloud data of an area to be detected, wherein the point cloud data comprises the reflection intensity of point cloud;
performing target detection based on the point cloud data to obtain an initial detection result, wherein the initial detection result comprises at least one initial detection frame corresponding to a target to be detected;
determining a tracking speed of the initial detection frame;
and correcting the initial detection result according to the tracking speed and the reflection intensity of the point cloud in the initial detection frame to obtain a target detection result.
2. The method of claim 1, wherein the correcting the initial detection result according to the tracking speed and the reflection intensity of the point cloud in the initial detection frame to obtain the target detection result comprises:
determining a high-reflection-intensity point proportion of the initial detection frame according to the number of points in the initial detection frame whose reflection intensity is greater than a reflection intensity threshold and the total number of points of the point cloud in the initial detection frame;
and correcting the initial detection result according to the tracking speed, a speed threshold, the high-reflection-intensity point proportion and a high-reflection-intensity point proportion threshold to obtain the target detection result.
3. The method of claim 2, wherein the correcting the initial detection result according to the tracking speed, the speed threshold, the high-reflection-intensity point proportion and the high-reflection-intensity point proportion threshold to obtain the target detection result comprises:
according to the tracking speed, the speed threshold, the high-reflection-intensity point proportion and the high-reflection-intensity point proportion threshold, removing any initial detection frame whose tracking speed is less than the speed threshold and whose high-reflection-intensity point proportion is greater than the high-reflection-intensity point proportion threshold.
4. The method of claim 1, wherein the performing object detection based on the point cloud data to obtain an initial detection result, the initial detection result including at least one initial detection frame corresponding to an object to be detected, comprises:
and performing target detection by using a three-dimensional target detection network model based on the point cloud data to obtain at least one initial detection frame corresponding to the target to be detected, wherein the initial detection frame is a three-dimensional initial detection frame.
5. The method of claim 1, wherein the determining the tracking speed of the initial detection frame comprises:
tracking the initial detection frame, and acquiring the displacement of the initial detection frame between two frames of point cloud data separated by a preset number of frames and the time interval between the two frames of point cloud data;
and determining the tracking speed of the initial detection frame according to the displacement and the time interval.
6. The method of claim 5, wherein the tracking the initial detection frame to acquire the displacement of the initial detection frame between the two frames of point cloud data separated by the preset number of frames comprises:
acquiring position information of the initial detection frame in the two frames of point cloud data separated by the preset number of frames, and converting the position information of the initial detection frame in the two frames of point cloud data into the same coordinate system to obtain converted position information of the initial detection frame in the two frames of point cloud data;
calculating the intersection over union (IoU) of any two initial detection frames across the two frames of point cloud data according to the converted position information of the initial detection frames, and determining two associated initial detection frames in the two frames of point cloud data according to the calculation result;
and determining the displacement of the initial detection frame according to the converted position information of the two associated initial detection frames.
7. The method according to claim 1, wherein the acquiring point cloud data of the area to be detected comprises:
and acquiring point cloud data of the area to be detected based on the laser radar.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the object detection method of any one of claims 1 to 7.
9. An electronic device, comprising a memory and a processor, wherein the memory has stored therein a computer program that, when executed by the processor, implements the object detection method of any one of claims 1 to 7.
10. A driving apparatus comprising a driving apparatus body, a memory, and a processor, the memory having stored therein a computer program that, when executed by the processor, implements the object detection method of any one of claims 1 to 7.
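As an informal illustration of the removal rule of claims 2 and 3 (drop an initial detection frame that is nearly static yet dominated by high-reflection-intensity points, a pattern typical of false detections on retroreflective surfaces), the correction step could be sketched in Python as follows. This is one possible reading of the claims, not the patent's implementation; the function name and all threshold values are hypothetical.

```python
import numpy as np

def correct_detections(boxes, intensities_per_box, speeds,
                       intensity_thresh=0.8, ratio_thresh=0.6,
                       speed_thresh=0.5):
    """Keep only boxes that are not (near-static AND high-intensity-dominated).

    intensities_per_box[i] is the sequence of reflection intensities of the
    points inside boxes[i]; speeds[i] is that box's tracking speed.
    All thresholds are illustrative placeholders, not values from the patent.
    """
    kept = []
    for box, intensities, speed in zip(boxes, intensities_per_box, speeds):
        n_total = len(intensities)
        n_high = int(np.sum(np.asarray(intensities) > intensity_thresh))
        high_ratio = n_high / n_total if n_total else 0.0
        # Claim 3: remove the box only when the tracking speed is below the
        # speed threshold AND the high-intensity proportion exceeds its threshold.
        if speed < speed_thresh and high_ratio > ratio_thresh:
            continue
        kept.append(box)
    return kept
```

Note that a fast-moving box is retained even if its high-intensity proportion is large, since a genuine moving target may legitimately carry many strong returns.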
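The cross-frame association and tracking-speed computation of claims 5 and 6 might likewise be sketched as below, using axis-aligned bird's-eye-view boxes for simplicity (the patent's three-dimensional detection frames would require a rotated-box IoU). Both boxes are assumed to already be transformed into the same coordinate system, as claim 6 requires; all names and the IoU gate are assumptions for illustration.

```python
import numpy as np

def iou_2d(a, b):
    """Axis-aligned IoU between two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def tracking_speed(boxes_prev, boxes_curr, dt, iou_gate=0.1):
    """Associate boxes across two frames by best IoU; return per-box speed.

    Speed is the centre displacement of two associated boxes divided by the
    time interval dt between the two point cloud frames (claim 5).
    """
    speeds = {}
    for j, cb in enumerate(boxes_curr):
        best_i, best_iou = -1, iou_gate
        for i, pb in enumerate(boxes_prev):
            v = iou_2d(pb, cb)
            if v > best_iou:
                best_i, best_iou = i, v
        if best_i >= 0:  # the two boxes are considered associated (claim 6)
            pb = boxes_prev[best_i]
            c_prev = np.array([(pb[0] + pb[2]) / 2, (pb[1] + pb[3]) / 2])
            c_curr = np.array([(cb[0] + cb[2]) / 2, (cb[1] + cb[3]) / 2])
            speeds[j] = float(np.linalg.norm(c_curr - c_prev)) / dt
    return speeds
```

A box with no sufficiently overlapping counterpart in the earlier frame simply receives no speed estimate here; a full tracker would handle such births and deaths explicitly.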
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210540475.4A CN115018879A (en) | 2022-05-17 | 2022-05-17 | Target detection method, computer-readable storage medium, and driving apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115018879A (en) | 2022-09-06 |
Family
ID=83068879
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210540475.4A Pending CN115018879A (en) | 2022-05-17 | 2022-05-17 | Target detection method, computer-readable storage medium, and driving apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115018879A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965943A (en) * | 2023-03-09 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | Target detection method, device, driving device, and medium |
CN115984805A (en) * | 2023-03-15 | 2023-04-18 | 安徽蔚来智驾科技有限公司 | Data enhancement method, target detection method and vehicle |
CN115980702A (en) * | 2023-03-10 | 2023-04-18 | 安徽蔚来智驾科技有限公司 | Target false detection preventing method, device, driving device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111192295B (en) | Target detection and tracking method, apparatus, and computer-readable storage medium | |
CN115018879A (en) | Target detection method, computer-readable storage medium, and driving apparatus | |
CN110443225B (en) | Virtual and real lane line identification method and device based on feature pixel statistics | |
EP2958054B1 (en) | Hazard detection in a scene with moving shadows | |
CN110705543A (en) | Method and system for recognizing lane lines based on laser point cloud | |
CN111487641B (en) | Method and device for detecting object by using laser radar, electronic equipment and storage medium | |
CN112101092A (en) | Automatic driving environment sensing method and system | |
US10769421B2 (en) | Method for performing pedestrian detection with aid of light detection and ranging | |
CN110632617B (en) | Laser radar point cloud data processing method and device | |
CN110320531B (en) | Obstacle identification method based on laser radar, map creation method and device | |
CN109946703B (en) | Sensor attitude adjusting method and device | |
CN112083441B (en) | Obstacle detection method and system for depth fusion of laser radar and millimeter wave radar | |
CN110674705A (en) | Small-sized obstacle detection method and device based on multi-line laser radar | |
CN110341621B (en) | Obstacle detection method and device | |
CN110388929B (en) | Navigation map updating method, device and system | |
CN111722249B (en) | Object recognition device and vehicle control system | |
CN108725318B (en) | Automobile safety early warning method and device and computer readable storage medium | |
CN112183381A (en) | Method and device for detecting driving area of vehicle | |
CN114155720B (en) | Vehicle detection and track prediction method for roadside laser radar | |
JPWO2018180081A1 (en) | Degraded feature identifying apparatus, degraded feature identifying method, degraded feature identifying program, and computer-readable recording medium recording the degraded feature identifying program | |
CN115327572A (en) | Method for detecting obstacle in front of vehicle | |
CN114296095A (en) | Method, device, vehicle and medium for extracting effective target of automatic driving vehicle | |
CN113945219A (en) | Dynamic map generation method, system, readable storage medium and terminal equipment | |
JP2020034322A (en) | Self-position estimation device | |
US20230314169A1 (en) | Method and apparatus for generating map data, and non-transitory computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||