WO2023065313A1 - Occlusion relationship judgment method and apparatus, storage medium, and electronic device - Google Patents

Occlusion relationship judgment method and apparatus, storage medium, and electronic device

Info

Publication number
WO2023065313A1
WO2023065313A1 (PCT/CN2021/125749, CN2021125749W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
occlusion
target
obstacle
distance value
Application number
PCT/CN2021/125749
Other languages
English (en)
French (fr)
Inventor
徐棨森
Original Assignee
深圳市速腾聚创科技有限公司 (Suteng Innovation Technology Co., Ltd., Shenzhen)
Application filed by 深圳市速腾聚创科技有限公司 (Suteng Innovation Technology Co., Ltd., Shenzhen)
Priority to CN202180102336.7A (published as CN118056228A)
Priority to PCT/CN2021/125749
Publication of WO2023065313A1


Definitions

  • The present application relates to the field of lidar, and in particular to a method, apparatus, storage medium, and electronic device for judging an occlusion relationship.
  • Lidar (light detection and ranging, LiDAR) is a radar system that emits laser beams to detect characteristic quantities such as the position and speed of a target. Its working principle is to transmit a detection signal (a laser beam) toward a target object (such as a vehicle, aircraft, or missile), and then to compare and process the received echo signal reflected from the target object against the transmitted signal, thereby obtaining relevant information about the target object, such as distance, azimuth, height, speed, attitude, and even shape parameters, so that the target object can be detected, tracked, and identified.
  • Embodiments of the present application provide a method, apparatus, storage medium, and electronic device for judging occlusion relationships, which make the judgment of occlusion relationships more efficient and reliable, reduce the possibility of missed and erroneous judgments, and effectively improve driving safety and reliability. The technical solution is as follows:
  • In a first aspect, an embodiment of the present application provides a method for judging an occlusion relationship, the method comprising: acquiring point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value; acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds and the number of occlusion point clouds being greater than 0; and determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
  • In a second aspect, an embodiment of the present application provides an apparatus for judging an occlusion relationship, the apparatus comprising:
  • a first acquisition module, used to acquire point cloud data of the target point cloud and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
  • a second acquisition module, used to acquire occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds and the number of occlusion point clouds being greater than 0;
  • an occlusion determination module, configured to determine that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
  • In a third aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.
  • In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to execute the above method steps.
  • In a fifth aspect, an embodiment of the present application provides an occlusion relationship recognition system, including the electronic device of the fourth aspect and a radar sensor connected to the electronic device; the radar sensor is used to collect point cloud data corresponding to a target scene.
  • This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
  • FIG. 1A is a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application;
  • FIG. 1B is an assembly schematic diagram of a vehicle and a vehicle-mounted radar provided by an embodiment of the present application;
  • FIG. 1C is a schematic diagram of a scene in which a target obstacle is occluded by an obstacle, provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a method for judging an occlusion relationship provided by an embodiment of the present application;
  • FIG. 3 is a schematic dot matrix diagram of a target point cloud and the other point clouds in its neighborhood provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a scene in which the occluded target side of a target obstacle is determined, provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of another method for judging an occlusion relationship provided by an embodiment of the present application;
  • FIG. 6 is a schematic dot matrix diagram of occlusion point clouds provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an apparatus for judging an occlusion relationship provided by an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • In the present application, unless otherwise specified, "plural" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
  • As shown in FIG. 1A, which is a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application, the scene includes: a vehicle 101 equipped with a vehicle-mounted radar, a tree 102A and a pedestrian 102B that have an occlusion relationship, and a flower bed 103A and a truck 103B that have an occlusion relationship. The tree 102A is the occluder of the pedestrian 102B, and the pedestrian 102B is the occluded object of the tree 102A; the flower bed 103A is the occluder of the truck 103B, and the truck 103B is the occluded object of the flower bed 103A. It can be understood that the tree 102A, the pedestrian 102B, the flower bed 103A, and the truck 103B are all obstacles for the vehicle 101.
  • As shown in FIG. 1B, which is an assembly schematic diagram of the vehicle and the vehicle-mounted radar provided in an embodiment of the present application, the diagram includes a vehicle 101 and a vehicle-mounted radar 101A. The vehicle 101 is provided with the vehicle-mounted radar 101A. It can be understood that, in this application, the vehicle 101 is only a carrying platform for the lidar: the carrying platform bears the lidar and drives it to move, so the lidar acquires corresponding linear and angular velocities. The carrying platform can be a vehicle, a drone, or another device, which is not limited in this application.
  • The vehicle-mounted radar 101A may be a millimeter-wave radar, a lidar, or a similar radar; for example, the lidar may be a mechanical lidar, a solid-state lidar, or the like. The vehicle-mounted radar 101A obtains, through time-of-flight (TOF) ranging, frequency-modulated continuous wave (FMCW) ranging, or other ranging methods, reflected signals carrying one or more of spatial position coordinates, time stamps, echo strengths, and other information; each reflected signal is taken as a data point, and the data point further includes one or more of the distance, angle, radial velocity, and other information of the corresponding obstacle relative to the vehicle-mounted radar.
  • As shown in FIG. 1C, which is a schematic diagram of a scene in which a target obstacle is occluded by an obstacle, provided in an embodiment of the present application, the scene includes a truck 105 and a pedestrian 104. In the field of radar ranging, the collected point cloud is usually passed through a clustering algorithm and/or a deep learning network to obtain the contours of multiple obstacles, and point cloud completion (a technique that estimates a complete point cloud from a missing one, thereby obtaining a higher quality point cloud and achieving the purpose of repair) is further used to assist in completing the obstacle contours. As shown in FIG. 1C, when the truck 105 is occluded by the pedestrian 104, the lidar collects point cloud data; the point cloud corresponding to the truck 105 and the point cloud corresponding to the pedestrian 104 are obtained by parsing with the SqueezeSeg lightweight deep learning network, the contour of the truck 105 is restored from its point cloud, and the contour of the pedestrian 104 is restored from its point cloud. From the two contours, the occlusion relationship between the pedestrian 104 and the truck 105 is judged as: the pedestrian 104 partially occludes the truck 105. Judging the occlusion relationships among multiple obstacles by this procedure is inefficient, and its accuracy depends on the accuracy of the obstacle contours formed from the point cloud, so the accuracy cannot be guaranteed.
  • In one embodiment, a method for judging an occlusion relationship is proposed. The method can be implemented by a computer program and can run on an occlusion relationship judging apparatus based on the von Neumann architecture. The computer program can be integrated into an application or run as an independent utility application. Specifically, the method for judging the occlusion relationship includes:
  • S101: acquiring point cloud data of the target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud. The point cloud data includes a distance value, which is the distance between the object that produced the point cloud and the radar emitting the laser signal.
  • The neighborhood of the target point cloud can be understood as a preset range centered on the target point cloud, for example, a circle or square centered on the target point cloud whose diameter or side length is the window length L1. The other point clouds in the neighborhood of the target point cloud can be understood as all point clouds within the window length L1 centered on the target point cloud (m, n), that is, Dist(m-floor(L1/2):m+floor(L1/2), n-floor(L1/2):n+floor(L1/2), t), where t is the number of point clouds in the neighborhood of the target point cloud. In one embodiment, the other point clouds in the neighborhood of the target point cloud can instead be understood as the point clouds immediately adjacent to the target point cloud in the four directions up, down, left, and right. It can be understood that, as in FIG. 3, point clouds exist both inside the neighborhood of the target point cloud and outside the neighborhood; the latter are not shown in the figure.
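  • As a rough illustration of the windowing described above, the following Python sketch extracts the L1-by-L1 neighborhood of the target point (m, n) from a range image; the array name dist_img and the clamping at the image border are assumptions for illustration, not details given in the application.

```python
import numpy as np

def neighborhood(dist_img: np.ndarray, m: int, n: int, L1: int) -> np.ndarray:
    """Return the distance values in the L1-length window centered on (m, n).

    dist_img is assumed to be a 2D range image: entry (row, col) holds the
    distance value of the return from one beam/azimuth cell.
    """
    half = L1 // 2  # floor(L1 / 2), matching the Dist(...) expression above
    r0, r1 = max(m - half, 0), min(m + half + 1, dist_img.shape[0])
    c0, c1 = max(n - half, 0), min(n + half + 1, dist_img.shape[1])
    return dist_img[r0:r1, c0:c1]
```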
  • As shown in FIG. 3, which is a dot matrix diagram of a target point cloud and the other point clouds in its neighborhood provided by the present application, the diagram includes: the target point cloud 301, the neighborhood 301A of the target point cloud 301, the other point clouds 302 within the neighborhood 301A, and the point cloud 302A included among the other point clouds. It can be understood that the neighborhood of the target point cloud can be of any shape; the shape shown in FIG. 3 is only an illustration.
  • S102: acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud. In the embodiment of the present application, the distance values of all point clouds in the neighborhood of the target point cloud are calculated. For example, if the neighborhood of the target point cloud includes 319 point clouds in addition to the target point cloud, the distance values corresponding to each of those 319 point clouds and to the target point cloud are calculated.
  • The point clouds in the neighborhood that do not belong to the same object as the target point cloud are acquired; that is, the other point clouds in the neighborhood of the target point cloud are divided into the point clouds corresponding to each of multiple obstacles. Methods for verifying that the target point cloud and the other point clouds in its neighborhood do not belong to the same object include the DBSCAN algorithm, deep learning networks, and the like. In the embodiment of the present application, the other point clouds in the neighborhood of the target point cloud may first be divided into the point clouds corresponding to multiple obstacles, and the difference between the distance value of each point cloud in the neighborhood and the distance value of the target point cloud may then be calculated; the application also covers the reverse order, in which the distance differences are calculated first and the division into per-obstacle point clouds is performed afterwards.
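  • As a minimal sketch of the same-object check named above, the snippet below clusters the neighborhood points with scikit-learn's DBSCAN; the eps and min_samples values are illustrative assumptions, not parameters given in the application.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_objects(points_xyz: np.ndarray, eps: float = 0.4, min_samples: int = 5) -> np.ndarray:
    """Label each neighborhood point with a cluster id; -1 marks noise.

    Points on different obstacles receive different labels, so a neighbor
    belongs to the same object as the target point exactly when it carries
    the same label as the target point.
    """
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
```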
  • Occlusion point clouds are acquired whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud; the other point clouds in the neighborhood of the target point cloud include the occlusion point clouds, and the number of occlusion point clouds is greater than 0. For example, as shown in FIG. 3, the first threshold is 0.5 m. The processor determines that the target point cloud 301 and the patch of point cloud 302A in the neighborhood 301A do not belong to the same object, and obtains a distance value of 10 m for the target point cloud 301 and a distance value of 9.2 m for the point cloud 302. Since the absolute value of the difference between the distance value of the target point cloud and the distance value of the point cloud 302 is greater than the first threshold, the point cloud 302 is judged to be an occlusion point cloud. It can be understood that the present application places no limitation on the specific value of the first threshold or on the manner of obtaining it.
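  • The core test just described reduces to a distance comparison combined with the same-object mask; the sketch below assumes the threshold value from the example and hypothetical array names.

```python
import numpy as np

FIRST_THRESHOLD = 0.5  # metres, the example value used in the text

def occlusion_mask(target_dist: float, neigh_dist: np.ndarray,
                   same_object: np.ndarray) -> np.ndarray:
    """True where a neighbor is an occlusion point cloud:
    |d_i - d_target| >= threshold and the point lies on a different object."""
    return (np.abs(neigh_dist - target_dist) >= FIRST_THRESHOLD) & ~same_object

# Example from the text: |10.0 - 9.2| = 0.8 >= 0.5 on a different object,
# so point cloud 302 is judged to be an occlusion point cloud.
```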
  • S103: determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud. If occlusion point clouds are obtained whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, it is determined that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud. It can be understood that this application places no limitation on how the target point cloud and the target obstacle are selected. The occlusion relationship can be understood as a composite of information characterizing whether the target obstacle corresponding to the target point cloud occludes the obstacle corresponding to the occlusion point clouds or that obstacle occludes the target obstacle, the degree to which the target obstacle is occluded, the occluded part of the target obstacle, and the like.
  • For example, the distance value of the target point cloud places the target obstacle 10 meters from the lidar, and the distance value of the occlusion point cloud places the obstacle corresponding to the occlusion point cloud 9.2 meters from the lidar. The absolute value of the difference between the two distance values is greater than the first threshold, and the distance value of the occlusion point cloud is smaller than the distance value of the target point cloud, so the occlusion relationship between the obstacle corresponding to the occlusion point cloud and the target obstacle is determined as: the obstacle corresponding to the occlusion point cloud occludes the target obstacle.
  • In one embodiment, after step S103 determines that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, the method further includes: determining the movement trajectory of the target obstacle; and judging the occluded target side of the target obstacle according to the movement trajectory of the target obstacle and its occlusion relationship with the obstacles corresponding to the occlusion point clouds.
  • The movement trajectory of the target obstacle characterizes the direction of the target obstacle's movement trend as observed by the lidar. In one embodiment, the method for determining the movement trajectory of the target obstacle includes: obtaining a coarse trajectory from the initial pose of the target obstacle and the outputs of an inertial sensor at different moments; obtaining the three-dimensional point cloud data of the multi-line lidar at different moments and rasterizing it to obtain grayscale images; performing feature matching on the feature points between two grayscale frames at adjacent moments to obtain feature-matching point pairs; obtaining the transformation relationship between the point cloud data at adjacent moments from the matched point pairs; and obtaining the precise movement trajectory of the target obstacle from the transformation relationship. SURF (Speeded-Up Robust Features) feature matching is performed within a limited range based on the coarse trajectory, which reduces the computation and running time of the matching. Other methods of determining the movement trajectory of the target obstacle are also covered and are not limited here.
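  • A hedged sketch of the frame-to-frame matching step follows; ORB stands in for SURF here because SURF ships only in OpenCV's non-free contrib build, and the raster inputs and the rigid-transform fit are assumptions for illustration rather than the application's exact pipeline.

```python
import cv2
import numpy as np

def frame_transform(gray_prev: np.ndarray, gray_curr: np.ndarray) -> np.ndarray:
    """Estimate the rigid 2D transform between two rasterized point cloud frames."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray_prev, None)
    kp2, des2 = orb.detectAndCompute(gray_curr, None)
    # Brute-force Hamming matching with cross-check; keep the best 50 pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Rotation + translation (+ uniform scale) fit between the matched pairs;
    # chaining these transforms over frames yields the refined trajectory.
    transform, _ = cv2.estimateAffinePartial2D(src, dst)
    return transform
```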
  • As shown in FIG. 4, which is a schematic diagram of a scene in which the occluded target side of the target obstacle is determined, provided by an embodiment of the present application: the lidar 401 collects a point cloud 401A, and the point cloud 401A includes a point cloud 402 corresponding to an obstacle 402 and a point cloud 403 corresponding to an obstacle 403. Taking the obstacle 403 as the target obstacle, the dot matrix diagram composed of the collected point cloud indicates that the obstacle 402 occludes the right side of the target obstacle 403; however, after the movement trajectory of the target obstacle 403 and the movement trajectory of the lidar 401 are obtained, it is judged that the obstacle 402 actually occludes the left side of the target obstacle 403, that is, the occluded target side of the target obstacle 403 is the left side. In the embodiment of the present application, judging the truly occluded target side of the target obstacle from its movement trajectory improves the accuracy and fidelity of the judgment of the occlusion relationship between the target obstacle and other obstacles.
  • This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
  • As shown in FIG. 5, which is a schematic flowchart of another occlusion relationship judging method proposed by the present application, the method can be implemented by a computer program and can run on an occlusion relationship judging apparatus based on the von Neumann architecture. The computer program can be integrated into an application or run as an independent utility application. Specifically:
  • S201: acquiring point cloud data of the target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud. Step S201 is the same as step S101 and will not be repeated here.
  • S202: acquiring point clouds to be determined whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold. In the embodiment of the present application, the distance values of all point clouds in the neighborhood of the target point cloud are calculated. For example, if the neighborhood of the target point cloud includes 319 point clouds in addition to the target point cloud, the distance values corresponding to each of those 319 point clouds and to the target point cloud are calculated. As shown in FIG. 3, the first threshold is 0.5 m, the acquired distance value of the target point cloud 301 is 10 m, and the distance value of the point cloud 302 is 9.2 m; the absolute value of the difference between the two distance values is greater than the first threshold, so the point cloud 302 is judged to be an occlusion point cloud. It can be understood that the present application places no limitation on the specific value of the first threshold or on the manner of obtaining it.
  • S203: acquiring occlusion point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to a second threshold. The point clouds to be determined include the occlusion point clouds, and the number of occlusion point clouds is greater than 0. The multiple point clouds to be determined included in the neighborhood of the target point cloud are determined based on the distance values; among the point clouds to be determined, those whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to the second threshold are taken as occlusion point clouds.
  • In another embodiment, among the point clouds to be determined, multiple adjacent point cloud pairs that lie on the same beam as the target point cloud and are adjacent to it are acquired; the absolute values of the differences between the angle values of these point cloud pairs are determined, and when the absolute value of an angle difference is less than the second threshold, the corresponding point cloud pairs are determined to be non-occlusion point clouds.
  • Lying on the same beam as the target point cloud can be understood as follows: because the lidar collects point cloud data by scanning multiple laser heads back and forth horizontally, the set of point cloud data scanned by the same laser head within one period belongs to the same beam. For points on the same beam, even if the distance difference from the target point cloud exceeds the first threshold, when the difference between the angle values formed by points that are on the same beam as, and adjacent to, the target point cloud is less than the second threshold, that is, when the angular relationships between multiple adjacent point pairs are close, it is judged that these adjacent point cloud pairs and the target point cloud lie on a plane that forms a large angle with the lidar's illumination direction, and the obstacles corresponding to those point cloud pairs do not occlude the target obstacle corresponding to the target point cloud.
  • In the embodiment of the present application, the angle values in the point cloud data are used to determine whether the other point clouds in the neighborhood of the target point cloud belong to the same object; this judgment method is simple and reliable and effectively improves judgment efficiency. In another embodiment, a clustering algorithm is used to classify the other point clouds in the neighborhood of the target point cloud while the angle values of the point clouds are used to exclude non-occlusion point clouds, which improves the reliability of judging occlusion point clouds.
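  • A minimal sketch of the angle screen in step S203, assuming a hypothetical threshold value (the application fixes no number) and angle values already expressed in radians:

```python
import numpy as np

SECOND_THRESHOLD = np.deg2rad(2.0)  # illustrative value only

def angle_screen(candidate_angles: np.ndarray, target_angle: float) -> np.ndarray:
    """Keep a candidate as an occlusion point only when its angle value differs
    from the target point's angle value by at least the second threshold.

    Adjacent same-beam points whose angle differences stay below the threshold
    lie on a surface at a grazing angle to the lidar and are screened out as
    non-occlusion point clouds."""
    return np.abs(candidate_angles - target_angle) >= SECOND_THRESHOLD
```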
  • S204: determining, according to the distance values of the occlusion point clouds, the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud. In the embodiment of the present application, the occlusion relationship includes an occluding relationship, in which an obstacle occludes the target obstacle, and an occluded relationship, in which an obstacle is occluded by the target obstacle.
  • As shown in FIG. 6, which is a dot matrix diagram of occlusion point clouds provided by an embodiment of the present application, the diagram includes: the target point cloud 301 corresponding to the target obstacle, the first occlusion point cloud 602 corresponding to a first obstacle, and the second occlusion point cloud 601 corresponding to a second obstacle. The first obstacle corresponding to the first occlusion point cloud 602 is in an occluded relationship with the target obstacle, and the second obstacle corresponding to the second occlusion point cloud 601 is in an occluding relationship with the target obstacle.
  • S205A: determining the first occlusion point clouds whose distance values are greater than the distance value of the target point cloud. The occlusion point clouds include the first occlusion point clouds. According to the distance values of the occlusion point clouds, the occlusion point clouds whose distance values are greater than the distance value of the target point cloud are obtained and defined as the first occlusion point clouds. For example, as shown in FIG. 6, the distance value of the target point cloud 301 is 10 m and the distance value of the first occlusion point cloud 602 is 11 m.
  • S206A: determining that the first obstacle corresponding to the first occlusion point cloud is in an occluded relationship with the target obstacle. As shown in FIG. 6, the relationship between the first obstacle corresponding to the first occlusion point cloud 602 and the target obstacle corresponding to the target point cloud 301 is an occluded relationship; that is, the first obstacle corresponding to the first occlusion point cloud 602 is occluded by the target obstacle.
  • S205B: determining the second occlusion point clouds whose distance values are smaller than the distance value of the target point cloud. The occlusion point clouds include the second occlusion point clouds. According to the distance values of the occlusion point clouds, the occlusion point clouds whose distance values are smaller than the distance value of the target point cloud are obtained and defined as the second occlusion point clouds. For example, as shown in FIG. 6, the distance value of the target point cloud 301 is 10 m and the distance value of the second occlusion point cloud 601 is 9 m.
  • S206B: determining that the second obstacle corresponding to the second occlusion point cloud is in an occluding relationship with the target obstacle. As shown in FIG. 6, the relationship between the second obstacle corresponding to the second occlusion point cloud 601 and the target obstacle corresponding to the target point cloud 301 is an occluding relationship; that is, the second obstacle corresponding to the second occlusion point cloud 601 occludes the target obstacle.
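  • The S205/S206 branches amount to a sign test on the distance difference; a sketch under the example's 10 m / 11 m / 9 m values:

```python
def classify_occlusion(target_dist: float, occl_dist: float) -> str:
    """Classify one occlusion point cloud against the target point cloud.

    The absolute difference already passed the first threshold, so the two
    values cannot be equal here. A larger distance means the obstacle sits
    behind the target obstacle; a smaller one means it sits in front of it."""
    if occl_dist > target_dist:
        return "occluded by the target obstacle"  # first occlusion point cloud, e.g. 11 m vs 10 m
    return "occludes the target obstacle"         # second occlusion point cloud, e.g. 9 m vs 10 m
```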
  • In the embodiment of the present application, the occlusion relationship between the target obstacle and the obstacles corresponding to the occlusion point clouds is further judged by the distance values of the point clouds, which improves judgment accuracy.
  • In one embodiment, after step S204 the method further includes: determining the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle.
  • An edge point cloud can be understood as a point cloud that forms the outer contour of the target obstacle, and a group of edge point clouds is the group of point clouds that forms one edge of the outer contour of the target obstacle. For example, if the target obstacle is a cylindrical trash can, it includes 4 groups of edge point clouds; if the target obstacle is a tree, it includes 12 groups of edge point clouds.
  • The occluded condition of the target obstacle can be understood as the occluded part and occlusion rate of the target obstacle; for example, one side of the target obstacle is completely occluded, or one side of the target obstacle is partially occluded. As another example, suppose the point cloud corresponding to the target obstacle includes 4 groups of edge point clouds: if the left side of one group of edge point clouds is completely covered with occlusion point clouds, it is judged that the left side of the target obstacle is completely occluded; if, in another group of edge point clouds, occlusion point clouds are distributed above half of the edge point clouds while the point cloud data above the other half is empty, it is judged that the upper side of the target obstacle is partially occluded.
  • In one embodiment, determining the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds includes: acquiring at least one group of edge point clouds corresponding to the target obstacle; determining the occlusion point clouds within the neighborhood of each edge point cloud in each group; and determining the occluded condition of a target side of the target obstacle according to whether the occlusion point clouds corresponding to each group of edge point clouds are located on the same target side of the target obstacle and whether the distance values of the occlusion point clouds corresponding to the edge point clouds are smaller than the distance values of the edge point clouds.
  • For example, if the occlusion point clouds corresponding to a group of edge point clouds of the target point cloud are located on the right side of that group and the distance values of those occlusion point clouds are all smaller than the average distance value of the group, it is determined that the target side corresponding to that group of edge point clouds is entirely occluded, for example, that the upper side of the target obstacle is entirely occluded. As another example, if occlusion point clouds exist in the neighborhoods of only some of the edge point clouds in a group, it is determined that the target side corresponding to that group is partially occluded, or partially occludes other obstacles.
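  • A sketch of the per-side check follows, assuming the same-side test has already grouped the occlusion points; the function and variable names are hypothetical, and per-edge-point distance lists stand in for the neighborhood search described above.

```python
import numpy as np

def side_condition(edge_dists: list, occl_dists: list) -> str:
    """Judge one target side from one group of edge point clouds.

    edge_dists[i] is the distance value of edge point i; occl_dists[i] holds
    the distance values of same-side occlusion points found in its
    neighborhood (an empty list when there are none)."""
    covered = [len(o) > 0 and np.all(np.asarray(o) < d)
               for d, o in zip(edge_dists, occl_dists)]
    if all(covered):
        return "target side entirely occluded"
    if any(covered):
        return "target side partially occluded"
    return "target side not occluded"
```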
  • This application judges the distribution of occlusion point clouds within each group of edge point clouds corresponding to the target obstacle to further judge the occluded condition of the target obstacle, which enriches the content of the occlusion relationship judgment for the target obstacle and improves judgment accuracy.
  • This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
  • The following is an apparatus embodiment of the present application, which can be used to execute the method embodiments; for details not disclosed in the apparatus embodiment, please refer to the method embodiments. Referring to FIG. 7, it shows a schematic structural diagram of an apparatus for judging an occlusion relationship provided by an exemplary embodiment of the present application. The apparatus can be implemented as all or part of a device through software, hardware, or a combination of the two. The apparatus includes a first acquisition module 701, a second acquisition module 702, and an occlusion determination module 703.
  • the first acquisition module 701, used to acquire point cloud data of the target point cloud and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
  • the second acquisition module 702, configured to acquire occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds and the number of occlusion point clouds being greater than 0;
  • the occlusion determination module 703, configured to determine that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
  • In one embodiment, the point cloud data further includes an angle value, and the second acquisition module 702 includes:
  • a first acquisition unit, used to acquire point clouds to be determined whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold, the other point clouds in the neighborhood of the target point cloud including the point clouds to be determined and the number of point clouds to be determined being greater than or equal to 1;
  • a second acquisition unit, configured to acquire occlusion point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to a second threshold, the point clouds to be determined including the occlusion point clouds and the number of occlusion point clouds being greater than 0.
  • In one embodiment, the occlusion determination module 703 includes:
  • an occlusion determination unit, configured to determine, according to the distance values of the occlusion point clouds, the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, the occlusion relationship including an occluding relationship that occludes the target obstacle and an occluded relationship that is occluded by the target obstacle.
  • In one embodiment, the occlusion determination unit includes:
  • a first determination subunit, configured to determine first occlusion point clouds whose distance values are greater than the distance value of the target point cloud, the occlusion point clouds including the first occlusion point clouds and the number of first occlusion point clouds being greater than 0;
  • a second determination subunit, configured to determine that the first obstacle corresponding to the first occlusion point clouds is in an occluded relationship with the target obstacle.
  • In one embodiment, the occlusion determination unit includes:
  • a third determination subunit, configured to determine second occlusion point clouds whose distance values are smaller than the distance value of the target point cloud, the occlusion point clouds including the second occlusion point clouds and the number of second occlusion point clouds being greater than 0;
  • a fourth determination subunit, configured to determine that the second obstacle corresponding to the second occlusion point clouds is in an occluding relationship with the target obstacle.
  • In one embodiment, the apparatus for judging the occlusion relationship further includes:
  • a condition determination module, used to determine the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle.
  • In one embodiment, the condition determination module includes:
  • an acquisition unit, configured to acquire at least one group of edge point clouds corresponding to the target obstacle;
  • an edge unit, used to determine whether occlusion point clouds exist in the neighborhood of each edge point cloud in each group of edge point clouds and whether the occlusion point clouds corresponding to each group of edge point clouds are located on the same target side of the target obstacle;
  • a determination unit, configured to determine, if so, that the target side of the target obstacle is occluded.
  • This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
  • It should be noted that, when the occlusion relationship judging apparatus provided in the above embodiments executes the occlusion relationship judging method, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions can be assigned to different functional modules as required, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment provided above and the embodiments of the method for judging the occlusion relationship belong to the same concept; the implementation process is detailed in the method embodiments and is not repeated here.
  • An embodiment of the present application also provides a computer storage medium. The computer storage medium can store a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the occlusion relationship judging method of the embodiments shown in FIGS. 1-7; the specific execution process can refer to the specific descriptions of the embodiments shown in FIGS. 1-6 and is not repeated here.
  • The present application also provides a computer program product. The computer program product stores at least one instruction, and the at least one instruction is loaded by the processor to execute the occlusion relationship judging method of the embodiments shown in FIGS. 1-6; the specific execution process can refer to the specific descriptions of those embodiments and is not repeated here.
  • Referring to FIG. 8, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application, the electronic device 800 may include: at least one processor 801, at least one network interface 804, a user interface 803, a memory 805, and at least one communication bus 802.
  • The communication bus 802 is used to realize connection and communication between these components.
  • The user interface 803 may include a display screen (Display) and a camera (Camera); optionally, the user interface 803 may also include a standard wired interface and a wireless interface.
  • The network interface 804 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • The processor 801 may include one or more processing cores. The processor 801 connects various parts of the entire server 800 using various interfaces and lines, and executes the various functions of the server 800 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 805 and by calling the data stored in the memory 805. Optionally, the processor 801 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 801 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 801 and may instead be implemented by a separate chip.
  • The memory 805 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 805 includes a non-transitory computer-readable storage medium. The memory 805 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 805 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments and the like. Optionally, the memory 805 may also be at least one storage device located remotely from the aforementioned processor 801. As shown in FIG. 8, as a computer storage medium, the memory 805 may include an operating system, a network communication module, a user interface module, and an occlusion relationship judging application program.
  • In the electronic device 800 shown in FIG. 8, the user interface 803 is mainly used to provide an input interface for the user and to obtain the data input by the user, while the processor 801 can be used to call the occlusion relationship judging application program stored in the memory 805 and specifically perform the following operations:
  • acquiring point cloud data of the target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
  • acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds and the number of occlusion point clouds being greater than 0;
  • determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
  • In one embodiment, the point cloud data further includes an angle value; when the processor 801 performs the acquiring of occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, it specifically performs:
  • acquiring point clouds to be determined whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold, the other point clouds in the neighborhood of the target point cloud including the point clouds to be determined and the number of point clouds to be determined being greater than or equal to 1;
  • acquiring occlusion point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to the second threshold, the point clouds to be determined including the occlusion point clouds and the number of occlusion point clouds being greater than 0.
  • In one embodiment, when the processor 801 performs the determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, it specifically performs:
  • determining, according to the distance values of the occlusion point clouds, the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, the occlusion relationship including an occluding relationship that occludes the target obstacle and an occluded relationship that is occluded by the target obstacle.
  • In one embodiment, when the processor 801 performs the determining, according to the distance values of the occlusion point clouds, of the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, it specifically performs:
  • determining first occlusion point clouds whose distance values are greater than the distance value of the target point cloud, the occlusion point clouds including the first occlusion point clouds and the number of first occlusion point clouds being greater than 0;
  • determining that the first obstacle corresponding to the first occlusion point clouds is in an occluded relationship with the target obstacle.
  • In one embodiment, when the processor 801 performs the determining, according to the distance values of the occlusion point clouds, of the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, it specifically performs:
  • determining second occlusion point clouds whose distance values are smaller than the distance value of the target point cloud, the occlusion point clouds including the second occlusion point clouds and the number of second occlusion point clouds being greater than 0;
  • determining that the second obstacle corresponding to the second occlusion point clouds is in an occluding relationship with the target obstacle.
  • In one embodiment, after the processor 801 performs the determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, it specifically performs:
  • determining the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle.
  • In one embodiment, when the processor 801 performs this determining of the occluded condition of the target obstacle, it specifically performs:
  • acquiring at least one group of edge point clouds corresponding to the target obstacle;
  • determining whether occlusion point clouds exist in the neighborhood of each edge point cloud in each group of edge point clouds and whether the occlusion point clouds corresponding to each group of edge point clouds are located on the same target side of the target obstacle;
  • if so, determining that the target side of the target obstacle is occluded.
  • In one embodiment, after the processor 801 performs the determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, it further performs:
  • determining the movement trajectory of the target obstacle;
  • judging the occluded target side of the target obstacle according to the movement trajectory of the target obstacle and the occlusion relationship with the obstacles corresponding to the occlusion point clouds.
  • This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
  • An embodiment of the present application also provides an occlusion relationship recognition system, which includes the electronic device shown in FIG. 8 and a radar sensor connected to the electronic device; the radar sensor is used to collect point cloud data corresponding to a target scene.
  • Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium, and when executed it may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The embodiments of this application disclose a method and apparatus for judging an occlusion relationship, a storage medium, and an electronic device. The method includes: acquiring point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value; acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds and the number of the occlusion point clouds being greater than 0; and determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud. With the embodiments of this application, the judgment of occlusion relationships is more efficient and reliable, the possibility of missed and erroneous judgments is reduced, and driving safety and reliability are effectively improved.

Description

Occlusion relationship judgment method and apparatus, storage medium, and electronic device
TECHNICAL FIELD
The present application relates to the field of lidar, and in particular to an occlusion relationship judgment method and apparatus, a storage medium, and an electronic device.
BACKGROUND ART
Lidar (light detection and ranging, LiDAR) is a radar system that emits laser beams to detect characteristic quantities such as the position and speed of a target. Its working principle is to transmit a detection signal (a laser beam) toward a target object (such as a vehicle, aircraft, or missile), and then to compare and process the received echo signal reflected from the target object against the transmitted signal, thereby obtaining relevant information about the target object, such as distance, azimuth, height, speed, attitude, and even shape parameters, so that the target object can be detected, tracked, and identified.
In the prior art, when lidar is used to detect obstacles, it is often necessary to judge whether a target obstacle is occluded. A common technical approach is to first detect each obstacle and then infer whether occlusion occurs from the relative relationship, under a specified viewing angle, of the bounding boxes corresponding to the obstacles (a target is usually represented by a rectangular box). However, in this scheme, judging whether and how an obstacle is occluded from its bounding box has low accuracy.
SUMMARY OF THE INVENTION
Embodiments of the present application provide an occlusion relationship judgment method and apparatus, a storage medium, and an electronic device, which make the judgment of occlusion relationships more efficient and reliable, reduce the possibility of missed and erroneous judgments, and effectively improve driving safety and reliability. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a method for judging an occlusion relationship, the method comprising:
acquiring point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds, and the number of the occlusion point clouds being greater than 0;
determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
In a second aspect, an embodiment of the present application provides an apparatus for judging an occlusion relationship, the apparatus comprising:
a first acquisition module, used to acquire point cloud data of the target point cloud and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
a second acquisition module, used to acquire occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds, and the number of the occlusion point clouds being greater than 0;
an occlusion determination module, used to determine that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
In a third aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to execute the above method steps.
In a fifth aspect, an embodiment of the present application provides an occlusion relationship recognition system, including the electronic device of the fourth aspect and a radar sensor connected to the electronic device;
the radar sensor is used to collect point cloud data corresponding to a target scene.
The technical solutions provided by some embodiments of the present application bring at least the following beneficial effects:
This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1A is a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application;
FIG. 1B is an assembly schematic diagram of a vehicle and a vehicle-mounted radar provided by an embodiment of the present application;
FIG. 1C is a schematic diagram of a scene in which a target obstacle is occluded by an obstacle, provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of an occlusion relationship judgment method provided by an embodiment of the present application;
FIG. 3 is a schematic dot matrix diagram of a target point cloud and the other point clouds in its neighborhood provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a scene in which the occluded target side of a target obstacle is determined, provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of another occlusion relationship judgment method provided by an embodiment of the present application;
FIG. 6 is a schematic dot matrix diagram of occlusion point clouds provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an occlusion relationship judgment apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the description of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. In the description of the present application, it should be noted that, unless otherwise expressly specified and limited, "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device. For those of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific circumstances. In addition, in the description of the present application, unless otherwise specified, "plural" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
The present application is described in detail below with reference to specific embodiments.
In one embodiment, as shown in FIG. 1A, which is a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application, the scene includes: a vehicle 101 equipped with a vehicle-mounted radar, a tree 102A and a pedestrian 102B that have an occlusion relationship, and a flower bed 103A and a truck 103B that have an occlusion relationship. The tree 102A is the occluder of the pedestrian 102B, and the pedestrian 102B is the occluded object of the tree 102A; the flower bed 103A is the occluder of the truck 103B, and the truck 103B is the occluded object of the flower bed 103A. It can be understood that the tree 102A, the pedestrian 102B, the flower bed 103A, and the truck 103B are all obstacles for the vehicle 101.
In the embodiment of the present application, as shown in FIG. 1B, which is an assembly schematic diagram of the vehicle and the vehicle-mounted radar provided by an embodiment of the present application, the diagram includes a vehicle 101 and a vehicle-mounted radar 101A.
The vehicle 101 is provided with the vehicle-mounted radar 101A. It can be understood that, in this application, the vehicle 101 is only a carrying platform for the lidar. The carrying platform bears the lidar and drives it to move, so the lidar acquires corresponding linear and angular velocities. The carrying platform can be a vehicle, a drone, or another device, which is not limited in this application.
The vehicle-mounted radar 101A may be a millimeter-wave radar, a lidar, or a similar radar; for example, the lidar may be a mechanical lidar, a solid-state lidar, or the like. The vehicle-mounted radar 101A obtains, through time-of-flight (TOF) ranging, frequency-modulated continuous wave (FMCW) ranging, or other ranging methods, reflected signals carrying one or more of spatial position coordinates, time stamps, echo strengths, and other information; each reflected signal is taken as a data point, and the data point further includes one or more of the distance, angle, radial velocity, and other information of the corresponding obstacle relative to the vehicle-mounted radar.
As shown in FIG. 1C, which is a schematic diagram of a scene in which a target obstacle is occluded by an obstacle, provided by an embodiment of the present application, the scene includes a truck 105 and a pedestrian 104. In the field of radar ranging, the collected point cloud is usually passed through a clustering algorithm and/or a deep learning network to obtain the contours of multiple obstacles, and point cloud completion (a technique that estimates a complete point cloud from a missing one, thereby obtaining a higher quality point cloud and achieving the purpose of repair) is further used to assist in completing the obstacle contours. As shown in FIG. 1C, when the truck 105 is occluded by the pedestrian 104, the lidar collects point cloud data; the point cloud corresponding to the truck 105 and the point cloud corresponding to the pedestrian 104 are obtained by parsing with the SqueezeSeg lightweight deep learning network, the contour of the truck 105 is restored from the point cloud corresponding to the truck 105, and the contour of the pedestrian 104 is restored from the point cloud corresponding to the pedestrian 104; from the contour of the pedestrian 104 and the contour of the truck 105, the occlusion relationship between them is judged as: the pedestrian 104 partially occludes the truck 105.
Judging the occlusion relationships among multiple obstacles through the above procedure is inefficient, and the judgment accuracy is affected by the accuracy of the obstacle contours formed from the point cloud, so the accuracy cannot be guaranteed.
In one embodiment, as shown in FIG. 2, a method for judging an occlusion relationship is proposed. The method can be implemented by a computer program and can run on an occlusion relationship judging apparatus based on the von Neumann architecture. The computer program can be integrated into an application or run as an independent utility application.
Specifically, the method for judging the occlusion relationship includes:
S101: acquiring point cloud data of the target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud.
The point cloud data includes a distance value, which is the distance between the object that produced the point cloud and the radar emitting the laser signal. The neighborhood of the target point cloud can be understood as a preset range centered on the target point cloud, for example, a circle or square centered on the target point cloud whose diameter or side length is the window length L1.
The other point clouds in the neighborhood of the target point cloud can be understood as all point clouds within the window length L1 centered on the target point cloud (m, n), that is, Dist(m-floor(L1/2):m+floor(L1/2), n-floor(L1/2):n+floor(L1/2), t), where t is the number of point clouds in the neighborhood of the target point cloud. In one embodiment, the other point clouds in the neighborhood of the target point cloud can instead be understood as the point clouds immediately adjacent to the target point cloud in the four directions up, down, left, and right. It can be understood that, in FIG. 3, point clouds exist both inside the neighborhood of the target point cloud and outside the neighborhood; the latter are not shown in the figure.
As shown in FIG. 3, which is a dot matrix diagram of a target point cloud and the other point clouds in its neighborhood provided by the present application, the diagram includes: the target point cloud 301, the neighborhood 301A of the target point cloud 301, the other point clouds 302 within the neighborhood 301A of the target point cloud 301, and the point cloud 302A included among the other point clouds. It can be understood that the neighborhood of the target point cloud can be of any shape; the shape shown in FIG. 3 is only an illustration.
S102: acquiring occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud.
In the embodiment of the present application, the distance values of all point clouds in the neighborhood of the target point cloud are calculated. For example, if the neighborhood of the target point cloud includes 319 point clouds in addition to the target point cloud, the distance values corresponding to each of those 319 point clouds and to the target point cloud are calculated.
The point clouds in the neighborhood of the target point cloud that do not belong to the same object as the target point cloud are acquired; that is, the other point clouds in the neighborhood of the target point cloud are divided into the point clouds corresponding to each of multiple obstacles. Methods for verifying that the target point cloud and the other point clouds in its neighborhood do not belong to the same object include the DBSCAN algorithm, deep learning networks, and the like.
In the embodiment of the present application, the other point clouds in the neighborhood of the target point cloud may first be divided into the point clouds corresponding to multiple obstacles, and the difference between the distance value of each point cloud in the neighborhood and the distance value of the target point cloud may then be calculated. The application also includes the reverse order: first calculating the difference between the distance value of each point cloud in the neighborhood and the distance value of the target point cloud, and then dividing the other point clouds in the neighborhood of the target point cloud into the point clouds corresponding to multiple obstacles.
Occlusion point clouds are acquired whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud; the other point clouds in the neighborhood of the target point cloud include the occlusion point clouds, and the number of occlusion point clouds is greater than 0. For example, as shown in FIG. 3, the first threshold is 0.5 m. The processor determines that the target point cloud 301 and the patch of point cloud 302A in the neighborhood 301A do not belong to the same object, and obtains a distance value of 10 m for the target point cloud 301 and a distance value of 9.2 m for the point cloud 302; since the absolute value of the difference between the distance value of the target point cloud and the distance value of the point cloud 302 is greater than the first threshold, the point cloud 302 is judged to be an occlusion point cloud. It can be understood that the present application places no limitation on the specific value of the first threshold or on the manner of obtaining it.
S103: determining that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
If occlusion point clouds are obtained whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, it is determined that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud. It can be understood that the present application places no limitation on how the target point cloud and the target obstacle are selected.
The occlusion relationship can be understood as a composite of information characterizing whether the target obstacle corresponding to the target point cloud occludes the obstacle corresponding to the occlusion point clouds or that obstacle occludes the target obstacle, the degree to which the target obstacle is occluded, the occluded part of the target obstacle, and the like.
For example, the distance value of the target point cloud places the target obstacle 10 meters from the lidar, and the distance value of the occlusion point cloud places the obstacle corresponding to the occlusion point cloud 9.2 meters from the lidar. The absolute value of the difference between the distance value of the occlusion point cloud and the distance value of the target point cloud is greater than the first threshold, and the distance value of the occlusion point cloud is smaller than the distance value of the target point cloud, so the occlusion relationship between the obstacle corresponding to the occlusion point cloud and the target obstacle is determined as: the obstacle corresponding to the occlusion point cloud occludes the target obstacle.
In one embodiment, after step S103 determines that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, the method further includes: determining the movement trajectory of the target obstacle; and judging the occluded target side of the target obstacle according to the movement trajectory of the target obstacle and its occlusion relationship with the obstacles corresponding to the occlusion point clouds.
The movement trajectory of the target obstacle characterizes the direction of the target obstacle's movement trend as observed by the lidar. In one embodiment, the method for determining the movement trajectory of the target obstacle includes: obtaining a coarse trajectory from the initial pose of the target obstacle and the outputs of an inertial sensor at different moments; obtaining the three-dimensional point cloud data of the multi-line lidar at different moments and rasterizing it to obtain grayscale images; performing feature matching on the feature points between two grayscale frames at adjacent moments to obtain feature-matching point pairs; obtaining the transformation relationship between the point cloud data at adjacent moments from the matched point pairs; and obtaining the precise movement trajectory of the target obstacle from the transformation relationship. SURF (Speeded-Up Robust Features) feature matching is performed within a specific range based on the coarse trajectory, which reduces the computation and running time of the matching. This application also includes other methods of determining the movement trajectory of the target obstacle, which are not limited here.
As shown in FIG. 4, which is a schematic diagram of a scene in which the occluded target side of the target obstacle is determined, provided by an embodiment of the present application: the lidar 401 collects a point cloud 401A, and the point cloud 401A includes a point cloud 402 corresponding to an obstacle 402 and a point cloud 403 corresponding to an obstacle 403. Taking the obstacle 403 as the target obstacle, the dot matrix diagram composed of the collected point cloud indicates that the obstacle 402 occludes the right side of the target obstacle 403; after the movement trajectory of the target obstacle 403 and the movement trajectory of the lidar 401 are obtained, it is judged that the obstacle 402 actually occludes the left side of the target obstacle 403, that is, the occluded target side of the target obstacle 403 is the left side.
In the embodiment of the present application, the truly occluded target side of the target obstacle is judged by obtaining the movement trajectory of the target obstacle, which improves the accuracy and fidelity of the judgment of the occlusion relationship between the target obstacle and other obstacles.
This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
As shown in FIG. 5, which is a schematic flowchart of another occlusion relationship judgment method proposed by the present application, the method can be implemented by a computer program and can run on an occlusion relationship judging apparatus based on the von Neumann architecture. The computer program can be integrated into an application or run as an independent utility application.
Specifically:
S201: acquiring point cloud data of the target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud.
Specifically, step S201 is the same as step S101 and will not be repeated here.
S202: acquiring point clouds to be determined whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold.
In the embodiment of the present application, the distance values of all point clouds in the neighborhood of the target point cloud are calculated. For example, if the neighborhood of the target point cloud includes 319 point clouds in addition to the target point cloud, the distance values corresponding to each of those 319 point clouds and to the target point cloud are calculated.
For example, as shown in FIG. 3, the first threshold is 0.5 m, the acquired distance value of the target point cloud 301 is 10 m, and the distance value of the point cloud 302 is 9.2 m; the absolute value of the difference between the distance value of the target point cloud and the distance value of the point cloud 302 is greater than the first threshold, so the point cloud 302 is judged to be an occlusion point cloud. It can be understood that the present application places no limitation on the specific value of the first threshold or on the manner of obtaining it.
S203: acquiring occlusion point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to a second threshold.
The point clouds to be determined include the occlusion point clouds, and the number of occlusion point clouds is greater than 0.
The multiple point clouds to be determined included in the neighborhood of the target point cloud are determined based on the distance values; among the point clouds to be determined, the point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to the second threshold are acquired and taken as occlusion point clouds.
In another embodiment, the multiple point clouds to be determined included in the neighborhood of the target point cloud are determined based on the distance values; among the point clouds to be determined, multiple adjacent point cloud pairs that lie on the same beam as the target point cloud and are adjacent to the target point cloud are acquired, and the absolute values of the differences between the angle values of these point cloud pairs are determined; when the absolute value of an angle difference is less than the second threshold, the corresponding point cloud pairs are determined to be non-occlusion point clouds. Lying on the same beam as the target point cloud can be understood as follows: because the lidar collects point cloud data by scanning multiple laser heads back and forth horizontally, the set of point cloud data scanned by the same laser head within one period belongs to the same beam.
For points on the same beam, even if the distance difference from the target point cloud exceeds the first threshold, when the difference between the angle values formed by points that are on the same beam as, and adjacent to, the target point cloud is less than the second threshold, that is, when the angular relationships between multiple adjacent point pairs are close, it is judged that these adjacent point cloud pairs and the target point cloud lie on a plane that forms a large angle with the lidar's illumination direction, and the obstacles corresponding to those point cloud pairs do not occlude the target obstacle corresponding to the target point cloud.
In the embodiment of the present application, the angle values in the point cloud data are used to determine whether the other point clouds in the neighborhood of the target point cloud belong to the same object; this judgment method is simple and reliable and effectively improves judgment efficiency. In another embodiment, a clustering algorithm is used to classify the other point clouds in the neighborhood of the target point cloud while the angle values of the point clouds are used to exclude non-occlusion point clouds, which improves the reliability of judging occlusion point clouds.
S204: determining, according to the distance values of the occlusion point clouds, the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
In the embodiment of the present application, the occlusion relationship includes an occluding relationship, in which an obstacle occludes the target obstacle, and an occluded relationship, in which an obstacle is occluded by the target obstacle.
As shown in FIG. 6, which is a dot matrix diagram of occlusion point clouds provided by an embodiment of the present application, the diagram includes: the target point cloud 301 corresponding to the target obstacle, the first occlusion point cloud 602 corresponding to a first obstacle, and the second occlusion point cloud 601 corresponding to a second obstacle. The first obstacle corresponding to the first occlusion point cloud 602 is in an occluded relationship with the target obstacle, and the second obstacle corresponding to the second occlusion point cloud 601 is in an occluding relationship with the target obstacle.
S205A: determining the first occlusion point clouds whose distance values are greater than the distance value of the target point cloud.
The occlusion point clouds include the first occlusion point clouds. According to the distance values of the occlusion point clouds, the occlusion point clouds whose distance values are greater than the distance value of the target point cloud are obtained and defined as the first occlusion point clouds. For example, as shown in FIG. 6, the distance value of the target point cloud 301 is 10 m, and the distance value of the first occlusion point cloud 602 is 11 m.
S206A: determining that the first obstacle corresponding to the first occlusion point cloud is in an occluded relationship with the target obstacle.
As shown in FIG. 6, the relationship between the first obstacle corresponding to the first occlusion point cloud 602 and the target obstacle corresponding to the target point cloud 301 is an occluded relationship; that is, the first obstacle corresponding to the first occlusion point cloud 602 is occluded by the target obstacle.
S205B: determining the second occlusion point clouds whose distance values are smaller than the distance value of the target point cloud.
The occlusion point clouds include the second occlusion point clouds. According to the distance values of the occlusion point clouds, the occlusion point clouds whose distance values are smaller than the distance value of the target point cloud are obtained and defined as the second occlusion point clouds. For example, as shown in FIG. 6, the distance value of the target point cloud 301 is 10 m, and the distance value of the second occlusion point cloud 601 is 9 m.
S206B: determining that the second obstacle corresponding to the second occlusion point cloud is in an occluding relationship with the target obstacle.
As shown in FIG. 6, the relationship between the second obstacle corresponding to the second occlusion point cloud 601 and the target obstacle corresponding to the target point cloud 301 is an occluding relationship; that is, the second obstacle corresponding to the second occlusion point cloud 601 occludes the target obstacle.
In the embodiment of the present application, the occlusion relationship between the target obstacle and the obstacles corresponding to the occlusion point clouds is further judged by the distance values of the point clouds, which improves judgment accuracy.
In one embodiment, after step S204 the method further includes: determining the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle.
An edge point cloud can be understood as a point cloud that forms the outer contour of the target obstacle, and a group of edge point clouds is the group of point clouds that forms one edge of the outer contour of the target obstacle. For example, if the target obstacle is a cylindrical trash can, it includes 4 groups of edge point clouds; if the target obstacle is a tree, it includes 12 groups of edge point clouds.
The occluded condition of the target obstacle can be understood as the occluded part and occlusion rate of the target obstacle. For example, one side of the target obstacle is completely occluded, or one side of the target obstacle is partially occluded. As another example, suppose the point cloud corresponding to the target obstacle includes 4 groups of edge point clouds: if the left side of one group of edge point clouds is completely covered with occlusion point clouds, it is judged that the left side of the target obstacle is completely occluded; if, in another group of edge point clouds, occlusion point clouds are distributed above half of the edge point clouds while the point cloud data above the other half is empty, it is judged that the upper side of the target obstacle is partially occluded.
In one embodiment, determining the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle includes: acquiring at least one group of edge point clouds corresponding to the target obstacle; determining the occlusion point clouds within the neighborhood of each edge point cloud in each group of edge point clouds; and determining the occluded condition of a target side of the target obstacle according to whether the occlusion point clouds corresponding to each group of edge point clouds are located on the same target side of the target obstacle and whether the distance values of the occlusion point clouds corresponding to the edge point clouds are smaller than the distance values of the edge point clouds.
For example, if the occlusion point clouds corresponding to a group of edge point clouds of the target point cloud are located on the right side of that group of edge point clouds and the distance values of those occlusion point clouds are all smaller than the average distance value of the group, it is determined that the target side corresponding to that group of edge point clouds is entirely occluded, for example, that the upper side of the target obstacle is entirely occluded. As another example, if occlusion point clouds exist in the neighborhoods of only some of the edge point clouds in a group of edge point clouds of the target point cloud, it is determined that the target side corresponding to that group is partially occluded or partially occludes other obstacles.
This application judges the distribution of occlusion point clouds within each group of edge point clouds corresponding to the target obstacle to further judge the occluded condition of the target obstacle, which enriches the content of the occlusion relationship judgment for the target obstacle and improves judgment accuracy.
This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
The following is an apparatus embodiment of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the apparatus embodiment of the present application, please refer to the method embodiments of the present application.
Referring to FIG. 7, it shows a schematic structural diagram of an occlusion relationship judgment apparatus provided by an exemplary embodiment of the present application. The occlusion relationship judgment apparatus can be implemented as all or part of a device through software, hardware, or a combination of the two. The occlusion relationship judgment apparatus includes a first acquisition module 701, a second acquisition module 702, and an occlusion determination module 703.
The first acquisition module 701 is used to acquire point cloud data of the target point cloud and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value.
The second acquisition module 702 is used to acquire occlusion point clouds whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which do not belong to the same object as the target point cloud, the other point clouds in the neighborhood of the target point cloud including the occlusion point clouds, and the number of the occlusion point clouds being greater than 0.
The occlusion determination module 703 is used to determine that there is an occlusion relationship between the obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud.
In one embodiment, the point cloud data further includes an angle value;
the second acquisition module 702 includes:
a first acquisition unit, used to acquire point clouds to be determined whose distance values differ from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold, the other point clouds in the neighborhood of the target point cloud including the point clouds to be determined, and the number of the point clouds to be determined being greater than or equal to 1;
a second acquisition unit, used to acquire occlusion point clouds whose angle values differ from the angle value of the target point cloud by an absolute value greater than or equal to the second threshold, the point clouds to be determined including the occlusion point clouds, and the number of the occlusion point clouds being greater than 0.
In one embodiment, the occlusion determination module 703 includes:
an occlusion determination unit, used to determine, according to the distance values of the occlusion point clouds, the occlusion relationship between at least one obstacle corresponding to the occlusion point clouds and the target obstacle corresponding to the target point cloud, the occlusion relationship including an occluding relationship that occludes the target obstacle and an occluded relationship that is occluded by the target obstacle.
In one embodiment, the occlusion determination unit includes:
a first determination subunit, used to determine first occlusion point clouds whose distance values are greater than the distance value of the target point cloud, the occlusion point clouds including the first occlusion point clouds, and the number of the first occlusion point clouds being greater than 0;
a second determination subunit, used to determine that the first obstacle corresponding to the first occlusion point clouds is in an occluded relationship with the target obstacle.
In one embodiment, the occlusion determination unit includes:
a third determination subunit, used to determine second occlusion point clouds whose distance values are smaller than the distance value of the target point cloud, the occlusion point clouds including the second occlusion point clouds, and the number of the second occlusion point clouds being greater than 0;
a fourth determination subunit, used to determine that the second obstacle corresponding to the second occlusion point clouds is in an occluding relationship with the target obstacle.
In one embodiment, the occlusion relationship judgment apparatus further includes:
a condition determination module, used to determine the occluded condition of the target obstacle according to the occlusion point clouds corresponding to each group of edge point clouds among at least one group of edge point clouds corresponding to the target obstacle.
In one embodiment, the condition determination module includes:
an acquisition unit, used to acquire at least one group of edge point clouds corresponding to the target obstacle;
an edge unit, used to determine whether occlusion point clouds exist in the neighborhood of each edge point cloud in each group of edge point clouds and whether the occlusion point clouds corresponding to each group of edge point clouds are located on the same target side of the target obstacle;
a determination unit, used to determine, if so, that the target side of the target obstacle is occluded.
This application uses the point cloud data of the target point cloud to judge occlusion relationships, judging the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles from the perspective of the point cloud itself. Compared with related techniques that must output the contours of all obstacles and then infer the occlusion relationship between the target obstacle and other obstacles by judging whether those contours are incomplete, this application judges occlusion relationships more efficiently and reliably, reduces the possibility of missed and erroneous judgments, and effectively improves driving safety and reliability.
It should be noted that, when the occlusion relationship judgment apparatus provided in the above embodiments executes the occlusion relationship judgment method, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions can be assigned to different functional modules as required, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the occlusion relationship judgment apparatus provided by the above embodiments and the embodiments of the occlusion relationship judgment method belong to the same concept; the implementation process is detailed in the method embodiments and is not repeated here.
The serial numbers of the above embodiments of the present application are only for description and do not represent the superiority or inferiority of the embodiments.
An embodiment of the present application also provides a computer storage medium. The computer storage medium can store a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the occlusion relationship judgment method of the embodiments shown in FIGS. 1-7; the specific execution process can refer to the specific descriptions of the embodiments shown in FIGS. 1-6 and is not repeated here.
The present application also provides a computer program product. The computer program product stores at least one instruction, and the at least one instruction is loaded by the processor to execute the occlusion relationship judgment method of the embodiments shown in FIGS. 1-6; the specific execution process can refer to the specific descriptions of the embodiments shown in FIGS. 1-6 and is not repeated here.
Please refer to FIG. 8, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 8, the electronic device 800 may include: at least one processor 801, at least one network interface 804, a user interface 803, a memory 805, and at least one communication bus 802.
The communication bus 802 is used to realize connection and communication between these components.
The user interface 803 may include a display and a camera; optionally, the user interface 803 may further include a standard wired interface and a wireless interface.
The network interface 804 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
The processor 801 may include one or more processing cores. The processor 801 connects various parts of the entire electronic device 800 using various interfaces and lines, and executes the various functions of the electronic device 800 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 805 and calling the data stored in the memory 805. Optionally, the processor 801 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 801 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 801 and may instead be implemented by a separate chip.
The memory 805 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 805 includes a non-transitory computer-readable storage medium. The memory 805 may be used to store instructions, programs, code, code sets or instruction sets. The memory 805 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 805 may also be at least one storage device located remotely from the aforementioned processor 801. As shown in FIG. 8, the memory 805, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an occlusion relationship judgment application.
In the electronic device 800 shown in FIG. 8, the user interface 803 is mainly used to provide an input interface for the user and acquire the data input by the user, while the processor 801 may be used to call the occlusion relationship judgment application stored in the memory 805 and specifically perform the following operations:
acquiring point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in the neighborhood of the target point cloud, the point cloud data including a distance value;
acquiring an occlusion point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which does not belong to the same object as the target point cloud, wherein the other point clouds in the neighborhood of the target point cloud include the occlusion point cloud, and the number of points in the occlusion point cloud is greater than 0; and
determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud.
In one embodiment, the point cloud data further includes an angle value;
when acquiring the occlusion point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold and which does not belong to the same object as the target point cloud, the processor 801 specifically performs:
acquiring a to-be-determined point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold, wherein the other point clouds in the neighborhood of the target point cloud include the to-be-determined point cloud, and the number of points in the to-be-determined point cloud is greater than or equal to 1; and
acquiring an occlusion point cloud whose angle value differs from the angle value of the target point cloud by an absolute value greater than or equal to a second threshold, wherein the to-be-determined point cloud includes the occlusion point cloud, and the number of points in the occlusion point cloud is greater than 0.
In one embodiment, when determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the processor 801 specifically performs:
determining, according to the distance values of the occlusion point cloud, the occlusion relationship between each of at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the occlusion relationship including an occluding relationship of occluding the target obstacle and an occluded relationship of being occluded by the target obstacle.
In one embodiment, when determining, according to the distance values of the occlusion point cloud, the occlusion relationship between each of the at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the processor 801 specifically performs:
determining a first occlusion point cloud whose distance value is greater than the distance value of the target point cloud, wherein the occlusion point cloud includes the first occlusion point cloud, and the number of points in the first occlusion point cloud is greater than 0; and
determining that the first obstacle corresponding to the first occlusion point cloud is in an occluded relationship with the target obstacle.
In one embodiment, when determining, according to the distance values of the occlusion point cloud, the occlusion relationship between each of the at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the processor 801 specifically performs:
determining a second occlusion point cloud whose distance value is less than the distance value of the target point cloud, wherein the occlusion point cloud includes the second occlusion point cloud, and the number of points in the second occlusion point cloud is greater than 0; and
determining that the second obstacle corresponding to the second occlusion point cloud is in an occluding relationship with the target obstacle.
In one embodiment, after determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the processor 801 specifically performs:
determining the occluded situation of the target obstacle according to the occlusion point cloud corresponding to each group of edge point clouds in at least one group of edge point clouds corresponding to the target obstacle.
In one embodiment, when determining the occluded situation of the target obstacle according to the occlusion point cloud corresponding to each group of edge point clouds in the at least one group of edge point clouds corresponding to the target obstacle, the processor 801 specifically performs:
acquiring at least one group of edge point clouds corresponding to the target obstacle;
determining whether an occlusion point cloud exists in the neighborhood of each edge point in each group of edge point clouds, and whether the occlusion points corresponding to each group of edge point clouds are located on the same target side of the target obstacle; and
if so, determining that the target side of the target obstacle is occluded.
In one embodiment, after determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the processor 801 further performs:
determining the motion trajectory of the target obstacle; and
judging the occluded target side of the target obstacle according to the motion trajectory of the target obstacle and the occlusion relationship with the obstacle corresponding to the occlusion point cloud.
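The specification does not spell out this trajectory-based computation; the following sketch shows one possible reading, in which the side of the target obstacle facing an occluding obstacle is named relative to the target's direction of motion. The function name, the coordinate conventions, and the four-way side bucketing are all assumptions made for illustration.

```python
import math

def occluded_side(target_heading: float,
                  target_pos: tuple[float, float],
                  occluder_pos: tuple[float, float]) -> str:
    """Name the side of the target obstacle, relative to its motion
    trajectory, on which an occluding obstacle lies. target_heading is the
    trajectory direction in radians; positions are (x, y) in the lidar frame."""
    dx = occluder_pos[0] - target_pos[0]
    dy = occluder_pos[1] - target_pos[1]
    bearing = math.atan2(dy, dx) - target_heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    if abs(bearing) <= math.pi / 4:
        return "front"
    if abs(bearing) >= 3 * math.pi / 4:
        return "rear"
    return "left" if bearing > 0 else "right"
```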
The present application uses the point cloud data of the target point cloud to determine the occlusion relationship, judging from the point cloud perspective the occlusion relationship between the target obstacle corresponding to the target point cloud and other obstacles. Compared with related techniques that need to output the contours of all obstacles and infer the occlusion relationship between the target obstacle and other obstacles from missing parts of those contours, the determination of the occlusion relationship in the present application is more efficient and reliable, reduces the possibility of missed or erroneous judgments, and effectively improves driving safety and reliability.
An embodiment of the present application further provides an occlusion relationship recognition system, including the electronic device shown in FIG. 8 and a radar sensor connected to the electronic device, where the radar sensor is used to collect point cloud data corresponding to a target scene.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
What is disclosed above is merely a preferred embodiment of the present application, which certainly cannot be used to limit the scope of rights of the present application; therefore, equivalent changes made according to the claims of the present application still fall within the scope covered by the present application.

Claims (12)

  1. A method for judging an occlusion relationship, characterized in that the method comprises:
    acquiring point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in a neighborhood of the target point cloud, the point cloud data comprising a distance value;
    acquiring an occlusion point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which does not belong to the same object as the target point cloud, wherein the other point clouds in the neighborhood of the target point cloud comprise the occlusion point cloud, and the number of points in the occlusion point cloud is greater than 0; and
    determining that an occlusion relationship exists between an obstacle corresponding to the occlusion point cloud and a target obstacle corresponding to the target point cloud.
  2. The method according to claim 1, characterized in that the point cloud data further comprises an angle value; and
    the acquiring an occlusion point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which does not belong to the same object as the target point cloud comprises:
    acquiring a to-be-determined point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to the first threshold, wherein the other point clouds in the neighborhood of the target point cloud comprise the to-be-determined point cloud, and the number of points in the to-be-determined point cloud is greater than or equal to 1; and
    acquiring an occlusion point cloud whose angle value differs from the angle value of the target point cloud by an absolute value greater than or equal to a second threshold, wherein the to-be-determined point cloud comprises the occlusion point cloud, and the number of points in the occlusion point cloud is greater than 0.
  3. The method according to claim 1, characterized in that the determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud comprises:
    determining, according to the distance values of the occlusion point cloud, an occlusion relationship between each of at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the occlusion relationship comprising an occluding relationship of occluding the target obstacle and an occluded relationship of being occluded by the target obstacle.
  4. The method according to claim 3, characterized in that the determining, according to the distance values of the occlusion point cloud, the occlusion relationship between each of the at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud comprises:
    determining a first occlusion point cloud whose distance value is greater than the distance value of the target point cloud, wherein the occlusion point cloud comprises the first occlusion point cloud, and the number of points in the first occlusion point cloud is greater than 0; and
    determining that the first obstacle corresponding to the first occlusion point cloud is in an occluded relationship with the target obstacle.
  5. The method according to claim 3, characterized in that the determining, according to the distance values of the occlusion point cloud, the occlusion relationship between each of the at least one obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud comprises:
    determining a second occlusion point cloud whose distance value is less than the distance value of the target point cloud, wherein the occlusion point cloud comprises the second occlusion point cloud, and the number of points in the second occlusion point cloud is greater than 0; and
    determining that the second obstacle corresponding to the second occlusion point cloud is in an occluding relationship with the target obstacle.
  6. The method according to claim 1, characterized in that, after the determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the method comprises:
    determining an occluded situation of the target obstacle according to the occlusion point cloud corresponding to each group of edge point clouds in at least one group of edge point clouds corresponding to the target obstacle.
  7. The method according to claim 6, characterized in that the determining the occluded situation of the target obstacle according to the occlusion point cloud corresponding to each group of edge point clouds in the at least one group of edge point clouds corresponding to the target obstacle comprises:
    acquiring at least one group of edge point clouds corresponding to the target obstacle;
    determining the occlusion points in the neighborhood of each edge point in each group of edge point clouds; and
    determining the occluded situation of a target side of the target obstacle according to whether the occlusion points corresponding to each group of edge point clouds are located on the same target side of the target obstacle and whether the distance values of the occlusion points corresponding to each group of edge point clouds are less than the distance values of the edge point clouds.
  8. The method according to claim 1, characterized in that, after the determining that an occlusion relationship exists between the obstacle corresponding to the occlusion point cloud and the target obstacle corresponding to the target point cloud, the method further comprises:
    determining a motion trajectory of the target obstacle; and
    judging the occluded target side of the target obstacle according to the motion trajectory of the target obstacle and the occlusion relationship with the obstacle corresponding to the occlusion point cloud.
  9. A device for judging an occlusion relationship, characterized in that the device comprises:
    a first acquisition module, configured to acquire point cloud data of a target point cloud, and point cloud data corresponding to each of the other point clouds in a neighborhood of the target point cloud, the point cloud data comprising a distance value;
    a second acquisition module, configured to acquire an occlusion point cloud whose distance value differs from the distance value of the target point cloud by an absolute value greater than or equal to a first threshold and which does not belong to the same object as the target point cloud, wherein the other point clouds in the neighborhood of the target point cloud comprise the occlusion point cloud, and the number of points in the occlusion point cloud is greater than 0; and
    an occlusion determination module, configured to determine that an occlusion relationship exists between an obstacle corresponding to the occlusion point cloud and a target obstacle corresponding to the target point cloud.
  10. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method steps of any one of claims 1 to 8.
  11. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program, the computer program being adapted to be loaded by the processor to execute the method steps of any one of claims 1 to 8.
  12. An occlusion relationship recognition system, characterized by comprising the electronic device according to claim 11 and a radar sensor connected to the electronic device;
    wherein the radar sensor is configured to collect point cloud data corresponding to a target scene.
PCT/CN2021/125749 2021-10-22 2021-10-22 Occlusion relationship judgment method and apparatus, storage medium, and electronic device WO2023065313A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180102336.7A 2021-10-22 2021-10-22 Occlusion relationship judgment method and apparatus, storage medium, and electronic device
PCT/CN2021/125749 2021-10-22 2021-10-22 Occlusion relationship judgment method and apparatus, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/125749 WO2023065313A1 (zh) 2021-10-22 2021-10-22 遮挡关系判断方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2023065313A1 (zh)

Family

ID=86058728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125749 Occlusion relationship judgment method and apparatus, storage medium, and electronic device 2021-10-22 2021-10-22

Country Status (2)

Country Link
CN (1) CN118056228A (zh)
WO (1) WO2023065313A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108604405A (zh) * 2016-02-03 2018-09-28 Honda Motor Co., Ltd. Detecting partially occluded objects using environment and depth order
US20200081448A1 (en) * 2018-09-07 2020-03-12 GM Global Technology Operations LLC Traffic light occlusion detection for autonomous vehicle
CN110889828A (zh) * 2019-11-07 2020-03-17 Zhejiang Dahua Technology Co., Ltd. Pallet identification method in a predetermined scene, terminal device, and computer storage medium
CN111402160A (zh) * 2020-03-13 2020-07-10 Beijing Baidu Netcom Science and Technology Co., Ltd. Point cloud data denoising method, apparatus, device, and storage medium
CN112417967A (zh) * 2020-10-22 2021-02-26 Tencent Technology (Shenzhen) Co., Ltd. Obstacle detection method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN118056228A (zh) 2024-05-17

Similar Documents

Publication Publication Date Title
US20230072637A1 (en) Vehicle Drivable Area Detection Method, System, and Autonomous Vehicle Using the System
US11320833B2 (en) Data processing method, apparatus and terminal
WO2020083024A1 (zh) Obstacle recognition method and apparatus, storage medium, and electronic apparatus
CN111932943B (zh) Dynamic target detection method and apparatus, storage medium, and roadbed monitoring device
JP2019533133A (ja) Method and system for detecting environmental information of a vehicle
CN111753609A (zh) Target recognition method and apparatus, and camera
JP2023500994A (ja) Obstacle recognition method and apparatus, autonomous mobile device, and storage medium
JP2012221456A (ja) Object identification device and program
JP4102885B2 (ja) Parked vehicle detection method and parked vehicle detection system
KR20210026412A (ko) Real-time object detection method based on camera and lidar sensors using a CNN
CN110197106A (zh) Object marking system and method
WO2020237516A1 (zh) Point cloud processing method and device, and computer-readable storage medium
CN114091601B (zh) Sensor fusion method for personnel condition detection
WO2022206517A1 (zh) Target detection method and apparatus
CN112906777A (zh) Target detection method and apparatus, electronic device, and storage medium
CN114119729A (zh) Obstacle recognition method and apparatus
WO2021051736A1 (zh) Perception area determination method and apparatus, storage medium, and vehicle
KR101333459B1 (ko) Lane recognition method and apparatus
WO2023065312A1 (zh) Obstacle recognition method and apparatus, storage medium, and electronic device
WO2023065313A1 (zh) Occlusion relationship judgment method and apparatus, storage medium, and electronic device
EP4206728A1 (en) Interference point determining method and apparatus, storage medium, and multi-channel lidar
CN112733678A (zh) Ranging method and apparatus, computer device, and storage medium
WO2023216555A1 (zh) Binocular-vision-based obstacle avoidance method and apparatus, robot, and medium
CN112639822A (zh) Data processing method and apparatus
JP3740531B2 (ja) Parked vehicle detection method, detection system, and parked vehicle detection apparatus

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE