WO2023065312A1 - Obstacle identification method and apparatus, storage medium, and electronic device - Google Patents

Obstacle identification method and apparatus, storage medium, and electronic device (障碍物识别方法、装置、存储介质及电子设备)

Info

Publication number
WO2023065312A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
obstacle
occluded
cloud data
occluded object
Prior art date
Application number
PCT/CN2021/125748
Other languages
English (en)
French (fr)
Inventor
徐棨森
Original Assignee
深圳市速腾聚创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市速腾聚创科技有限公司
Priority to PCT/CN2021/125748
Priority to CN202180102319.3A
Publication of WO2023065312A1

Definitions

  • The present application relates to the field of radar imaging, and in particular to an obstacle identification method and apparatus, a storage medium, and an electronic device.
  • In the field of autonomous driving, the accuracy of obstacle detection is of great significance and is key to automated driving.
  • Lidar can generate three-dimensional information, has high ranging accuracy, can accurately obtain target positions, and can effectively improve obstacle detection, so it is widely used in autonomous driving.
  • A deep learning network can effectively extract features and, using the expressive power of a multi-layer neural network, accurately detect obstacles based on the point cloud data obtained by the lidar. Therefore, in autonomous driving, a deep learning network is often used together with a processor to restore, model, and make judgments on the point cloud data collected by the lidar.
  • However, in order to improve the safety of automatic driving systems and safeguard the personal safety and driving experience of the people driving and riding, how to further improve the recognition and judgment of obstacles has long been a technical difficulty in the field of autonomous driving.
  • Embodiments of the present application provide an obstacle identification method and apparatus, a storage medium, and an electronic device, which can enhance the accuracy of obstacle identification. The technical solution is as follows:
  • In a first aspect, an embodiment of the present application provides an obstacle identification method, the method comprising:
  • performing first obstacle detection on point cloud data corresponding to an occluded object, and determining a first confidence level that the type of the occluded object is a target obstacle type;
  • performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining an occlusion rate of the occluded object;
  • determining, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
  • when the second confidence level is greater than a preset confidence threshold, determining that the type of the occluded object is the target obstacle type.
  • In a second aspect, an embodiment of the present application provides an obstacle identification method, the method comprising:
  • performing occlusion calculation on point cloud data corresponding to an occluded object, and determining an occlusion rate of the occluded object;
  • inputting the point cloud data corresponding to the occluded object and the corresponding occlusion rate into an obstacle recognition model, and determining a target obstacle type corresponding to the occluded object, wherein the obstacle recognition model is trained from multiple known obstacle types, corresponding point cloud data, and corresponding occlusion rates.
  • In a third aspect, an embodiment of the present application provides an obstacle identification apparatus (a minimal code sketch of its modules follows the aspect list below), the apparatus comprising:
  • a first object recognition module, configured to perform first obstacle detection on point cloud data corresponding to an occluded object, and determine a first confidence level that the type of the occluded object is a target obstacle type;
  • an occlusion calculation module, configured to perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine an occlusion rate of the occluded object;
  • a second object recognition module, configured to determine, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
  • a third object recognition module, configured to determine that the type of the occluded object is the target obstacle type when the second confidence level is greater than a preset confidence threshold.
  • In a fourth aspect, an embodiment of the present application provides an obstacle identification apparatus, comprising:
  • a calculation module, configured to perform occlusion calculation on point cloud data corresponding to an occluded object, and determine an occlusion rate of the occluded object;
  • an object recognition module, configured to input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into an obstacle recognition model, and determine a target obstacle type corresponding to the occluded object, wherein the obstacle recognition model is trained from multiple known obstacle types, corresponding point cloud data, and corresponding occlusion rates.
  • In a fifth aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.
  • In a sixth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to execute the above method steps.
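  • To make the third aspect's module decomposition concrete, the following is a minimal, hypothetical Python sketch; the method names mirror modules 801-804, while the detection, occlusion, and compensation internals are placeholders for the techniques described in the embodiments below.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ObstacleIdentificationApparatus:
    """Hypothetical sketch of the third-aspect apparatus (modules 801-804)."""

    confidence_threshold: float = 0.5  # preset confidence threshold

    def first_object_recognition(self, points: np.ndarray) -> tuple[str, float]:
        """Module 801: first obstacle detection -> (target type, first confidence).

        A real implementation would run a deep learning network such as
        SqueezeSeg; a fixed placeholder result is returned here.
        """
        return "large_vehicle", 0.48

    def occlusion_calculation(self, points: np.ndarray) -> float:
        """Module 802: occlusion calculation -> occlusion rate in [0, 1]."""
        return 0.15  # placeholder; see the mask-based method below

    def second_object_recognition(self, first_conf: float, occlusion_rate: float) -> float:
        """Module 803: compensate the first confidence using the occlusion rate.

        Stand-in rule: lower occlusion rate -> larger compensation, matching
        the negative correlation described in the embodiments below.
        """
        return min(first_conf + 0.2 * (1.0 - occlusion_rate), 1.0)

    def third_object_recognition(self, second_conf: float) -> bool:
        """Module 804: threshold the second confidence."""
        return second_conf > self.confidence_threshold


apparatus = ObstacleIdentificationApparatus()
obstacle_type, conf1 = apparatus.first_object_recognition(np.zeros((100, 3)))
rate = apparatus.occlusion_calculation(np.zeros((100, 3)))
conf2 = apparatus.second_object_recognition(conf1, rate)
print(obstacle_type, apparatus.third_object_recognition(conf2))
```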
  • The beneficial effects brought by the technical solutions provided by some embodiments of the present application include at least the following:
  • After determining the first confidence level that the type of the occluded object is the target obstacle type, the present application obtains a second confidence level from the occlusion rate of the occluded object, and judges, based on the second confidence level, whether the type of the occluded object is the target obstacle type. The present application also inputs the occlusion rate and point cloud data of the occluded object into a trained obstacle recognition model and obtains the target obstacle type corresponding to the occluded object from that model. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with influencing factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • FIG. 1A is a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application;
  • FIG. 1B is a schematic assembly diagram of a vehicle and a vehicle-mounted radar provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an obstacle identification method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a scene in which an occluded object is occluded, provided by an embodiment of the present application;
  • FIG. 4 is a numerical schematic diagram of a preset comparison table provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of another obstacle identification method provided by an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of another obstacle identification method provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a deep learning network provided by an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an obstacle identification apparatus provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of another obstacle identification apparatus provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • In the description of this application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance. Unless otherwise expressly specified and limited, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or other steps or units inherent to the process, method, product, or device.
  • In addition, unless otherwise specified, "plural" means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" generally indicates an "or" relationship between the objects before and after it.
  • The present application is described in detail below with reference to specific embodiments.
  • As shown in FIG. 1A, a schematic diagram of a scene for acquiring point cloud data provided by an embodiment of the present application includes: a vehicle 101 equipped with a vehicle-mounted radar, a tree 102A and a pedestrian 102B in an occlusion relationship, and a flower bed 103A and a truck 103B in an occlusion relationship.
  • The tree 102A is the occluder of the pedestrian 102B, and the pedestrian 102B is the object occluded by the tree 102A; the flower bed 103A is the occluder of the truck 103B, and the truck 103B is the object occluded by the flower bed 103A.
  • It can be understood that the tree 102A, the pedestrian 102B, the flower bed 103A, and the truck 103B are all obstacles to the vehicle 101.
  • As shown in FIG. 1B, a schematic assembly diagram of the vehicle and the vehicle-mounted radar provided by an embodiment of the present application includes: a vehicle 101 and a vehicle-mounted radar 101A.
  • The vehicle 101 is provided with the vehicle-mounted radar 101A. It can be understood that, in this application, the vehicle 101 is merely a carrying platform for the lidar.
  • The carrying platform bears the lidar and drives it to move, so the lidar acquires corresponding linear and angular velocities.
  • The carrying platform may be a vehicle, a drone, or another device, which is not limited in this application.
  • The vehicle-mounted radar 101A may be a millimeter-wave radar, a lidar, or the like.
  • For example, the lidar may be a mechanical lidar, a solid-state lidar, or the like.
  • The vehicle-mounted radar 101A obtains reflected signals through ranging methods such as time-of-flight (TOF) ranging or frequency-modulated continuous wave (FMCW) ranging; each reflected signal carries one or more of spatial position coordinates, a timestamp, and echo strength, and is treated as a data point. Each data point further includes one or more of the distance, angle, and radial velocity of the corresponding obstacle relative to the vehicle-mounted radar.
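  • As a concrete illustration, the following is a minimal sketch of one possible in-memory representation of such a data point; the field names are assumptions for illustration, not a format specified by this application.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RadarPoint:
    """One reflected signal from the vehicle-mounted radar, as described above.

    Fields beyond the spatial coordinates are optional because each data point
    carries *one or more* of these attributes.
    """

    x: float                                   # spatial position coordinates (m)
    y: float
    z: float
    timestamp: Optional[float] = None          # acquisition time (s)
    echo_strength: Optional[float] = None      # return intensity
    distance: Optional[float] = None           # range to the radar (m)
    angle: Optional[float] = None              # bearing relative to the radar (rad)
    radial_velocity: Optional[float] = None    # Doppler velocity (m/s)
```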
  • Because the tree 102A occludes the pedestrian 102B, some of the points in the point cloud data that the vehicle 101 collects for the pedestrian 102B are of low quality. Restoring the contour of the pedestrian 102B from this point cloud data is therefore difficult for the occluded portion. Even when point cloud completion (a repair technique that estimates a complete point cloud from a partial one, so as to obtain a higher-quality point cloud) is used to assist in restoring the occluded portion, decisions must still be made based on prior information about the basic structure of the object.
  • Especially when a deep learning network is used to process the point cloud data to recognize and restore obstacles, which decision aids help the network restore the obstacles corresponding to the point cloud data faster and more accurately is a key technical problem in this field.
  • In some cases, based on the actual needs of the usage scenario, it is preferable for the radar system to focus more on preset types of obstacles.
  • For example, in the fields of advanced driver assistance systems (ADAS) and unmanned driving, an ADAS based on vehicle-mounted radar is expected to focus more on specific obstacle types such as pedestrians and vehicles; in the medical field, a detector is expected to focus more on specific obstacle types such as typical foreign objects.
  • In one embodiment, as shown in FIG. 2, an obstacle identification method is proposed, which can be implemented by a computer program and can run on an obstacle identification apparatus based on the von Neumann architecture.
  • The computer program can be integrated into an application or run as an independent utility application.
  • Specifically, the obstacle identification method includes:
  • S101: Perform first obstacle detection on point cloud data of an occluded object, and determine a first confidence level that the type of the occluded object is a target obstacle type.
  • An obstacle type can be understood as a preset category used to classify obstacles.
  • For example, one division classifies obstacles as living or non-living; another classifies obstacles as pedestrians, vehicles, animals, plants, and so on.
  • In another usage scenario, one division classifies obstacles as heart, stomach, liver, spleen, and so on. This application places no restriction on obstacle types.
  • Confidence can be understood as follows: when the real type of an obstacle has a certain probability of being the target obstacle type, the confidence level is the size of that probability. For example, the confidence level that an obstacle is a pedestrian may be 0.55.
  • In one embodiment, before step S101, the method further includes: dividing the collected point cloud data into point cloud data of occluded objects and point cloud data of occluders. Based on this division, step S101 is then implemented: performing first obstacle detection on the point cloud data of the occluded object, and determining the first confidence level that the type of the occluded object is the target obstacle type.
  • For example, the vehicle 101 collects point cloud data comprising 30,291 points via the vehicle-mounted radar 101A, divides these points into point cloud data of multiple obstacles through a clustering or classification algorithm, and then, based on the occlusion relationships between the obstacles parsed from the corresponding point cloud data, divides the obstacles into occluded objects and occluders. Further, the occluded-object class includes the point cloud data of the pedestrian 102B and of the truck 103B, and the occluder class includes the point cloud data of the tree 102A and of the flower bed 103A.
  • For the specific working principle of this step, refer to steps S202 and S203 shown in FIG. 5 below.
  • In another embodiment, step S101 further includes: performing first obstacle detection on the point cloud data of the occluder, and determining a first confidence level that the type of the occluder is the target obstacle type.
  • Specifically, this application uses a clustering or classification algorithm to divide a large amount of collected point cloud data into point cloud data corresponding to multiple obstacles, and then performs first obstacle detection on each obstacle to determine the first confidence level that its type is the target obstacle type.
  • In addition, this application also includes: dividing the point cloud data into point cloud data of multiple obstacles, then dividing the obstacles into occluded objects and occluders based on the occlusion relationships parsed from the corresponding point cloud data, performing first obstacle detection on the point cloud data corresponding to each occluded object and each occluder, and obtaining the first confidence level corresponding to each occluder and each occluded object.
  • It can be understood that the method of performing first obstacle detection on the point cloud data of an obstacle, on the point cloud data corresponding to an occluded object, and on the point cloud data corresponding to an occluder is the same.
  • The following example describes preliminary obstacle detection on the point cloud data corresponding to an occluded object; it applies equally to first obstacle detection on the other types of point cloud data.
  • In an embodiment of the present application, a deep learning network is used to perform preliminary obstacle detection on the point cloud data corresponding to the occluded object to obtain the first confidence level that the type of the occluded object is the target obstacle type.
  • The deep learning network can be built with a lightweight deep learning framework, such as the SqueezeSeg lightweight deep learning network. Taking SqueezeSeg as an example, the working principle of the first obstacle detection in this application is as follows:
  • The DBSCAN algorithm is used to obtain the point cloud data corresponding to one occluded object. The three-dimensional (x, y, z) coordinates of each point are obtained and converted into image pixel coordinates (i, j) using: i = (arcsin(z/range) × 57.3 + 16) / 0.33; j = arctan(x/y) × 57.3 / 0.2 for y > 0, and j = (90 − arctan(x/y) × 57.3) / 0.2 for y < 0. Using these pixel coordinates, all points are arranged into a 64×512-pixel image; to facilitate computation by the SqueezeSeg network, the 64×512 image is reshaped into a 1×32768 arrangement and fed into the network (see the sketch below). The SqueezeSeg network computes over each input point and outputs a label value for the occluded object corresponding to the point cloud data; the label value includes the target obstacle type and the confidence that the type of the occluded object is that target obstacle type. For example, the SqueezeSeg network may yield a confidence of 0.87 that one occluded object is a small trash can, and a confidence of 0.48 that another occluded object is a pedestrian.
  • This application also encompasses methods of performing preliminary obstacle calculation on the occluded object based on other deep learning networks to obtain the first confidence level; there is no limitation on the type or structure of the deep learning network or on the process of the preliminary obstacle calculation.
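  • The coordinate conversion above translates directly into code. The sketch below is a minimal illustration rather than the patent's implementation: it applies the two conversion equations as given (57.3 ≈ 180/π converts radians to degrees; the constants 16, 0.33, and 0.2 come from the text) and scatters the points into a 64×512 image; using the point range as the pixel value is an assumption.

```python
import numpy as np


def project_to_image(points: np.ndarray, height: int = 64, width: int = 512) -> np.ndarray:
    """Project (x, y, z) lidar points into a 64x512 image per the equations above.

    points: (N, 3) array of x, y, z coordinates with y != 0.
    Returns a (height, width) image whose pixels hold the point range.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)  # "range" in the equations

    # i = (arcsin(z / range) * 57.3 + 16) / 0.33
    i = (np.arcsin(z / rng) * 57.3 + 16) / 0.33
    # j = arctan(x/y) * 57.3 / 0.2 for y > 0; j = (90 - arctan(x/y) * 57.3) / 0.2 for y < 0
    with np.errstate(divide="ignore", invalid="ignore"):
        at = np.arctan(x / y) * 57.3
    j = np.where(y > 0, at / 0.2, (90 - at) / 0.2)

    image = np.zeros((height, width), dtype=np.float32)
    ii = np.clip(i.astype(int), 0, height - 1)
    jj = np.clip(j.astype(int), 0, width - 1)
    image[ii, jj] = rng

    # Before being fed to SqueezeSeg, the 64x512 image is flattened to 1x32768,
    # e.g. image.reshape(1, -1).
    return image
```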
  • S102: Perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object.
  • The occlusion rate can be understood as the degree to which the occluded object is occluded, for example, the percentage of the occluded area in the total area of the occluded object, or the importance of the occluded part of the occluded object.
  • One occlusion calculation method is to obtain initial data containing rich semantic information through various sensors installed on the vehicle 101, so that the vehicle 101 can restore the initial data into a three-dimensional image through computer vision methods and thereby perform occlusion calculation on the occluded object.
  • For example, the sensors include lidar sensors and vision sensors. Vision sensors include depth cameras, monocular cameras, binocular cameras, RGB-D cameras, and the like; lidar sensors include mechanical lidar, semi-solid-state lidar, solid-state lidar, and the like.
  • For example, depth cameras include structured-light depth cameras, TOF depth cameras, and binocular stereo vision cameras.
  • The processor obtains, through the depth camera, a three-dimensional image that includes depth distance information between the occluded object and the occluder.
  • In another embodiment, the present application restores the point cloud data collected by the lidar sensor to obtain a three-dimensional image including depth distance information between the occluded object and the occluder.
  • As shown in FIG. 3, a schematic diagram of a scene in which occluded objects are occluded, provided by an embodiment of the present application, includes: a passerby 301, a truck 302, and a puppy 303.
  • The passerby 301, the truck 302, and the puppy 303 are all obstacles; the passerby 301 is the occluder of the truck 302, and the truck 302 is the occluder of the puppy 303.
  • The occluded area of the truck 302 is a small fraction of its total area, and the occluded part is not very important, while the occluded area of the puppy 303 is a large fraction of its total area, and the occluded part is important. For example, the occlusion rate of the truck 302 is 15%, and the occlusion rate of the puppy 303 is 67%.
  • S103: Determine, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type.
  • Because the point cloud data of the occluded object includes low-quality points caused by the occlusion, the processor connected to the lidar calculates from that point cloud data the first confidence level that the type of the occluded object is the target obstacle type, and then compensates the first confidence level according to the occlusion rate of the occluded object to obtain the second confidence level.
  • The occlusion rate of the occluded object is negatively correlated with the compensation applied to the first confidence level; in other words, the lower the occlusion rate, the higher the degree of compensation applied to the first confidence level.
  • For example, in one embodiment, the second confidence level is obtained as follows: a compensation coefficient is derived from the occlusion rate, and the first confidence level is multiplied by the compensation coefficient to obtain the second confidence level (sketched in code below). As shown in FIG. 3, the processor obtains from the point cloud data of the truck 302 a first confidence level of 0.48 that its type is a large-vehicle type; the occlusion rate of the truck 302 is 15%, so the compensation coefficient is 1.4, and the resulting second confidence level is 0.68. The processor obtains from the point cloud data of the puppy 303 a first confidence level of 0.3 that its type is a small-animal type; the occlusion rate of the puppy 303 is 67%, so the compensation coefficient is 1.1, and the resulting second confidence level is 0.33.
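  • A minimal sketch of this multiplicative compensation follows. The text provides only two sample points for the rate-to-coefficient mapping (15% → 1.4 and 67% → 1.1), so the linear interpolation between them is an assumption.

```python
def compensation_coefficient(occlusion_rate: float) -> float:
    """Illustrative mapping: lower occlusion rate -> higher compensation coefficient.

    Linearly interpolates between the two examples in the text:
    15% occlusion -> 1.4, 67% occlusion -> 1.1.
    """
    lo_rate, lo_coef = 0.15, 1.4
    hi_rate, hi_coef = 0.67, 1.1
    t = (occlusion_rate - lo_rate) / (hi_rate - lo_rate)
    return lo_coef + t * (hi_coef - lo_coef)


def second_confidence(first_confidence: float, occlusion_rate: float) -> float:
    """Second confidence = first confidence x compensation coefficient."""
    return min(first_confidence * compensation_coefficient(occlusion_rate), 1.0)


print(second_confidence(0.48, 0.15))  # ~0.67 for the truck (the text reports 0.68)
print(second_confidence(0.30, 0.67))  # ~0.33 for the puppy, as in the text
```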
  • In another embodiment, the compensation for the first confidence level is obtained from the occlusion rate via a preset comparison table.
  • As shown in FIG. 4, a numerical schematic diagram of a preset comparison table provided by an embodiment of the present application, the preset comparison table includes occlusion rates of occluded objects and the corresponding compensation values, and can be set in the processor by the user in advance. In this embodiment, the second confidence level is obtained as follows: the compensation value corresponding to the occlusion rate is looked up in the preset comparison table, and the compensation value is added to the first confidence level to obtain the second confidence level.
  • It can be understood that the preset comparison table shown in FIG. 4 is only an example, and the present application places no restriction on the form, specific values, and so on of the preset comparison table.
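  • A sketch of the additive table-lookup variant is given below. Since FIG. 4's actual numbers are not reproduced in this text, the bucket boundaries and compensation values are hypothetical; the lookup-then-add logic is what the embodiment specifies.

```python
# Hypothetical preset comparison table: (occlusion-rate upper bound, compensation value).
# The real values would come from FIG. 4 and be set in the processor by the user;
# lower occlusion rates map to larger compensation, per the negative correlation.
PRESET_TABLE = [
    (0.20, 0.12),
    (0.40, 0.09),
    (0.60, 0.06),
    (0.80, 0.03),
    (1.00, 0.01),
]


def lookup_compensation(occlusion_rate: float) -> float:
    """Look up the compensation value for an occlusion rate in the preset table."""
    for upper_bound, compensation in PRESET_TABLE:
        if occlusion_rate <= upper_bound:
            return compensation
    return 0.0


def second_confidence(first_confidence: float, occlusion_rate: float) -> float:
    """Second confidence = first confidence + table compensation."""
    return min(first_confidence + lookup_compensation(occlusion_rate), 1.0)
```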
  • In another embodiment, the compensation for the first confidence level is obtained from the occlusion rate via a compensation model, which is trained to convergence on a training set comprising occlusion rates and the compensation value corresponding to each occlusion rate.
  • Specifically, the compensation value corresponding to the occlusion rate is obtained through the compensation model, and the second confidence level is obtained from the compensation value and the first confidence level.
  • S104: When the second confidence level is greater than a preset confidence threshold, determine that the type of the occluded object is the target obstacle type.
  • The second confidence level is obtained by compensating the first confidence level according to the occlusion rate of the occluded object; when the second confidence level is greater than the preset confidence threshold, the type of the occluded object is determined to be the target obstacle type.
  • For example, the preset confidence threshold is 0.5; the processor obtains from the point cloud data of the truck 302 a first confidence level of 0.48 that its type is a large-vehicle type, and the occlusion rate of the truck 302 is 15%, so the second confidence level obtained from the occlusion rate is 0.6, and the type of the truck 302 is determined to be a large-vehicle type.
  • This application addresses the technical problem that, in some cases, based on the actual needs of the usage scenario, the radar system is expected to focus more on preset types of obstacles. For example, in the fields of advanced driver assistance systems (ADAS) and unmanned driving, an ADAS based on vehicle-mounted radar is expected to focus more on specific obstacle types such as pedestrians and vehicles; in the medical field, a detector is expected to focus more on specific obstacle types such as typical foreign objects.
  • After determining the first confidence level that the type of the occluded object is the target obstacle type, the present application obtains the second confidence level from the occlusion rate of the occluded object, and judges, based on the second confidence level, whether the type of the occluded object is the target obstacle type. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with influencing factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • In one embodiment, as shown in FIG. 5, another obstacle identification method is proposed, which can be implemented by a computer program and can run on an obstacle identification apparatus based on the von Neumann architecture.
  • The computer program can be integrated into an application or run as an independent utility application.
  • Specifically, the obstacle identification method includes:
  • S201: Perform first obstacle detection on point cloud data of an occluded object, and determine a first confidence level that the type of the occluded object is a target obstacle type.
  • Specifically, step S201 is the same as step S101 and will not be repeated here.
  • S202: Extract feature values of the collected point cloud data, and distinguish, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluder.
  • Specifically, the point cloud data is divided into point cloud data corresponding to multiple obstacles through a clustering algorithm, and the obstacles are further divided into occluded objects and occluders through the feature values of the point cloud data, so that the point cloud data is divided into point cloud data of occluders and point cloud data of occluded objects.
  • For example, although the radial velocities of the points of a moving obstacle may differ, the overall moving speed is consistent, so points with small radial-velocity differences are clustered into the point cloud data of one obstacle, as illustrated in the sketch below.
  • The multiple obstacles are then further divided into occluders and occluded objects, so that the point cloud data is divided into point cloud data corresponding to occluders and point cloud data corresponding to occluded objects.
  • It can be understood that the present application also encompasses other algorithms and implementations for distinguishing the point cloud data corresponding to obstacles, such as the DBSCAN algorithm, deep learning networks, and the like.
  • In one embodiment, the present application acquires the feature values of the point cloud data based on cameras and radars so as to obtain the point cloud data corresponding to occluders and to occluded objects. It can be understood that the present application also includes other methods of judging occluders and occluded objects among multiple obstacles, for example, an RGB-D three-dimensional obstacle detection method.
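  • As a sketch of the clustering idea above, the following uses scikit-learn's DBSCAN on position combined with radial velocity, so that nearby points with small radial-velocity differences fall into the same obstacle cluster; the velocity weighting and DBSCAN parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_obstacles(points_xyz: np.ndarray, radial_velocity: np.ndarray,
                      velocity_weight: float = 2.0) -> np.ndarray:
    """Group points into per-obstacle clusters.

    points_xyz: (N, 3) point coordinates; radial_velocity: (N,) Doppler values.
    Scaling the radial velocity makes DBSCAN reluctant to merge points whose
    radial velocities differ, reflecting the consistency principle in the text.
    Returns per-point cluster labels (-1 marks noise).
    """
    features = np.hstack([points_xyz, velocity_weight * radial_velocity[:, None]])
    return DBSCAN(eps=0.7, min_samples=5).fit_predict(features)
```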
  • S203: Perform occlusion judgment calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object.
  • The following is one possible method of obtaining the occlusion rate: after the obstacle corresponding to the point cloud data is judged to be an occluded object according to the feature values of the point cloud data, a two-dimensional image of the occluded object, represented in matrix units, is obtained based on the point cloud data; an instance segmentation algorithm is used to extract the mask of the occluded object's visible part, the mask having the same matrix-unit format as the two-dimensional image; the complete mask of the occluded object is computed through a morphological closing operation; and the occlusion rate of the occluded object is then calculated from the mask of the occluded object and the complete mask. For example, a truck is 25% occluded.
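  • A sketch of the mask-based occlusion-rate calculation with OpenCV follows. It assumes the instance-segmentation mask of the occluded object's visible part is already available as a binary image, and the closing-kernel size is an assumption.

```python
import cv2
import numpy as np


def occlusion_rate(visible_mask: np.ndarray, kernel_size: int = 25) -> float:
    """Estimate the occlusion rate from an instance-segmentation mask.

    visible_mask: binary (H, W) uint8 mask of the object's visible pixels.
    A morphological closing fills the occluded gap to approximate the complete
    mask, as described above; the rate is the occluded fraction of that mask.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    complete_mask = cv2.morphologyEx(visible_mask, cv2.MORPH_CLOSE, kernel)

    complete_area = np.count_nonzero(complete_mask)
    if complete_area == 0:
        return 0.0
    occluded_area = complete_area - np.count_nonzero(visible_mask)
    return occluded_area / complete_area  # e.g. 0.25 for a 25%-occluded truck
```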
  • S204: Determine a corresponding compensation score according to the occlusion rate.
  • In one embodiment, the compensation for the first confidence level is obtained from the occlusion rate via a preset comparison table. The preset comparison table includes occlusion rates of occluded objects and the corresponding compensation values, and can be set in the processor by the user in advance, so that the processor can compensate the first confidence level based on the preset comparison table.
  • In another embodiment, the compensation for the first confidence level is obtained from the occlusion rate via a compensation model, which is trained to convergence on a training set including occlusion rates.
  • In another embodiment, the compensation score is determined as follows: a corresponding compensation coefficient f is determined according to the target obstacle type corresponding to the occluded object; a corresponding basic compensation score x is obtained according to the occlusion rate; and the corresponding compensation score is determined as f × x (sketched in code below).
  • For example, the target obstacle type of the occluded object is a large-vehicle type, whose compensation coefficient is 0.8; based on the occlusion rate of the occluded object, the basic compensation score of the truck is 0.2; from the compensation coefficient and the basic compensation score, the final compensation score is 0.16.
  • The compensation coefficient corresponding to the target obstacle type may be determined by judging whether the target obstacle type is one that is easily occluded; if so, a higher compensation coefficient is set (for example, the compensation coefficients of large-vehicle and large-building types are higher than that of a small trash-can type). It may also be determined by judging, based on the actual application scenario, whether the target obstacle type is one that requires more attention; if so, a higher compensation coefficient is set (for example, the compensation coefficient of the pedestrian type is higher than that of a pole-barricade type).
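  • A sketch of the f × x compensation score follows, reproducing the large-vehicle example (f = 0.8, x = 0.2 → 0.16). The coefficients other than 0.8 and the rate-to-base-score mapping are assumptions that merely follow the stated rules.

```python
# Per-type compensation coefficients f. The 0.8 for the large-vehicle type comes
# from the text; the other values are hypothetical, following the stated rules
# (easily occluded types and types needing attention get higher coefficients).
TYPE_COEFFICIENTS = {
    "large_vehicle": 0.8,
    "large_building": 0.75,
    "pedestrian": 0.9,
    "pole_barricade": 0.3,
    "small_trash_can": 0.2,
}


def base_compensation_score(occlusion_rate: float) -> float:
    """Hypothetical mapping from occlusion rate to basic compensation score x.

    Chosen so that 15% occlusion yields x = 0.2, matching the truck example,
    with lower occlusion giving a larger score (negative correlation).
    """
    return 0.2 * (1.0 - occlusion_rate) / (1.0 - 0.15)


def compensation_score(target_type: str, occlusion_rate: float) -> float:
    """Compensation score = f * x, as described above."""
    return TYPE_COEFFICIENTS[target_type] * base_compensation_score(occlusion_rate)


print(round(compensation_score("large_vehicle", 0.15), 2))  # 0.16
```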
  • S205: Perform compensation calculation on the first confidence level according to the compensation score, and determine the second confidence level.
  • Compensation calculation is performed on the first confidence level according to the compensation score to determine the second confidence level. For example, based on the point cloud data of the truck, the processor obtains a first confidence level of 0.48 that the truck's type is a large-vehicle type; the occlusion rate of the truck 302 is 15%, so the compensation score is 0.16, and the resulting second confidence level is 0.64. The first confidence level obtained by the processor based on the puppy's point cloud data, namely that the puppy's type is a small-animal type, is 0.3; the puppy's occlusion rate is 67%, so the compensation score is 0.05, and the resulting second confidence level is 0.35.
  • S206: When the second confidence level is greater than a preset confidence threshold, determine that the type of the occluded object is the target obstacle type.
  • The second confidence level is obtained by compensating the first confidence level according to the occlusion rate of the occluded object; when the second confidence level is greater than the preset confidence threshold, the type of the occluded object is determined to be the target obstacle type. For the details of step S206, refer to step S104 shown in FIG. 2, which will not be repeated here.
  • In one embodiment, the present application further includes: when one or more of preset flagged obstacle types are obstacle types of concern, and it is determined that the target obstacle type corresponding to an obstacle is an obstacle type of concern, marking the obstacle in the three-dimensional image output based on all point cloud data.
  • The marking method can be annotation, color, and the like.
  • For example, the obstacle type of concern is determined to be the pedestrian type; when the processor determines that the target obstacle types corresponding to the multiple obstacles include the pedestrian type, the frame of the obstacle is marked with a red frame in the three-dimensional image.
  • After determining the first confidence level that the type of the occluded object is the target obstacle type, the present application obtains the second confidence level from the occlusion rate of the occluded object, and judges, based on the second confidence level, whether the type of the occluded object is the target obstacle type. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with influencing factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • In one embodiment, as shown in FIG. 6, another obstacle identification method is proposed, which can be implemented by a computer program and can run on an obstacle identification apparatus based on the von Neumann architecture.
  • The computer program can be integrated into an application or run as an independent utility application.
  • Specifically, the obstacle identification method includes:
  • S301: Perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object. Specifically, step S301 is consistent with steps S202 and S203 and will not be repeated here.
  • S302: Input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determine the target obstacle type corresponding to the occluded object.
  • The obstacle recognition model uses a deep learning network as its framework and is trained from multiple known obstacle types, the corresponding point cloud data, and the corresponding occlusion rates. One plausible input arrangement is sketched below.
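  • The following PyTorch sketch shows one plausible way the occlusion rate could enter the obstacle recognition model alongside point cloud features: a pooled point encoder whose embedding is concatenated with the occlusion rate. This is an illustrative architecture, not the network specified by the patent.

```python
import torch
import torch.nn as nn


class ObstacleRecognitionModel(nn.Module):
    """Classifies an occluded object from its point cloud and occlusion rate."""

    def __init__(self, num_types: int, point_dim: int = 3, embed_dim: int = 64):
        super().__init__()
        self.point_encoder = nn.Sequential(
            nn.Linear(point_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
        )
        # The +1 appends the occlusion rate to the pooled point embedding,
        # making it one of the model's decision conditions.
        self.head = nn.Sequential(
            nn.Linear(embed_dim + 1, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_types),
        )

    def forward(self, points: torch.Tensor, occlusion_rate: torch.Tensor) -> torch.Tensor:
        """points: (B, N, 3); occlusion_rate: (B, 1) -> (B, num_types) confidences."""
        embedding = self.point_encoder(points).max(dim=1).values  # pool over points
        logits = self.head(torch.cat([embedding, occlusion_rate], dim=1))
        return torch.softmax(logits, dim=1)  # per-type confidence
```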
  • As shown in FIG. 7, a schematic structural diagram of a deep learning network provided by an embodiment of the present application, namely a DNN-HMM model, introduces an error back-propagation algorithm on the basis of an existing neural network model to optimize and improve the recognition accuracy of the deep learning network model.
  • The deep learning network shown is composed of an input layer, hidden layers, and an output layer.
  • The input layer usually includes a plurality of input units and is used to compute, from the occlusion rate and point cloud data input to the deep learning network, the values fed into the bottommost hidden-layer units. After the occlusion rate and point cloud data are input, each input unit uses its own weight values to calculate the output value passed to the bottommost hidden layer.
  • Each hidden layer includes multiple hidden-layer units; a hidden-layer unit receives input values from the hidden-layer units of the layer below, weights and sums them according to the weight values of its own layer, and outputs the result of the weighted summation to the hidden-layer units of the layer above.
  • The output layer includes a plurality of output units; an output unit receives input values from the hidden-layer units of the topmost hidden layer, weights and sums them according to the weight values of this layer, and calculates the actual output value from the result. Based on the error between the expected output value and the actual output value, the error is back-propagated from the output layer, and the connection weights and thresholds of each layer are adjusted along the output path.
  • In an embodiment of the present application, the DNN-HMM model incorporating the error back-propagation algorithm is used to create an initial model, and the occlusion rates and point cloud data corresponding to occluded objects are fed into the deep learning network model.
  • The training process of the deep learning network model usually consists of two parts: forward propagation and back propagation. In forward propagation, the input passes through the transfer functions (also known as activation functions or conversion functions) of the hidden-layer neurons (also called nodes) and is transmitted to the output layer, the state of the neurons in each layer affecting the state of the neurons in the next layer; the actual output value, namely the confidence of the target obstacle type corresponding to the occluded object, is calculated at the output layer. The expected error between the actual output value and the expected output value is then calculated, and the parameters of the deep learning network model are adjusted based on that error.
  • The parameters include the weight values and thresholds of each layer; after training is completed, the obstacle recognition model is generated.
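  • A sketch of the forward-propagation / back-propagation training loop described above follows, reusing the illustrative model from the previous sketch; the loss and optimizer choices are assumptions (the text requires only that the error between actual and expected output drive the adjustment of each layer's weights and thresholds).

```python
import torch
import torch.nn as nn


def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> None:
    """Train on (points, occlusion_rate, label) batches.

    loader yields points (B, N, 3), occlusion_rate (B, 1), and integer labels (B,)
    encoding the known obstacle type of each sample.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.NLLLoss()  # expected error between actual and expected output

    for _ in range(epochs):
        for points, occlusion_rate, labels in loader:
            confidences = model(points, occlusion_rate)          # forward propagation
            loss = criterion(torch.log(confidences + 1e-8), labels)
            optimizer.zero_grad()
            loss.backward()                                      # back propagation
            optimizer.step()                                     # adjust weights/thresholds
```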
  • The obstacle recognition model can take the occlusion rate as one of its decision conditions when making a decision based on the point cloud data, output the confidence of the target obstacle type corresponding to the occluded object, and improve the accuracy with which the obstacle recognition model identifies obstacles.
  • It can be understood that the obstacle recognition model can also be used to identify the target obstacle type of an occluder; that is, when the occlusion rate of an obstacle is 0, the obstacle recognition model can still output the confidence of the target obstacle type corresponding to the occluder.
  • In one embodiment, the obstacle recognition model may also output a two-dimensional or three-dimensional bounding box of the obstacle based on the obstacle's target obstacle type.
  • In one embodiment, the present application further includes: when one or more of preset flagged obstacle types are obstacle types of concern, and the obstacle recognition model determines that the target obstacle type corresponding to an obstacle is an obstacle type of concern, marking the obstacle in the three-dimensional image output by the obstacle recognition model based on all point cloud data. The marking method can be annotation, color, and the like.
  • For example, the obstacle type of concern is determined to be the pedestrian type; when the obstacle recognition model determines that the target obstacle types corresponding to the multiple obstacles include the pedestrian type, a red frame is used to mark the frame of the obstacle in the three-dimensional image.
  • This application inputs the occlusion rate and point cloud data of the occluded object into the trained obstacle recognition model and obtains the target obstacle type corresponding to the occluded object from that model. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • FIG. 8 shows a schematic structural diagram of an obstacle identification apparatus provided by an exemplary embodiment of the present application.
  • The obstacle identification apparatus can be implemented as all or part of a device through software, hardware, or a combination of the two.
  • The apparatus includes a first object recognition module 801, an occlusion calculation module 802, a second object recognition module 803, and a third object recognition module 804.
  • The first object recognition module 801 is configured to perform first obstacle detection on the point cloud data corresponding to the occluded object, and determine the first confidence level that the type of the occluded object is the target obstacle type;
  • the occlusion calculation module 802 is configured to perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object;
  • the second object recognition module 803 is configured to determine, according to the occlusion rate of the occluded object and the first confidence level, the second confidence level that the type of the occluded object is the target obstacle type;
  • the third object recognition module 804 is configured to determine that the type of the occluded object is the target obstacle type when the second confidence level is greater than the preset confidence threshold.
  • Optionally, the occlusion calculation module 802 includes:
  • a feature extraction unit, configured to extract feature values of the collected point cloud data, and distinguish, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluder;
  • an occlusion rate unit, configured to perform occlusion judgment calculation on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluder, and determine the occlusion rate of the occluded object.
  • Optionally, the feature extraction unit includes:
  • a distance and angle subunit, configured to obtain the distance value and angle value between each point and the other points in the collected point cloud data, the distance values and angle values serving as the feature values of the point cloud data.
  • Optionally, the second object recognition module 803 includes:
  • a compensation unit, configured to determine a corresponding compensation score according to the occlusion rate;
  • a determining unit, configured to perform compensation calculation on the first confidence level according to the compensation score, and determine the second confidence level.
  • Optionally, the compensation unit includes:
  • a compensation coefficient subunit, configured to determine the corresponding compensation coefficient according to the target obstacle type corresponding to the occluded object;
  • a basic compensation subunit, configured to obtain the corresponding basic compensation score according to the occlusion rate;
  • a compensation score subunit, configured to determine the corresponding compensation score based on the compensation coefficient and the basic compensation score.
  • After determining the first confidence level that the type of the occluded object is the target obstacle type, the present application obtains the second confidence level from the occlusion rate of the occluded object, and judges, based on the second confidence level, whether the type of the occluded object is the target obstacle type. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with influencing factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • FIG. 9 shows a schematic structural diagram of another obstacle identification apparatus provided by an exemplary embodiment of the present application.
  • The obstacle identification apparatus can be implemented as all or part of a device through software, hardware, or a combination of the two.
  • The apparatus includes a calculation module 901 and an object recognition module 902.
  • The calculation module 901 is configured to perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object;
  • the object recognition module 902 is configured to input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determine the target obstacle type corresponding to the occluded object, wherein the obstacle recognition model is trained from multiple known obstacle types, corresponding point cloud data, and corresponding occlusion rates.
  • This application inputs the occlusion rate and point cloud data of the occluded object into the trained obstacle recognition model and obtains the target obstacle type corresponding to the occluded object from that model. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • It should be noted that when the obstacle identification apparatus provided by the above embodiments executes the obstacle identification method, the division into the above functional modules is used only as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the obstacle identification apparatus provided by the above embodiments and the obstacle identification method embodiments belong to the same concept; the implementation process is detailed in the method embodiments and will not be repeated here.
  • An embodiment of the present application also provides a computer storage medium. The computer storage medium can store a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the obstacle identification method described in the embodiments shown in FIGS. 1-7; for the specific execution process, refer to the specific description of those embodiments, which is not repeated here.
  • The present application also provides a computer program product storing at least one instruction, the at least one instruction being loaded by a processor to execute the obstacle identification method described in the embodiments shown in FIGS. 1-7; for the specific execution process, refer to the specific description of those embodiments, which is not repeated here.
  • Referring to FIG. 10, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
  • The communication bus 1002 is used to realize connection and communication between these components.
  • The user interface 1003 may include a display screen and a camera, and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • The processor 1001 may include one or more processing cores.
  • The processor 1001 uses various interfaces and lines to connect the various parts of the entire electronic device 1000, and executes the various functions of the device and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005.
  • Optionally, the processor 1001 may be implemented in hardware using at least one of digital signal processing (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • The processor 1001 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), and a modem.
  • The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used to render and draw the content that needs to be displayed on the display screen; and the modem is used to handle wireless communication. It can be understood that the modem may also not be integrated into the processor 1001 but be implemented by a separate chip.
  • The memory 1005 may include random access memory (RAM) and may also include read-only memory (ROM).
  • Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets.
  • The memory 1005 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function, and the like), instructions for implementing the above method embodiments, and the like; the data storage area may store the data and the like involved in the above method embodiments.
  • Optionally, the memory 1005 may also be at least one storage device located away from the aforementioned processor 1001.
  • As a computer storage medium, the memory 1005 may include an operating system, a network communication module, a user interface module, and an obstacle recognition application program.
  • In the electronic device 1000 shown in FIG. 10, the user interface 1003 is mainly used to provide the user with an input interface and obtain the data input by the user, and the processor 1001 can be used to call the obstacle recognition application program stored in the memory 1005 and specifically perform the following operations:
  • perform first obstacle detection on the point cloud data corresponding to the occluded object, and determine the first confidence level that the type of the occluded object is the target obstacle type;
  • perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object;
  • determine, according to the occlusion rate of the occluded object and the first confidence level, the second confidence level that the type of the occluded object is the target obstacle type;
  • when the second confidence level is greater than the preset confidence threshold, determine that the type of the occluded object is the target obstacle type.
  • In one embodiment, when performing the occlusion calculation on the point cloud data corresponding to the occluded object and determining the occlusion rate of the occluded object, the processor 1001 specifically performs the following operations: extract the feature values of the collected point cloud data, and distinguish, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluder; and perform occlusion judgment calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object.
  • In one embodiment, when extracting the feature values of the collected point cloud data, the processor 1001 specifically performs the following operation: obtain the distance value and angle value between each point and the other points in the collected point cloud data, the distance values and angle values serving as the feature values of the point cloud data.
  • In one embodiment, when determining the second confidence level according to the occlusion rate of the occluded object and the first confidence level, the processor 1001 specifically performs the following operations: determine a corresponding compensation score according to the occlusion rate; and perform compensation calculation on the first confidence level according to the compensation score to determine the second confidence level.
  • In one embodiment, when obtaining the corresponding compensation score according to the occlusion rate, the processor 1001 specifically performs the following operations: determine the corresponding compensation coefficient according to the target obstacle type corresponding to the occluded object; obtain the corresponding basic compensation score according to the occlusion rate; and determine the corresponding compensation score based on the compensation coefficient and the basic compensation score.
  • In another embodiment, the processor 1001 calls the obstacle recognition application program stored in the memory 1005 and specifically performs the following operations: perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object; and input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determine the target obstacle type corresponding to the occluded object.
  • After determining the first confidence level that the type of the occluded object is the target obstacle type, the present application obtains the second confidence level from the occlusion rate of the occluded object, and judges, based on the second confidence level, whether the type of the occluded object is the target obstacle type. The application also inputs the occlusion rate and point cloud data of the occluded object into the trained obstacle recognition model and obtains the target obstacle type corresponding to the occluded object from that model. By taking into account whether an obstacle is occluded, and combining the obstacle's point cloud data with influencing factors such as the occlusion rate, the application obtains the confidence that the obstacle type is the target obstacle type, enhances the accuracy of obstacle recognition, reduces the possibility of missed and false detections, and effectively improves driving safety and reliability.
  • The present application also provides an obstacle recognition system, which includes the electronic device shown in FIG. 10 and a radar sensor connected to the electronic device; the radar sensor is used to collect radar data corresponding to a target scene. For the structure of the electronic device, refer to the specific description of the embodiment shown in FIG. 10, which is not repeated here.
  • A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.

Landscapes

  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiments of this application disclose an obstacle identification method and apparatus, a storage medium, and an electronic device, wherein the method includes: performing first obstacle detection on point cloud data corresponding to an occluded object, and determining a first confidence level that the type of the occluded object is a target obstacle type; performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining an occlusion rate of the occluded object; determining, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type; and when the second confidence level is greater than a preset confidence threshold, determining that the type of the occluded object is the target obstacle type. The embodiments of this application can enhance the accuracy of obstacle identification.

Description

障碍物识别方法、装置、存储介质及电子设备 技术领域
本申请涉及雷达成像领域,尤其涉及一种障碍物识别方法、装置、存储介质及电子设备。
背景技术
在自动驾驶领域,障碍物的检测准确率具有重要的意义,是进行自动驾驶的关键。激光雷达能够产生三维信息,测距精度高,可以精确获得目标位置,能够有效提升障碍物的检测效果,因而在自动驾驶中广泛应用。以及深度学习网络能够有效地提取特征,并利用多层神经网络的表达能力,基于激光雷达获取的点云数据对障碍物进行准确的检测,因此在自动驾驶中,常常使用深度学习网络配合处理器对激光雷达采集的点云数据进行还原、建模和判断。然而为了提升自动驾驶***的安全性,保障相关人员在驾驶和乘坐过程中的人身安全和驾驶体验,如何进一步提高对障碍物的识别和判断一直以来都是自动驾驶领域的技术难点。
发明内容
本申请实施例提供了一种障碍物识别方法、装置、存储介质及电子设备,可以增强障碍物识别的准确性。所述技术方案如下:
第一方面,本申请实施例提供了一种障碍物识别方法,所述方法包括:
对被遮挡物对应的点云数据进行第一障碍物检测,确定所述被遮挡物的类型为目标障碍物类型的第一置信度;
对所述被遮挡物对应的点云数据进行遮挡计算,确定所述被遮挡物的被遮挡率;
根据所述被遮挡物的被遮挡率以及所述第一置信度,确定所述被遮挡物的类型为所述目标障碍物类型的第二置信度;
在所述第二置信度大于预设置信度阈值的情况下,确定所述被遮挡物的类型为所述目标障碍物类型。
第二方面,本申请实施例提供了一种障碍物识别方法,所述方法包括:
对被遮挡物对应的点云数据进行遮挡计算,确定所述被遮挡物的被遮挡率;
将所述被遮挡物对应的点云数据以及对应的被遮挡率输入至障碍物识别模型中,确定所述被遮挡物对应的目标障碍物类型;其中,所述障碍物识别模型由多个已知障碍物类型以及对应的点云数据、对应的被遮挡率训练得到。
第三方面,本申请实施例提供了一种障碍物识别装置,所述装置包括:
第一物体识别模块,用于对被遮挡物对应的点云数据进行第一障碍物检测,确定所述被遮挡物的类型为目标障碍物类型的第一置信度;
遮挡计算模块,用于对所述被遮挡物对应的点云数据进行遮挡计算,确定所述被遮挡物的被遮挡率;
第二物体识别模块,用于根据所述被遮挡物的被遮挡率以及所述第一置信度,确定所述被遮挡物的类型为所述目标障碍物类型的第二置信度;
第三物体识别模块,用于在所述第二置信度大于预设置信度阈值的情况下,确定所述被遮挡物的类型为所述目标障碍物类型。
第四方面,本申请实施例提供了一种障碍物识别装置,所述装置包括:
计算模块,用于对被遮挡物对应的点云数据进行遮挡计算,确定所述被遮挡物的被遮挡率;
物体识别模块,用于将所述被遮挡物对应的点云数据以及对应的被遮挡率输入至障碍物识别模型中,确定所述被遮挡物对应的目标障碍物类型;其中,所述障碍物识别模型由多个已知障碍物类型以及对应的点云数据、对应的被遮挡率训练得到。
第五方面,本申请实施例提供一种计算机存储介质,所述计算机存储介质存储有多条指令,所述指令适于由处理器加载并执行上述的方法步骤。
第六方面,本申请实施例提供一种电子设备,可包括:处理器和存储器;其中,所述存储器存储有计算机程序,所述计算机程序适于由所述处理器加载并执行上述的方法步骤。
本申请一些实施例提供的技术方案带来的有益效果至少包括:
本申请通过在确认了被遮挡物的类型为目标障碍物类型的第一置信度之后,通过被遮挡物的被遮挡率获取第二置信度,以基于第二置信度判断被遮挡物的类型是否为目标障碍物类型;以及本申请通过训练好的障碍物识别模型,输入被遮挡物的被遮挡率以及点云数据,基于障碍物识别模型获取该被遮挡物对应的目标障碍物类型;本申请中考虑到障碍物是否被遮挡,基于遮挡率等影响因素综合障碍物的点云数据,从而得出障碍物类型为目标障碍物类型的置信度,增强对障碍物识别的准确性,降低出现漏检错检的可能,有效提高驾驶安全性和可靠性。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述 中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的一种获取点云数据的场景示意图;
图1B是本申请实施例提供的一种车辆和车载雷达的装配示意图;
图2是本申请实施例提供的一种障碍物识别方法的流程示意图;
图3是本申请实施例提供的一种被遮挡物被遮挡的场景示意图;
图4是本申请实施例提供的一种预设对照表的数值示意图;
图5是本申请实施例提供的另一种障碍物识别方法的流程示意图;
图6是本申请实施例提供的另一种障碍物识别方法的流程示意图;
图7是本申请实施例提供的一种深度学习网络的结构示意图;
图8是本申请实施例提供的一种障碍物识别装置的结构示意图;
图9是本申请实施例提供的另一种障碍物识别装置的结构示意图;
图10是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在本申请的描述中,需要理解的是,术语“第一”、“第二”等仅用于描述目的,而不能理解为指示或暗示相对重要性。在本申请的描述中,需要说明的是,除非另有明确的规定和限定,“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、***、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。对于本领域的普通技术人员而言,可以具体情况理解上述术语在本申请中的具体含义。此外,在本申请的描述中,除非另有说明,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
下面结合具体的实施例对本申请进行详细说明。
在一个实施例中,如图1A所示,为本申请实施例提供的一种获取点云数据的场景示意图,该场景示意图包括:设置有车载雷达的车辆101、具有遮挡 关系的树102A和行人102B、具有遮挡关系的花坛103A和卡车103B。其中,树102A是对于行人102B的遮挡物,行人102B是对于树102A的被遮挡物,花坛103A是对卡车103B的遮挡物,卡车103B是对于花坛103A的被遮挡物。可以理解的是,树102A、行人102B、花坛103A和卡车103B等皆为对于车辆101的障碍物。
在本申请实施例中,如图1B所示,为本申请实施例提供的车辆和车载雷达的装配示意图,该结构示意图包括:车辆101和车载雷达101A。
车辆101设置有车载雷达101A,可以理解的是,在本申请中,车辆101仅为激光雷达的承载平台。承载平台起承载作用和带动激光雷达进行运动,因此激光雷达会产生相应的线速度和角速度。承载平台可以是车辆、无人机或其他装置,本申请不作限制。
该车载雷达101A可以是毫米波雷达、激光雷达等雷达。例如,激光雷达可以为机械式激光雷达、固态激光雷达等。车载雷达101通过TOF测距方法、调频连续波方法等测距方法获取包括空间位置坐标、时间戳和回波强度等信息中的一个或多个信息的反射信号,将每个反射信号作为一个数据点,该数据点进一步包括对应的障碍物相对于车载雷达的距离信息、角度信息、径向速度等信息中的一个或多个信息。
由于遮挡物树102A对行人102B的遮挡,车辆101采集的针对行人102B的点云数据中存在部分质量较低的点。因此基于该点云数据对行人102B的轮廓边框进行还原时,针对被遮挡部分的轮廓边框还原存在一定的难度,即使采用点云补全技术(Point Cloud Completion,一种从缺失点云出发估计完整点云,从而获得更高质量的点云,达到修补的目的技术)辅助对被遮挡部分的还原,仍需要基于一定物体基础结构的先验信息的决策。
尤其利用深度学习网络对点云数据进行处理进而得到障碍物的识别还原方法时,采用何种决策辅助深度学习网络更快更准确地还原点云数据对应的障碍物是本领域的一个关键技术问题。
以及在某些情况下,基于使用场景的实际需求,更希望雷达***能更多地专注于预设类型的障碍物。例如,在高级驾驶辅助***(advanced driver assistance systems,ADAS)和无人驾驶技术的领域,更希望基于车载雷达实现的高级驾驶辅助***更多地专注于行人和车辆的特定类型的障碍物;在医疗领域,更希望探测器更多地专注于典型异物的特定类型的障碍物。
In one embodiment, as shown in FIG. 2, an obstacle identification method is proposed. The method can be implemented by a computer program and can run on an obstacle identification device based on the von Neumann architecture. The computer program may be integrated into an application or run as an independent tool application.
Specifically, the obstacle identification method includes:
S101: performing first obstacle detection on the point cloud data of an occluded object, and determining a first confidence level that the type of the occluded object is a target obstacle type.
The obstacle type can be understood as a preset category used to classify obstacles. For example, one classification divides obstacles into living and non-living bodies, and another divides them into pedestrians, vehicles, animals, plants, and so on; in another usage scenario, one classification divides obstacles into heart, stomach, liver, spleen, and so on. The present application does not limit the obstacle types in any way.
The confidence level can be understood as the magnitude of the probability that the true type of the obstacle is the target obstacle type. For example, the confidence level that a certain obstacle is of the pedestrian type may be 0.55.
In one embodiment, before step S101, the method further includes the step of dividing the collected point cloud data into the point cloud data of occluded objects and the point cloud data of occluding objects. Based on this division, step S101 is then carried out: performing first obstacle detection on the point cloud data of the occluded object, and determining the first confidence level that the type of the occluded object is the target obstacle type.
For example, the vehicle 101 collects point cloud data comprising 30291 points via the vehicle-mounted radar 101A, divides these points into the point cloud data of a plurality of obstacles through a clustering or classification algorithm, and then, based on the occlusion relationships among the obstacles resolved from the corresponding point cloud data, divides the obstacles into an occluded-object class and an occluding-object class. Further, the occluded-object class includes the point cloud data of the pedestrian 102B and of the truck 103B, and the occluding-object class includes the point cloud data of the tree 102A and of the flower bed 103A. For the specific working principle of this step, see steps S202 and S203 shown in FIG. 5 below.
In another embodiment, step S101 of the present application further includes: performing first obstacle detection on the point cloud data of an occluding object, and determining a first confidence level that the type of the occluding object is the target obstacle type. Specifically, the present application divides the large amount of collected point cloud data into the point cloud data corresponding to a plurality of obstacles through a clustering or classification algorithm, then performs first obstacle detection on each obstacle and determines the first confidence level that the type of the obstacle is the target obstacle type.
In addition, the present application further includes: dividing the point cloud data into the point cloud data of a plurality of obstacles, dividing the obstacles into an occluded-object class and an occluding-object class based on the occlusion relationships among the obstacles resolved from the corresponding point cloud data, and performing first obstacle detection separately on the point cloud data corresponding to each occluded object and each occluding object to obtain the first confidence level corresponding to each occluding object and each occluded object.
It can be understood that the method of performing first obstacle detection on the point cloud data of an obstacle, on the point cloud data corresponding to an occluded object, and on the point cloud data corresponding to an occluding object is the same. The following example of preliminary obstacle detection on the point cloud data corresponding to an occluded object applies equally to first obstacle detection on the other kinds of point cloud data.
In the embodiment of the present application, a deep learning network is used to perform preliminary obstacle detection on the point cloud data corresponding to the occluded object, obtaining the first confidence level that the type of the occluded object is the target obstacle type. The deep learning network may be a learning network built on a lightweight deep learning framework, for example the SqueezeSeg lightweight deep learning network. Taking the SqueezeSeg network as an example, the working principle of the first obstacle detection in the present application is as follows:
The DBSCAN algorithm is used to obtain the point cloud data corresponding to one occluded object. The three-dimensional (x, y, z) coordinates of each point in the point cloud data are obtained, and the (x, y, z) coordinates of each point are converted into image pixel coordinates (i, j), with the conversion equations: i = (arcsin(z/range)·57.3 + 16)/0.33; j = arctan(x/y)·57.3/0.2 for y > 0, and j = (90 − arctan(x/y)·57.3)/0.2 for y < 0. Using the image pixel coordinates of each point, all the points are arranged into an image of 64×512 pixels; to facilitate computation by the SqueezeSeg network, the 64×512 image is converted into a 1×32768 arrangement, which is input into the deep learning network. The SqueezeSeg network computes over each input point and outputs a label value for the occluded object corresponding to the point cloud data; the label value includes the target obstacle type and the confidence level that the type of the occluded object is that target obstacle type. For example, based on the SqueezeSeg network, the confidence level that one occluded object is of the small trash can type is 0.87, and the confidence level that another occluded object is of the pedestrian type is 0.48.
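As an illustration, the coordinate conversion and arrangement described above can be sketched as follows. This is a minimal NumPy sketch; the helper name squeeze_seg_input and the choice of per-point range as the pixel value are our assumptions, not taken from the application.

```python
import numpy as np

def squeeze_seg_input(points):
    # points: (N, 3) array of (x, y, z) coordinates for one occluded object
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)                   # per-point range
    y_safe = np.where(y == 0, 1e-9, y)                     # avoid division by zero
    i = (np.arcsin(z / rng) * 57.3 + 16) / 0.33            # row index (vertical angle)
    j = np.where(y > 0,
                 np.arctan(x / y_safe) * 57.3 / 0.2,       # column index for y > 0
                 (90 - np.arctan(x / y_safe) * 57.3) / 0.2)  # column index for y < 0
    i, j = i.astype(int), j.astype(int)
    img = np.zeros((64, 512), dtype=np.float32)
    keep = (i >= 0) & (i < 64) & (j >= 0) & (j < 512)      # drop out-of-grid points
    img[i[keep], j[keep]] = rng[keep]                      # store range as pixel value
    return img.reshape(1, -1)                              # 1 x 32768 network input
```

Here 57.3 converts radians to degrees, and 0.33 and 0.2 are the vertical and horizontal angular resolutions implied by the 64×512 grid.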
The present application also includes methods of performing preliminary obstacle calculation on the occluded object based on other deep learning networks to obtain the first confidence level, and places no restriction on the type or structure of the deep learning network or on the process of the preliminary obstacle calculation.
S102: performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining the occlusion rate of the occluded object.
The occlusion rate can be understood as the degree to which the occluded object is blocked, for example the percentage of the occluded object's total area that is blocked, or the importance of the blocked part of the occluded object.
Occlusion calculation is performed on the point cloud data corresponding to the occluded object to determine its occlusion rate. The occlusion calculation may be carried out by acquiring initial data rich in semantic information through various sensors mounted on the vehicle 101, so that the vehicle 101 restores the initial data into a three-dimensional image by computer vision methods and then performs the occlusion calculation on the occluded object. For example, the sensors include lidar sensors and vision sensors; the vision sensors include depth cameras, monocular cameras, binocular cameras, RGB-D cameras, and the like, and the lidar sensors include mechanical lidar, semi-solid-state lidar, solid-state lidar, and the like. For example, the depth camera may be a structured-light depth camera, a TOF depth camera, or a binocular stereo vision camera. The processor obtains, through the depth camera, a three-dimensional image including the depth distance information between the occluded object and the occluding object. In another embodiment, the present application restores a three-dimensional image including the depth distance information between the occluded object and the occluding object from the point cloud data collected by the lidar sensor.
As shown in FIG. 3, a schematic diagram of a scene in which an occluded object is blocked, provided by an embodiment of the present application, includes: a passer-by 301, a truck 302, and a small dog 303. The passer-by 301, the truck 302, and the small dog 303 are all obstacles; the passer-by 301 occludes the truck 302, and the truck 302 occludes the small dog 303. The blocked area of the truck 302 is a small share of its total area and the blocked part is not very important, whereas the blocked area of the small dog 303 is a large share of its total area and the blocked part is important. For example, the occlusion rate of the truck 302 is 15%, and that of the small dog 303 is 67%.
S103: determining, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type.
Because the point cloud data of the occluded object includes low-quality points caused by occlusion, the processor connected to the lidar calculates, on the basis of this point cloud data, the first confidence level that the type of the occluded object is the target obstacle type, and then compensates the first confidence level according to the occlusion rate of the occluded object to obtain the second confidence level.
The occlusion rate of the occluded object is negatively correlated with the compensation applied to the first confidence level; in other words, the lower the occlusion rate, the greater the degree of compensation. For example, in one embodiment, the second confidence level is obtained as follows: a compensation coefficient is derived from the occlusion rate, and the first confidence level is multiplied by the compensation coefficient to give the second confidence level. As shown in FIG. 3, the processor obtains from the point cloud data of the truck 302 a first confidence level of 0.48 that its type is the large vehicle type; the occlusion rate of the truck 302 is 15%, so the compensation coefficient for the first confidence level is 1.4, yielding a second confidence level of about 0.67. The processor obtains from the point cloud data of the small dog 303 a first confidence level of 0.3 that its type is the small animal type; the occlusion rate of the small dog 303 is 67%, so the compensation coefficient is 1.1, yielding a second confidence level of 0.33.
In another embodiment, the compensation for the first confidence level is obtained from the occlusion rate by means of a preset look-up table. As shown in FIG. 4, a schematic diagram of the values of such a preset look-up table is provided by an embodiment of the present application; the table contains occlusion rates of occluded objects and the corresponding compensation values, and can be set in the processor by the user in advance. In this embodiment, the second confidence level is obtained as follows: the compensation value corresponding to the occlusion rate is looked up in the preset table, and the compensation value is added to the first confidence level to give the second confidence level. It can be understood that the preset table shown in FIG. 4 is merely an example, and the present application places no restriction on its form or specific values.
In another embodiment, the compensation for the first confidence level is obtained from the occlusion rate through a compensation model, which is trained to convergence on a training set comprising occlusion rates and the compensation value corresponding to each occlusion rate. Specifically, the compensation value corresponding to the occlusion rate is obtained from the compensation model, and the second confidence level is obtained from the compensation value and the first confidence level.
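For illustration, the first two compensation strategies described above can be sketched as follows. This is a minimal sketch: the coefficient thresholds, table entries, and function names are our assumptions chosen to be consistent with the worked example, and the trained compensation model variant is omitted.

```python
def second_confidence_multiplier(first_conf, occlusion_rate):
    # Negative correlation: a lower occlusion rate earns a larger multiplier.
    factor = 1.4 if occlusion_rate < 0.3 else 1.1      # assumed coefficients
    return min(first_conf * factor, 1.0)               # e.g. 0.48 * 1.4 = 0.672

# Preset look-up table in the spirit of FIG. 4: each entry is
# (upper bound of the occlusion-rate band, additive compensation value).
# The values here are placeholders, not the figure's actual numbers.
PRESET_TABLE = [(0.2, 0.20), (0.5, 0.10), (0.8, 0.05), (1.0, 0.0)]

def second_confidence_table(first_conf, occlusion_rate):
    for upper_bound, compensation in PRESET_TABLE:
        if occlusion_rate <= upper_bound:
            return min(first_conf + compensation, 1.0)
    return first_conf
```

With these assumed values, second_confidence_multiplier(0.48, 0.15) reproduces the truck example above.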
S104: in a case where the second confidence level is greater than a preset confidence threshold, determining that the type of the occluded object is the target obstacle type.
The second confidence level is obtained by compensating the first confidence level with the occlusion rate of the occluded object; when the second confidence level is greater than the preset confidence threshold, the type of the occluded object is determined to be the target obstacle type. For example, with a preset confidence threshold of 0.5, the processor obtains from the point cloud data of the truck 302 a first confidence level of 0.48 that its type is the large vehicle type; the occlusion rate of the truck 302 is 15%, a second confidence level of 0.6 is obtained on the basis of the occlusion rate, and the type of the truck 302 is determined to be the large vehicle type.
The present application solves the technical problem that, in some cases, based on the actual requirements of the usage scenario, it is preferable for the radar system to focus more on obstacles of preset types. For example, in the field of advanced driver assistance systems (ADAS) and autonomous driving, it is preferable for an ADAS implemented on the basis of vehicle-mounted radar to focus more on specific obstacle types such as pedestrians and vehicles; in the medical field, it is preferable for a detector to focus more on specific obstacle types such as typical foreign bodies.
In the present application, after the first confidence level that the type of the occluded object is the target obstacle type has been determined, the second confidence level is obtained from the occlusion rate of the occluded object, and whether the type of the occluded object is the target obstacle type is judged on the basis of the second confidence level. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
In one embodiment, as shown in FIG. 5, another obstacle identification method is proposed. The method can be implemented by a computer program and can run on an obstacle identification device based on the von Neumann architecture. The computer program may be integrated into an application or run as an independent tool application.
Specifically, the obstacle identification method includes:
S201: performing first obstacle detection on the point cloud data of the occluded object, and determining a first confidence level that the type of the occluded object is the target obstacle type.
Specifically, step S201 is identical to step S101 and is not repeated here.
S202: extracting feature values of the collected point cloud data, and distinguishing, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluding object included in the point cloud data.
In the embodiment of the present application, a clustering algorithm divides the point cloud data into the point cloud data corresponding to each of a plurality of obstacles, and the feature values of the point cloud data are then used to divide the obstacles into occluded objects and occluding objects, thereby dividing the point cloud data into that of occluding objects and that of occluded objects. For example, consider the problem that a large obstacle may exhibit several point cloud centroids because it is partially occluded or made of different materials at different positions, and so cannot be clustered into a single obstacle: for a moving rigid obstacle, the radial velocities at its two ends may differ, but its overall moving velocity is consistent; on this principle, points with small radial velocity differences are clustered into the point cloud data of one obstacle, as in the sketch below. The obstacles are then divided into occluding and occluded objects, and the point cloud data is correspondingly divided into the point cloud data of occluding objects and that of occluded objects.
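One plausible realisation of this radial-velocity-aware grouping is sketched below. Folding the radial velocity into the clustering features and the specific eps, min_samples, and v_weight values are our assumptions; DBSCAN itself is named in the next paragraph as one usable algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_obstacles(xyz, radial_velocity, v_weight=5.0):
    """Group points into per-obstacle clusters.

    Scaling the radial velocity by v_weight lets the points of one rigid
    body with a consistent overall motion merge into a single cluster even
    when partial occlusion splits its spatial centroids.
    """
    features = np.hstack([xyz, v_weight * radial_velocity[:, None]])
    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(features)
    return labels  # one obstacle id per point; -1 marks noise points
```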
In another embodiment, the present application also includes other algorithms and implementations for separating the point cloud data corresponding to individual obstacles, such as the DBSCAN algorithm, deep learning networks, and so on.
In one embodiment, the present application obtains the feature values of the point cloud data based on cameras and radar, thereby obtaining the point cloud data corresponding to the occluding objects and the occluded objects. It can be understood that the present application also includes other methods of determining the occluding and occluded objects among a plurality of obstacles, for example RGB-D-based three-dimensional obstacle detection.
S203: performing occlusion determination calculation on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluding object, and determining the occlusion rate of the occluded object.
In other words, occlusion determination calculation is performed on the occluded object according to the point cloud data corresponding to the occluding object and the point cloud data corresponding to the occluded object, and the occlusion rate of the occluded object is determined.
One possible method of obtaining the occlusion rate is as follows: after the obstacle corresponding to the point cloud data is judged to be an occluded object from the feature values, a two-dimensional image of the occluded object expressed in matrix units is obtained from the point cloud data; an instance segmentation algorithm extracts the mask of the blocked region of the occluded object, in the same matrix-unit format as the two-dimensional image; the complete mask of the occluded object is computed by a morphological closing operation; and the occlusion rate is then calculated from the blocked-region mask and the complete mask. For example, the occlusion rate of a truck is 25%.
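A minimal sketch of this mask-based occlusion-rate calculation is given below. The structuring-element size is an assumed parameter, and the input is taken to be a boolean matrix on the same grid as the object's two-dimensional image.

```python
import numpy as np
from scipy import ndimage

def occlusion_rate(visible_mask):
    """visible_mask: 2-D boolean mask of the visible part of the occluded
    object. The complete mask is recovered by a morphological closing, and
    the occlusion rate is the blocked share of the complete mask."""
    full_mask = ndimage.binary_closing(visible_mask, structure=np.ones((15, 15)))
    occluded = full_mask & ~visible_mask               # the blocked region
    return occluded.sum() / max(full_mask.sum(), 1)    # e.g. 0.25 for the truck
```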
S204: determining a corresponding compensation score according to the occlusion rate.
In one embodiment, the compensation for the first confidence level is obtained from the occlusion rate by means of a preset look-up table containing occlusion rates of occluded objects and the corresponding compensations; it can be set in the processor by the user in advance, so that the processor compensates the first confidence level on the basis of the table. In another embodiment, the compensation is obtained through a compensation model trained to convergence on a training set containing occlusion rates.
In another embodiment, the compensation score is determined as follows: a corresponding compensation coefficient f is determined according to the target obstacle type corresponding to the occluded object; a corresponding base compensation score x is obtained according to the occlusion rate; and the corresponding compensation score is determined from the compensation coefficient and the base compensation score as f×x. For example, the target obstacle type of the occluded object is the large vehicle type, whose compensation coefficient is 0.8; the base compensation score of the truck obtained from its occlusion rate is 0.2; based on the coefficient and the base score, the final compensation score is 0.16.
In the above embodiment, the method of determining the compensation coefficient from the corresponding target obstacle type may be to judge whether the target obstacle type is one that is easily occluded: if so, a higher coefficient is set, so that, for example, the coefficients of the large vehicle and large building types are higher than that of the small trash can type. It may also be to judge whether the target obstacle type is one that deserves more attention in the actual application scenario: if so, a higher coefficient is set, so that, for example, the coefficient of the pedestrian type is higher than that of the long-pole roadblock type.
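A sketch of this S204 computation under the numbers of the example above follows; the coefficient table and the base-score rule are our illustrative assumptions.

```python
# Compensation coefficient f per target obstacle type; easily occluded or
# attention-worthy types get higher values, as discussed above.
TYPE_COEFFICIENTS = {"large_vehicle": 0.8, "large_building": 0.8,
                     "pedestrian": 1.2, "small_trash_can": 0.4}

def compensation_score(target_type, occlusion_rate):
    f = TYPE_COEFFICIENTS.get(target_type, 1.0)    # assumed default coefficient
    x = 0.2 if occlusion_rate < 0.3 else 0.05      # base compensation score
    return f * x                                   # e.g. 0.8 * 0.2 = 0.16
```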
S205: performing compensation calculation on the first confidence level according to the compensation score, and determining the second confidence level.
For example, the processor obtains from the point cloud data of the truck a first confidence level of 0.48 that the type of the truck is the large vehicle type; the occlusion rate of the truck is 15%, so a compensation score of 0.16 is obtained, and the final second confidence level is 0.64. The processor obtains from the point cloud data of the small dog a first confidence level of 0.3 that its type is the small animal type; the occlusion rate of the small dog is 67%, so a compensation score of 0.05 is obtained, and the final second confidence level is 0.35.
S206: in a case where the second confidence level is greater than the preset confidence threshold, determining that the type of the occluded object is the target obstacle type.
The second confidence level is obtained by compensating the first confidence level with the occlusion rate of the occluded object; when the second confidence level is greater than the preset confidence threshold, the type of the occluded object is determined to be the target obstacle type. For the details of step S206, see step S104 shown in FIG. 2, which are not repeated here.
In one embodiment, the present application further includes designating one or more obstacle types among the preset marked obstacle types as obstacle types of interest; when the target obstacle type corresponding to an obstacle is determined to be an obstacle type of interest, the obstacle is marked in the output three-dimensional image rendered from all the point cloud data, for example with an annotation or a color. For instance, when the obstacle type of interest is the pedestrian type and the processor determines that the target obstacle types of the obstacles include the pedestrian type, the bounding box of that obstacle is marked with a red border in the three-dimensional image. This embodiment further improves the practicality and reliability, in actual use, of a radar implemented on the basis of this obstacle identification method.
In the present application, after the first confidence level that the type of the occluded object is the target obstacle type has been determined, the second confidence level is obtained from the occlusion rate of the occluded object, and whether the type of the occluded object is the target obstacle type is judged on the basis of the second confidence level. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
In one embodiment, as shown in FIG. 6, another obstacle identification method is proposed. The method can be implemented by a computer program and can run on an obstacle identification device based on the von Neumann architecture. The computer program may be integrated into an application or run as an independent tool application.
Specifically, the obstacle identification method includes:
S301: performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining the occlusion rate of the occluded object.
Specifically, the distance values and angle values between each point and the other points in the collected point cloud data are obtained and taken as the feature values of the point cloud data; according to the feature values, the point cloud data corresponding to the occluded object is distinguished from that corresponding to the occluding object; occlusion determination calculation is then performed on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluding object, and the occlusion rate of the occluded object is determined.
Specifically, step S301 is identical to steps S202 and S203 and is not repeated here.
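The pairwise distance and angle feature values described in S301 can be computed along the following lines. This is a minimal sketch; returning the full pairwise matrices is our choice, and the application does not prescribe a particular layout for the feature values.

```python
import numpy as np

def pairwise_features(xyz):
    """Distance and angle values between each point and every other point,
    used as the feature values of the point cloud data."""
    diff = xyz[:, None, :] - xyz[None, :, :]           # (N, N, 3) offsets
    distances = np.linalg.norm(diff, axis=-1)          # (N, N) distance values
    angles = np.arctan2(diff[..., 1], diff[..., 0])    # (N, N) azimuth angle values
    return distances, angles
```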
S302: inputting the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determining the target obstacle type corresponding to the occluded object.
Because the occluding object blocks the occluded object, the point cloud data of the occluded object contains some points of relatively low quality. Therefore, when the contour bounding box of the occluded object is restored from this point cloud data, restoring the contour of the occluded part is difficult to a certain extent; even when point cloud completion (a technique that estimates a complete point cloud from a missing one so as to obtain a higher-quality point cloud and achieve repair) is used to assist the restoration of the occluded part, decisions based on prior information about the basic structure of the object are still required. In particular, when a deep learning network is used to process the point cloud data to identify and restore obstacles, which kind of decision should be used to help the deep learning network restore the obstacle corresponding to the point cloud data faster and more accurately is a key technical problem in this field.
In the embodiment of the present application, the point cloud data corresponding to the occluded object and the corresponding occlusion rate are input into the obstacle recognition model, and the target obstacle type corresponding to the occluded object is determined. The obstacle recognition model takes a deep learning network as its framework and is trained from a plurality of known obstacle types together with the corresponding point cloud data and corresponding occlusion rates.
As shown in FIG. 7, a schematic structural diagram of a deep learning network provided by an embodiment of the present application is a DNN-HMM model in which an error back-propagation algorithm is introduced on the basis of an existing neural network model for optimization, improving the recognition accuracy of the deep learning network model.
The deep learning network consists of an input layer, hidden layers, and an output layer. The input layer usually comprises a plurality of input units, each of which computes, from the occlusion rate and the point cloud data input into the deep learning network, the output value passed to the hidden units of the bottom hidden layer. After the occlusion rate and the point cloud data are input into an input unit, the unit uses its own weights to compute the output value sent to the bottom hidden layer.
There are usually several hidden layers, each comprising a plurality of hidden units. A hidden unit receives input values from the hidden units of the layer below, computes a weighted sum of those inputs using the weights of its own layer, and passes the weighted sum as the output value to the hidden units of the layer above.
The output layer comprises a plurality of output units. An output unit receives input values from the hidden units of the top hidden layer, computes a weighted sum of them using the weights of its own layer, and calculates the actual output value from the result; the error between the expected output value and the actual output value is then back-propagated from the output layer, and the connection weights and thresholds of each layer are adjusted along the output path.
Specifically, in this embodiment an initial model is created with a DNN-HMM model incorporating the error back-propagation algorithm. After the occlusion rate and point cloud data corresponding to the occluded object are extracted, they are input into the deep learning network model. The training process of the model usually consists of forward propagation and back propagation. In forward propagation, the occlusion rate and point cloud data corresponding to the occluded object, input by the user terminal, pass from the input layer of the model through the transfer functions (also called activation or conversion functions) of the hidden-layer neurons (also called nodes) to the output layer, with the neuron states of each layer affecting those of the next layer. The actual output value, namely the confidence level of the target obstacle type corresponding to the occluded object, is calculated at the output layer; the expected error between the actual and expected output values is computed, and the parameters of the model, including the weights and thresholds of each layer, are adjusted on the basis of this error. After training is completed, the obstacle recognition model is generated.
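A minimal sketch of such a model and one training step is given below, assuming PyTorch. The layer sizes, the optimiser, and the way the occlusion rate is concatenated to the flattened 1×32768 input are our assumptions, and the DNN-HMM specifics are reduced to a plain feed-forward network for brevity.

```python
import torch
import torch.nn as nn

class ObstacleRecognitionModel(nn.Module):
    def __init__(self, n_points=32768, n_types=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points + 1, 256), nn.ReLU(),   # +1 for the occlusion rate
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_types),                    # one logit per obstacle type
        )

    def forward(self, cloud, occlusion_rate):
        # cloud: (B, 32768) flattened image; occlusion_rate: (B,) scalars
        x = torch.cat([cloud, occlusion_rate.unsqueeze(-1)], dim=-1)
        return self.net(x)

model = ObstacleRecognitionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(cloud, occlusion_rate, label):
    # Forward pass, expected-vs-actual error, then error back-propagation
    # adjusting the weights of each layer, as described above.
    optimizer.zero_grad()
    logits = model(cloud, occlusion_rate)
    loss = loss_fn(logits, label)
    loss.backward()
    optimizer.step()
    return loss.item()
```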
When making a decision based on the point cloud data, that is, when outputting the confidence level of the target obstacle type corresponding to the occluded object, the obstacle recognition model takes the occlusion rate as one of its decision conditions, which improves the accuracy of obstacle recognition.
It can be understood that the obstacle recognition model can equally be applied to identifying the target obstacle type of an occluding object: when the occlusion rate of an obstacle is 0, the model can likewise output the confidence level of the target obstacle type corresponding to the occluding object.
In another embodiment, the obstacle recognition model may also output the two-dimensional or three-dimensional bounding box of an obstacle based on its target obstacle type.
In one embodiment, the present application further includes designating one or more obstacle types among the preset marked obstacle types as obstacle types of interest; when the obstacle recognition model determines that the target obstacle type corresponding to an obstacle is an obstacle type of interest, the obstacle is marked in the three-dimensional image output by the model and rendered from all the point cloud data, for example with an annotation or a color. For instance, when the obstacle type of interest is the pedestrian type and the obstacle recognition model determines that the target obstacle types of the obstacles include the pedestrian type, the bounding box of that obstacle is marked with a red border in the three-dimensional image. This embodiment further improves the practicality and reliability, in actual use, of a radar implemented on the basis of this obstacle identification method.
In the present application, the occlusion rate and point cloud data of the occluded object are input into the trained obstacle recognition model, and the target obstacle type corresponding to the occluded object is obtained from the model. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
The following are device embodiments of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the device embodiments, refer to the method embodiments of the present application.
Referring to FIG. 8, a schematic structural diagram of an obstacle identification device provided by an exemplary embodiment of the present application is shown. The obstacle identification device can be implemented as all or part of a device by software, hardware, or a combination of the two, and comprises a first object recognition module 801, an occlusion calculation module 802, a second object recognition module 803, and a third object recognition module 804.
The first object recognition module 801 is configured to perform first obstacle detection on the point cloud data corresponding to the occluded object, and determine a first confidence level that the type of the occluded object is the target obstacle type;
the occlusion calculation module 802 is configured to perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object;
the second object recognition module 803 is configured to determine, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
the third object recognition module 804 is configured to determine that the type of the occluded object is the target obstacle type in a case where the second confidence level is greater than a preset confidence threshold.
In a possible embodiment, the occlusion calculation module 802 comprises:
a feature extraction unit, configured to extract the feature values of the collected point cloud data and distinguish, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluding object included in the point cloud data;
an occlusion rate unit, configured to perform occlusion determination calculation on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluding object and determine the occlusion rate of the occluded object.
In a possible embodiment, the feature extraction unit comprises:
a distance-angle subunit, configured to obtain the distance values and angle values between each point and the other points in the collected point cloud data, the distance values and angle values serving as the feature values of the point cloud data.
In a possible embodiment, the second object recognition module 803 comprises:
a compensation unit, configured to determine a corresponding compensation score according to the occlusion rate;
a determination unit, configured to perform compensation calculation on the first confidence level according to the compensation score and determine the second confidence level.
In a possible embodiment, the compensation unit comprises:
a compensation coefficient subunit, configured to determine a corresponding compensation coefficient according to the target obstacle type corresponding to the occluded object;
a base compensation subunit, configured to obtain a corresponding base compensation score according to the occlusion rate;
a compensation score subunit, configured to determine the corresponding compensation score based on the compensation coefficient and the base compensation score.
In the present application, after the first confidence level that the type of the occluded object is the target obstacle type has been determined, the second confidence level is obtained from the occlusion rate of the occluded object, and whether the type of the occluded object is the target obstacle type is judged on the basis of the second confidence level. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
As shown in FIG. 9, a schematic structural diagram of an obstacle identification device provided by an exemplary embodiment of the present application is shown. The obstacle identification device can be implemented as all or part of a device by software, hardware, or a combination of the two, and comprises a calculation module 901 and an object recognition module 902.
The calculation module 901 is configured to perform occlusion calculation on the point cloud data corresponding to the occluded object, and determine the occlusion rate of the occluded object;
the object recognition module 902 is configured to input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determine the target obstacle type corresponding to the occluded object, wherein the obstacle recognition model is trained from a plurality of known obstacle types together with the corresponding point cloud data and corresponding occlusion rates.
In the present application, the occlusion rate and point cloud data of the occluded object are input into the trained obstacle recognition model, and the target obstacle type corresponding to the occluded object is obtained from the model. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
It should be noted that when the obstacle identification device provided in the above embodiments executes the obstacle identification method, the division into the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the obstacle identification device provided in the above embodiments and the obstacle identification method embodiments belong to the same concept; for details of the implementation process, see the method embodiments, which are not repeated here.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
An embodiment of the present application further provides a computer storage medium that can store a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the obstacle identification method of the embodiments shown in FIG. 1 to FIG. 7 above; for the specific execution process, see the specific descriptions of the embodiments shown in FIG. 1 to FIG. 7, which are not repeated here.
The present application further provides a computer program product storing at least one instruction, the at least one instruction being loaded by the processor to execute the obstacle identification method of the embodiments shown in FIG. 1 to FIG. 7 above; for the specific execution process, see the specific descriptions of the embodiments shown in FIG. 1 to FIG. 7, which are not repeated here.
Referring to FIG. 10, a schematic structural diagram of an electronic device provided by an embodiment of the present application is shown. As shown in FIG. 10, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
The communication bus 1002 is used to realize connection and communication among these components.
The user interface 1003 may include a display and a camera; optionally, the user interface 1003 may further include standard wired and wireless interfaces.
The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface).
The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the electronic device 1000 using various interfaces and lines, and executes the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1001 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed on the display; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 1001 but implemented by a separate chip.
The memory 1005 may include random access memory (RAM) or read-only memory. Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as touch, audio playback, and image playback functions), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments and the like. Optionally, the memory 1005 may also be at least one storage device located away from the aforementioned processor 1001. As shown in FIG. 10, as a computer storage medium, the memory 1005 may include an operating system, a network communication module, a user interface module, and an obstacle identification application program.
In the electronic device 1000 shown in FIG. 10, the user interface 1003 is mainly used to provide the user with an input interface and acquire the data input by the user, while the processor 1001 may be used to call the obstacle identification application program stored in the memory 1005 and specifically perform the following operations:
performing first obstacle detection on the point cloud data corresponding to the occluded object, and determining a first confidence level that the type of the occluded object is the target obstacle type;
performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining the occlusion rate of the occluded object;
determining, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
in a case where the second confidence level is greater than a preset confidence threshold, determining that the type of the occluded object is the target obstacle type.
In a possible embodiment, when performing the occlusion calculation on the point cloud data corresponding to the occluded object and determining the occlusion rate of the occluded object, the processor 1001 specifically performs the following operations:
extracting the feature values of the collected point cloud data, and distinguishing, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluding object included in the point cloud data;
performing occlusion determination calculation on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluding object, and determining the occlusion rate of the occluded object.
In a possible embodiment, when extracting the feature values of the collected point cloud data, the processor 1001 specifically performs the following operation:
obtaining the distance values and angle values between each point and the other points in the collected point cloud data, the distance values and angle values serving as the feature values of the point cloud data.
In a possible embodiment, when determining the second confidence level according to the occlusion rate of the occluded object and the first confidence level, the processor 1001 specifically performs the following operations:
determining a corresponding compensation score according to the occlusion rate;
performing compensation calculation on the first confidence level according to the compensation score, and determining the second confidence level.
In a possible embodiment, when obtaining the corresponding compensation score according to the occlusion rate, the processor 1001 specifically performs the following operations:
determining a corresponding compensation coefficient according to the target obstacle type corresponding to the occluded object;
obtaining a corresponding base compensation score according to the value of the occlusion rate;
determining the corresponding compensation score based on the compensation coefficient and the base compensation score.
In another embodiment of the electronic device 1000 shown in FIG. 10, the user interface 1003 is mainly used to provide the user with an input interface and acquire the data input by the user, while the processor 1001 may be used to call another obstacle identification application program stored in the memory 1005 and specifically perform the following operations:
performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining the occlusion rate of the occluded object;
inputting the point cloud data corresponding to the occluded object and the corresponding occlusion rate into the obstacle recognition model, and determining the target obstacle type corresponding to the occluded object; wherein the obstacle recognition model is trained from a plurality of known obstacle types together with the corresponding point cloud data and corresponding occlusion rates.
In the present application, after the first confidence level that the type of the occluded object is the target obstacle type has been determined, the second confidence level is obtained from the occlusion rate of the occluded object, so that whether the type of the occluded object is the target obstacle type is judged on the basis of the second confidence level; the present application also inputs the occlusion rate and point cloud data of the occluded object into the trained obstacle recognition model and obtains the target obstacle type corresponding to the occluded object from that model. By taking into account whether an obstacle is occluded and combining influencing factors such as the occlusion rate with the obstacle's point cloud data, the confidence level that the obstacle type is the target obstacle type is derived, which enhances the accuracy of obstacle identification, reduces the possibility of missed or erroneous detections, and effectively improves driving safety and reliability.
The present application further provides an obstacle identification system comprising the electronic device shown in FIG. 10 and a radar sensor connected to the electronic device; the radar sensor is used to collect radar data corresponding to a target scene. For the structure of the electronic device, see the specific description of the embodiment shown in FIG. 10, which is not repeated here.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
What is disclosed above is merely preferred embodiments of the present application and certainly cannot be used to limit the scope of the claims of the present application; therefore, equivalent changes made according to the claims of the present application still fall within the scope covered by the present application.

Claims (11)

  1. An obstacle identification method, characterized in that the method comprises:
    performing first obstacle detection on point cloud data corresponding to an occluded object, and determining a first confidence level that the type of the occluded object is a target obstacle type;
    performing occlusion calculation on the point cloud data corresponding to the occluded object, and determining the occlusion rate of the occluded object;
    determining, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
    in a case where the second confidence level is greater than a preset confidence threshold, determining that the type of the occluded object is the target obstacle type.
  2. The method according to claim 1, characterized in that the performing occlusion calculation on the point cloud data corresponding to the occluded object and determining the occlusion rate of the occluded object comprises:
    extracting feature values of the collected point cloud data, and distinguishing, according to the feature values, the point cloud data corresponding to the occluded object from the point cloud data corresponding to the occluding object included in the point cloud data;
    performing occlusion determination calculation on the point cloud data corresponding to the occluded object according to the point cloud data corresponding to the occluding object, and determining the occlusion rate of the occluded object.
  3. The method according to claim 2, characterized in that the extracting feature values of the collected point cloud data comprises:
    obtaining distance values and angle values between each point and the other points in the collected point cloud data, the distance values and angle values serving as the feature values of the point cloud data.
  4. The method according to claim 1, characterized in that the determining a second confidence level according to the occlusion rate of the occluded object and the first confidence level comprises:
    determining a corresponding compensation score according to the occlusion rate;
    performing compensation calculation on the first confidence level according to the compensation score, and determining the second confidence level.
  5. The method according to claim 4, characterized in that the obtaining a corresponding compensation score according to the occlusion rate comprises:
    determining a corresponding compensation coefficient according to the target obstacle type corresponding to the occluded object;
    obtaining a corresponding base compensation score according to the occlusion rate;
    determining the corresponding compensation score based on the compensation coefficient and the base compensation score.
  6. An obstacle identification method, characterized by comprising:
    performing occlusion calculation on point cloud data corresponding to an occluded object, and determining the occlusion rate of the occluded object;
    inputting the point cloud data corresponding to the occluded object and the corresponding occlusion rate into an obstacle recognition model, and determining the target obstacle type corresponding to the occluded object; wherein the obstacle recognition model is trained from a plurality of known obstacle types together with the corresponding point cloud data and corresponding occlusion rates.
  7. An obstacle identification device, characterized in that the device comprises:
    a first object recognition module, configured to perform first obstacle detection on point cloud data corresponding to an occluded object and determine a first confidence level that the type of the occluded object is a target obstacle type;
    an occlusion calculation module, configured to perform occlusion calculation on the point cloud data corresponding to the occluded object and determine the occlusion rate of the occluded object;
    a second object recognition module, configured to determine, according to the occlusion rate of the occluded object and the first confidence level, a second confidence level that the type of the occluded object is the target obstacle type;
    a third object recognition module, configured to determine that the type of the occluded object is the target obstacle type in a case where the second confidence level is greater than a preset confidence threshold.
  8. An obstacle identification device, characterized in that the device comprises:
    a calculation module, configured to perform occlusion calculation on point cloud data corresponding to an occluded object and determine the occlusion rate of the occluded object;
    an object recognition module, configured to input the point cloud data corresponding to the occluded object and the corresponding occlusion rate into an obstacle recognition model and determine the target obstacle type corresponding to the occluded object; wherein the obstacle recognition model is trained from a plurality of known obstacle types together with the corresponding point cloud data and corresponding occlusion rates.
  9. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method steps of any one of claims 1 to 6.
  10. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to execute the method steps of any one of claims 1 to 6.
  11. An obstacle identification system, characterized by comprising the electronic device of claim 10 and a radar sensor connected to the electronic device;
    the radar sensor is used to collect radar data corresponding to a target scene.
PCT/CN2021/125748 2021-10-22 2021-10-22 障碍物识别方法、装置、存储介质及电子设备 WO2023065312A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/125748 WO2023065312A1 (zh) 2021-10-22 2021-10-22 障碍物识别方法、装置、存储介质及电子设备
CN202180102319.3A CN118043864A (zh) 2021-10-22 2021-10-22 障碍物识别方法、装置、存储介质及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/125748 WO2023065312A1 (zh) 2021-10-22 2021-10-22 障碍物识别方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2023065312A1 true WO2023065312A1 (zh) 2023-04-27

Family

ID=86058730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125748 WO2023065312A1 (zh) 2021-10-22 2021-10-22 障碍物识别方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN118043864A (zh)
WO (1) WO2023065312A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170369051A1 (en) * 2016-06-28 2017-12-28 Toyota Motor Engineering & Manufacturing North America, Inc. Occluded obstacle classification for vehicles
CN110222764A (zh) * 2019-06-10 2019-09-10 中南民族大学 遮挡目标检测方法、系统、设备及存储介质
CN111291697A (zh) * 2020-02-19 2020-06-16 北京百度网讯科技有限公司 用于识别障碍物的方法和装置
CN113420682A (zh) * 2021-06-28 2021-09-21 阿波罗智联(北京)科技有限公司 车路协同中目标检测方法、装置和路侧设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116605212A (zh) * 2023-07-11 2023-08-18 北京集度科技有限公司 车辆控制方法、装置、计算机设备以及存储介质
CN116605212B (zh) * 2023-07-11 2023-10-20 北京集度科技有限公司 车辆控制方法、装置、计算机设备以及存储介质

Also Published As

Publication number Publication date
CN118043864A (zh) 2024-05-14


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE