CN115830555A - Target identification method based on radar point cloud, storage medium and equipment - Google Patents

Target identification method based on radar point cloud, storage medium and equipment

Info

Publication number
CN115830555A
Authority
CN
China
Prior art keywords
target
point cloud
dimensional
radar point
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211485304.2A
Other languages
Chinese (zh)
Inventor
田中山
王现中
杨昌群
吴小川
牛道东
张俊
梁珈铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongyi Zhilian Wuxi Industrial Automation Technology Co ltd
Harbin Institute of Technology
China Oil and Gas Pipeline Network Corp
Original Assignee
Zhongyi Zhilian Wuxi Industrial Automation Technology Co ltd
Harbin Institute of Technology
China Oil and Gas Pipeline Network Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongyi Zhilian Wuxi Industrial Automation Technology Co ltd, Harbin Institute of Technology, China Oil and Gas Pipeline Network Corp
Priority to CN202211485304.2A
Publication of CN115830555A
Legal status: Pending (current)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A target identification method based on radar point cloud, a storage medium, and a device, belonging to the technical fields of radar and target identification. The method aims to solve the problem that, in existing radar-point-cloud-based target identification methods, the quality of point cloud segmentation seriously affects identification accuracy. First, radar point cloud data of the environment to be identified is acquired and input into a point cloud target segmentation network, which performs target segmentation on the three-dimensional point cloud data to obtain the three-dimensional point cloud of the target to be identified. The three-dimensional point cloud corresponding to the target to be identified is then input into Occupancy Networks to obtain the target three-dimensional model corresponding to that target. Finally, the target three-dimensional model of the target to be identified is matched against the target three-dimensional models in a database, and the result corresponding to the successfully matched database model is taken as the recognition result. The method is used for radar point cloud target identification in assisted or automatic driving.

Description

Target identification method based on radar point cloud, storage medium and equipment
Technical Field
The invention belongs to the technical field of radar technology and target identification, and particularly relates to a target identification method based on radar point cloud, a storage medium and equipment.
Background
Environmental sensors commonly used in Advanced Driver Assistance Systems (ADAS) and unmanned driving systems include cameras, radars, and the like. Compared with a camera, radar is far less constrained by the operating environment: it is not affected by lighting and can acquire target information relatively accurately even in rain or snow.
When radar is used to detect and identify targets in automatic driving, the scene observed by the radar may contain objects that touch or overlap one another, such as a person sitting on a street-side bench, or a person standing behind a signpost as seen from the vehicle. The corresponding radar point clouds may therefore overlap or interfere with each other, so the radar point cloud must first be segmented, and target identification is then performed on the segmented point clouds.
One of the main approaches to target identification based on segmented radar point clouds is direct identification from the point cloud itself: after the segmented radar point cloud is obtained, target identification is performed directly on it with a neural network model or similar. Although this approach is convenient, the segmentation quality directly determines the recognition result, so its accuracy and applicability are severely limited.
One way to mitigate this problem is to improve segmentation accuracy, and many researchers have therefore focused on radar point cloud segmentation models. Research indeed shows that the more accurate the segmentation model, the better; in practice, however, it is impossible to guarantee that the segmented point cloud restores the real target one hundred percent, and segmentation error is inevitable. Some researchers therefore extract features from the segmented radar point cloud and then perform recognition based on those features. One way is to train the neural network model not on the raw point cloud but on the extracted features, as in the lidar target recognition method, device, and electronic device of application No. 202111591037.2. Naturally, the extracted features may also be fed to other classification models; for example, the road-scene target recognition method based on lidar point clouds of application No. 202210203259.0 inputs the extracted structural feature information into a trained support vector machine model to classify the target radar point cloud.
Such methods can avoid the influence of the segmentation quality to a certain extent and thereby improve recognition accuracy. Undeniably, however, if the radar point cloud segmentation is inaccurate, the features extracted from the point cloud are still affected, which indirectly degrades recognition accuracy.
In summary, the above methods share the problem that the radar point cloud segmentation quality seriously affects target recognition accuracy: the final accuracy depends, directly or indirectly, on the segmentation quality, and once the segmentation is poor the recognition accuracy drops severely.
Disclosure of Invention
The invention aims to solve the problem that, in existing radar-point-cloud-based target identification methods, the point cloud segmentation quality seriously affects target identification accuracy.
The target identification method based on radar point cloud comprises the following steps:
firstly, acquiring radar point cloud data of the environment to be identified, and inputting the radar point cloud data into a point cloud target segmentation network to perform target segmentation on the three-dimensional point cloud data, obtaining the three-dimensional point cloud of the target to be identified;
then inputting the three-dimensional point cloud corresponding to the target to be identified into Occupancy Networks to obtain the target three-dimensional model corresponding to the target to be identified; matching the target three-dimensional model of the target to be identified against the target three-dimensional models in a database, and taking the result corresponding to the successfully matched database model as the recognition result;
the Occupancy Networks are obtained by the following steps:
S1: acquiring radar point cloud data of an environment, inputting the radar point cloud data into a point cloud target segmentation network, and performing target segmentation on the three-dimensional point cloud data to obtain the three-dimensional point cloud of each target;
obtaining target three-dimensional models based on the segmented target three-dimensional point clouds, and constructing a target three-dimensional model database;
S2: training the Occupancy Networks (i.e., the occupancy network) with the target three-dimensional models to obtain the trained Occupancy Networks.
Further, the occupancy network is implemented as a fully connected neural network with 5 ResNet blocks.
Further, the process of training the Occupancy Networks with the target three-dimensional models comprises the following steps:
acquiring radar point cloud data of the environment while acquiring visible light images of the environment with a visible light image acquisition device arranged on the vehicle, and obtaining target visible light images based on the visible light images;
inputting the target visible light images and the target three-dimensional model corresponding to each target visible light image into the Occupancy Networks for training until the loss of the Occupancy Networks converges, obtaining the optimized pre-trained parameter model.
Further, when the target three-dimensional model of the target to be identified is matched against the target three-dimensional models in the database, if the matching fails, i.e., no corresponding target three-dimensional model is found in the database, target identification is performed based on the visible light image.
Further, the process of target identification based on the visible light image is realized with a neural network model.
Further, before target identification is performed based on the visible light image, the visible light image of the environment is acquired with the visible light image acquisition device arranged on the vehicle while the radar point cloud data of the environment is acquired.
Further, the visible light image acquisition device and the lidar arranged on the vehicle are mounted as an integrated unit.
A computer storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the radar point cloud based target identification method.
A target recognition device based on radar point cloud comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to realize the target recognition method based on radar point cloud.
Further, the device also comprises a laser radar arranged on the vehicle.
Beneficial effects:
The method determines the three-dimensional model corresponding to the segmented radar point cloud through Occupancy Networks, matches the obtained three-dimensional model against the three-dimensional models in the database, and finally completes the recognition. Even if the segmented point cloud is not very accurate, the corresponding three-dimensional model can still be obtained. Although that model is derived from the radar point cloud, the influence of segmentation errors is largely reduced: even with some segmentation error, the target recognition accuracy is not dictated by the point cloud segmentation quality. The invention therefore still achieves relatively good results even when the segmented point cloud is imperfect.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The first embodiment: this embodiment is described with reference to FIG. 1.
the embodiment is a target identification method based on radar point cloud, which comprises the following steps:
step 1, acquiring radar point cloud data of an environment by using a laser radar arranged on a vehicle; inputting radar point cloud data into a point cloud target segmentation network, and performing target segmentation on the three-dimensional point cloud data to obtain three-dimensional point clouds of all targets;
the invention has no special requirements on the point cloud target segmentation network, and only needs to use the existing point cloud target segmentation network, so that the point cloud target segmentation network is not explained excessively in the invention.
The target three-dimensional point clouds segmented by the point cloud target segmentation network are then modeled manually to obtain the target three-dimensional models, and the target three-dimensional model database is constructed.
Manual modeling of the target three-dimensional point cloud can be carried out with existing techniques, so the manual modeling process is not described in detail here.
In fact, segmentation could be skipped entirely and manual modeling performed directly on the raw data, but the workload and processing time would then be excessive. The adopted scheme therefore preserves the accuracy of subsequent processing while greatly improving efficiency.
Step 2: training the Occupancy Networks (the occupancy network) with the target three-dimensional models.
The occupancy network uses the existing Occupancy Networks for solving the three-dimensional target occupancy function, which approximate this 3D function with a neural network that assigns to each location P an occupancy probability between 0 and 1. Occupancy Networks are equivalent to a binary classification neural network and mainly attend to the decision boundary that implicitly represents the object surface. The three-dimensional target occupancy function is therefore:
$$f_\theta(P, x) \rightarrow [0, 1]$$
where $(P, x)$ is the input of the neural network used to compute the space occupancy probability, $P \in \mathbb{R}^3$ is a sample point, and $x \in X$ is the feature vector of the image to be recognized.
Since Occupancy Networks are prior art, the invention does not describe them at length, and focuses instead on the process of approximating this 3D function with a neural network:
The invention implements the occupancy network as a fully connected neural network with 5 ResNet blocks.
While the lidar mounted on the vehicle acquires radar point cloud data of the environment, a visible light image acquisition device (camera) mounted on the vehicle acquires visible light images of the environment. The visible light image acquisition device and the lidar are mounted on the vehicle as an integrated unit, ensuring that the field of view and angle of the visible light images collected in the forward direction (the vehicle's direction of travel) are as consistent as possible with the field of view of the lidar data collected in the forward direction, which facilitates training the Occupancy Networks. The visible light images are then manually annotated for later training.
The target visible light images and the target three-dimensional model corresponding to each target visible light image are input into the Occupancy Networks for training (an image and its corresponding model must be input together, so that the network learns the three-dimensional space occupancy probability of the grid points of the subsequent three-dimensional grid model and the image features can be compared against the three-dimensional grid model) until the loss of the Occupancy Networks converges, yielding the optimized pre-trained parameter model. The specific process is as follows:
To enhance the generalization of the network, a large number of visible light images, together with the three-dimensional model of the target corresponding to each image, are input into the Occupancy Networks to establish the mapping between the function and the model and to learn and optimize the parameter weights θ of the Occupancy Networks. During training, the three-dimensional boundary of each target visible light image is randomly sampled, and the loss over the K sampling points of the i-th sample is computed by the following formula:
$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} L\big(f_\theta(P_{ij}, x_i),\, o_{ij}\big)$$
where $\mathcal{L}(\theta)$ is the loss value of the Occupancy Networks; $N$ is the number of models in the training batch; $K$ is the number of sampling points per model; $L(\cdot,\cdot)$ is the cross-entropy classification loss; $f_\theta(\cdot,\cdot)$ is the three-dimensional target occupancy function; $P_{ij} \in \mathbb{R}^3$ is the $j$-th sampling point of the $i$-th target visible light image; and $o_{ij} \equiv o(P_{ij})$ is the true space occupancy probability of $P_{ij}$.
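Under the same assumptions, and with binary cross-entropy as the classification loss L(·,·), the formula can be sketched in code as follows (the `OccupancyDecoder` from the previous sketch is assumed as `model`):

```python
import torch.nn.functional as F

def occupancy_loss(model, p, x, o):
    """L(theta) = (1/N) * sum_i sum_j L(f_theta(P_ij, x_i), o_ij).

    p: (N, K, 3) tensor of K sample points for each of the N batch models.
    x: (N, feat_dim) feature vector x_i of each target visible light image.
    o: (N, K) float tensor of true occupancies o_ij in {0.0, 1.0}.
    """
    pred = model(p, x)  # (N, K) predicted occupancy probabilities
    # Sum the per-point cross-entropy and average over the N models,
    # matching the 1/N normalization of the formula above.
    return F.binary_cross_entropy(pred, o, reduction="sum") / p.shape[0]
```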
Step 3: after the Occupancy Networks are trained, actual recognition is performed using the trained Occupancy Networks.
First, radar point cloud data of the environment is acquired with the lidar mounted on the vehicle. The radar point cloud data could be used directly for recognition; however, considering the limits of prior modeling and other conditions that may arise in actual use, combined with the real scenarios of assisted and automatic driving, the invention also uses images as a supplementary aid to recognition. Therefore, while the radar point cloud data of the environment is acquired, visible light images of the environment are acquired with a visible light image acquisition device (camera) mounted on the vehicle. The visible light image acquisition device and the lidar are mounted on the vehicle as an integrated unit, ensuring that the field of view and angle of the visible light images collected in the forward direction (the vehicle's direction of travel) are as consistent as possible with the field of view of the lidar data collected in the forward direction, and at least that the field of view of the visible light images lies within the forward field of view of the lidar data. This effectively aligns lidar recognition with visible light image recognition and improves the consistency of the auxiliary recognition.
Step 4: inputting the radar point cloud data into the point cloud target segmentation network, and performing target segmentation on the three-dimensional point cloud data to obtain the three-dimensional point cloud of the target to be identified.
Step 5: inputting the three-dimensional point cloud corresponding to the target to be identified into the Occupancy Networks to obtain the target three-dimensional model corresponding to the target to be identified.
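The patent does not state how the explicit target three-dimensional model is read out of the trained network. A common approach, assumed here following the practice of the original Occupancy Networks work (grid evaluation plus marching cubes, both assumptions for this sketch), is:

```python
import torch
from skimage.measure import marching_cubes

@torch.no_grad()
def extract_model(model, x, res: int = 64, threshold: float = 0.5):
    """Query f_theta on a res^3 grid in [-0.5, 0.5]^3 and mesh the 0.5 surface.

    model: trained OccupancyDecoder; x: (1, feat_dim) conditioning feature.
    Returns (vertices, faces) of the reconstructed target three-dimensional model.
    """
    lin = torch.linspace(-0.5, 0.5, res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
    p = grid.reshape(1, -1, 3)                        # (1, res^3, 3) sample points
    occ = model(p, x).reshape(res, res, res).numpy()  # occupancy volume
    verts, faces, _, _ = marching_cubes(occ, level=threshold)
    return verts / (res - 1) - 0.5, faces             # grid indices -> coordinates
```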
The target three-dimensional model of the target to be identified is then matched against the target three-dimensional models in the database, and the result corresponding to the successfully matched database model is taken as the recognition result.
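The matching criterion is likewise left open by the patent. One hedged sketch is to voxelize the reconstructed model and each database model and select the entry with the highest volumetric IoU; the rejection threshold below which the match is treated as failed is an assumption:

```python
import numpy as np

def match_model(query_occ: np.ndarray, database: dict, min_iou: float = 0.5):
    """Match a voxelized target model against the database by volumetric IoU.

    query_occ: boolean (R, R, R) occupancy grid of the target to be identified.
    database:  {label: boolean (R, R, R) occupancy grid} of known targets.
    Returns the best label, or None if no entry exceeds min_iou
    (in which case the method falls back to visible light image recognition).
    """
    best_label, best_iou = None, min_iou
    for label, occ in database.items():
        inter = np.logical_and(query_occ, occ).sum()
        union = np.logical_or(query_occ, occ).sum()
        iou = inter / union if union else 0.0
        if iou > best_iou:
            best_label, best_iou = label, iou
    return best_label
```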
If the matching fails, i.e., no corresponding target three-dimensional model is found in the database, target recognition is performed based on the visible light image. Visible light image recognition may rely on neural-network-based recognition or other existing methods. Note that image recognition is only a supplementary aid; the matching-based recognition result takes precedence.
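As a sketch of this supplementary path only, any pretrained image classifier can serve; the choice of a torchvision ResNet-18 (which, unless fine-tuned on driving-relevant targets, predicts ImageNet class indices) is purely an assumption for illustration:

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained classifier used only as the supplementary fallback path.
weights = models.ResNet18_Weights.DEFAULT
classifier = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def recognize_image(path: str) -> int:
    """Return the predicted class index for the camera image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return int(classifier(img).argmax(dim=1))
```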
Compared with the prior art, the method neither judges and recognizes the segmented three-dimensional point cloud directly, nor first extracts features of the radar point cloud and then recognizes from them. Instead, it determines the three-dimensional model corresponding to the segmented radar point cloud through the Occupancy Networks, matches the obtained three-dimensional model against the three-dimensional models in the database, and finally completes the recognition. Even if the segmented point cloud is not very accurate, the corresponding three-dimensional model can still be obtained; although that model is derived from the radar point cloud, the influence of segmentation errors is largely reduced, i.e., even with some segmentation error, the target recognition accuracy is not dictated by the point cloud segmentation quality. The invention therefore still achieves relatively good results even when the segmented point cloud is imperfect.
Meanwhile, to handle matching failures, the invention also provides a backup scheme: auxiliary recognition through the visible light image. Compared with three-dimensional model recognition, the visible light recognition result provides relatively limited information to the driver assistance system, but it effectively avoids the problems caused by a failed three-dimensional point cloud recognition. It is therefore an effective supplementary means that greatly improves the applicability and effectiveness of the method.
The second embodiment:
This embodiment is a computer storage medium in which at least one instruction is stored, the at least one instruction being loaded and executed by a processor to implement the radar point cloud based target identification method.
It should be understood that any method described herein may accordingly be provided as a computer program product, software, or computerized method, which may include a non-transitory machine-readable medium having instructions stored thereon that may be used to program a computer system or other electronic device. Storage media may include, but are not limited to, magnetic storage media, optical storage media, and magneto-optical storage media; read-only memory (ROM), random access memory (RAM), erasable programmable memory (e.g., EPROM and EEPROM), and flash memory; or other types of media suitable for storing electronic instructions.
The third embodiment:
This embodiment is a radar point cloud based target recognition device comprising a processor and a memory. It should be understood that any device described in the present invention comprising a processor and a memory may also comprise other units and modules that perform display, interaction, processing, control, and other functions through signals or instructions; the device described in this embodiment further includes a lidar, a camera, and the like mounted on the vehicle.
The memory stores at least one instruction, which is loaded and executed by the processor to implement the radar point cloud based target identification method.
The above embodiments merely explain the computational model and workflow of the present invention in detail and are not intended to limit its implementations. Those skilled in the art can make other variations and modifications based on the above description; it is neither possible nor necessary to exhaust all implementations here, and all obvious variations and modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. The target identification method based on radar point cloud is characterized by comprising the following steps:
firstly, acquiring radar point cloud data of the environment to be identified, and inputting the radar point cloud data into a point cloud target segmentation network to perform target segmentation on the three-dimensional point cloud data, obtaining the three-dimensional point cloud of the target to be identified;
then inputting the three-dimensional point cloud corresponding to the target to be identified into Occupancy Networks to obtain the target three-dimensional model corresponding to the target to be identified; matching the target three-dimensional model of the target to be identified against the target three-dimensional models in a database, and taking the result corresponding to the successfully matched database model as the recognition result;
the Occupancy Networks are obtained by the following steps:
S1: acquiring radar point cloud data of an environment, inputting the radar point cloud data into a point cloud target segmentation network, and performing target segmentation on the three-dimensional point cloud data to obtain the three-dimensional point cloud of each target;
obtaining target three-dimensional models based on the segmented target three-dimensional point clouds, and constructing a target three-dimensional model database;
S2: training the Occupancy Networks (i.e., the occupancy network) with the target three-dimensional models to obtain the trained Occupancy Networks.
2. The radar point cloud based target identification method of claim 1, wherein the occupancy network is implemented as a fully connected neural network with 5 ResNet blocks.
3. The radar point cloud based target identification method of claim 2, wherein the process of training the Occupancy Networks with the target three-dimensional models comprises the following steps:
acquiring radar point cloud data of the environment while acquiring visible light images of the environment with a visible light image acquisition device arranged on the vehicle, and obtaining target visible light images based on the visible light images;
inputting the target visible light images and the target three-dimensional model corresponding to each target visible light image into the Occupancy Networks for training until the loss of the Occupancy Networks converges, obtaining the optimized pre-trained parameter model.
4. The radar point cloud based target identification method of claim 1, 2 or 3, wherein when the target three-dimensional model of the target to be identified is matched against the target three-dimensional models in the database, if the matching fails, i.e., no corresponding target three-dimensional model is found in the database, target identification is performed based on the visible light image.
5. The radar point cloud based target identification method of claim 4, wherein the process of target identification based on the visible light image is realized with a neural network model.
6. The radar point cloud based target identification method of claim 5, wherein before target identification is performed based on the visible light image, the visible light image of the environment is acquired with the visible light image acquisition device arranged on the vehicle while the radar point cloud data of the environment is acquired.
7. The radar point cloud based target identification method of claim 6, wherein the visible light image acquisition device and the lidar arranged on the vehicle are mounted as an integrated unit.
8. A computer storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the radar point cloud based target identification method of one of claims 1 to 7.
9. A radar point cloud based target recognition device, characterized in that the device comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to realize the radar point cloud based target recognition method according to one of claims 1 to 7.
10. The radar point cloud based target identification device of claim 9, further comprising a lidar disposed on the vehicle.
CN202211485304.2A 2022-11-24 2022-11-24 Target identification method based on radar point cloud, storage medium and equipment Pending CN115830555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211485304.2A CN115830555A (en) 2022-11-24 2022-11-24 Target identification method based on radar point cloud, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211485304.2A CN115830555A (en) 2022-11-24 2022-11-24 Target identification method based on radar point cloud, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN115830555A 2023-03-21

Family

ID=85531306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211485304.2A Pending CN115830555A (en) 2022-11-24 2022-11-24 Target identification method based on radar point cloud, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN115830555A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704472A (en) * 2023-05-15 2023-09-05 小米汽车科技有限公司 Image processing method, device, apparatus, medium, and program product
CN116704472B (en) * 2023-05-15 2024-04-02 小米汽车科技有限公司 Image processing method, device, apparatus, medium, and program product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination