CN112990293B - Point cloud labeling method and device and electronic equipment - Google Patents


Info

Publication number
CN112990293B
CN112990293B (granted publication of application CN202110260628.5A)
Authority
CN
China
Prior art keywords
frame
obstacle
point cloud
determining
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110260628.5A
Other languages
Chinese (zh)
Other versions
CN112990293A (en)
Inventor
黎明慧
李恒
刘明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yiqing Innovation Technology Co ltd
Original Assignee
Shenzhen Yiqing Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yiqing Innovation Technology Co., Ltd.
Priority claimed from application CN202110260628.5A
Publication of CN112990293A
Application granted
Publication of CN112990293B
Legal status: Active

Classifications

    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/10: Pattern recognition; pre-processing; data cleansing
    • G06F 18/2135: Feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
    • G06F 18/23: Clustering techniques
    • G06N 3/042: Knowledge-based neural networks; logical representations of neural networks
    • G06N 5/04: Inference or reasoning models
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road


Abstract

The invention relates to the technical field of data processing and discloses a point cloud labeling method, a point cloud labeling device, an electronic device, and a non-volatile computer-readable storage medium. The method comprises the following steps: acquiring point cloud data; processing the point cloud data with a preset algorithm model to obtain a pre-labeling frame; detecting the pre-labeling frame to obtain a detection result; and inputting the point cloud data into a database based on the detection result. Realizing semi-automatic labeling of point cloud data through the preset algorithm model both improves labeling speed and reduces cost.

Description

Point cloud labeling method and device and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a point cloud labeling method, a point cloud labeling device, an electronic device, and a non-volatile computer readable storage medium.
Background
Point cloud 3D annotation is the task of drawing 3D bounding boxes around common obstacles in point cloud data collected, in the same scene, by one or more lidars mounted on a vehicle end, road end or other platform.
3D point cloud annotation divides into pure point cloud annotation (meaning the raw sensor data come from lidar only) and multi-sensor fusion annotation (for example, annotating after fusing images with the point cloud). Current 3D point cloud annotation, however, relies too heavily on manual work: every frame of point cloud data must be labeled by hand, which is slow and extremely expensive.
Disclosure of Invention
The embodiment of the invention provides a point cloud labeling method, a point cloud labeling device, electronic equipment and a nonvolatile computer readable storage medium, which can not only improve the labeling speed, but also reduce the cost.
In a first aspect, an embodiment of the present invention provides a point cloud labeling method, where the method includes:
acquiring point cloud data;
processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
detecting the pre-labeling frame to obtain a detection result;
and inputting the point cloud data into a database based on the detection result.
In some embodiments, the method further comprises:
pre-training an algorithm model to obtain a preset algorithm model;
and importing the preset algorithm model into a deep learning reasoning optimizer so as to optimize the preset algorithm model.
In some embodiments, after the obtaining the point cloud data, the method further comprises:
and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
In some embodiments, after processing the point cloud data using a preset algorithm model to obtain the pre-labeling frame, the method further includes:
receiving an adjustment instruction input by a user;
and adjusting parameters of the pre-marking frame according to the adjusting instruction to obtain an adjusted pre-marking frame.
In some embodiments, the detecting the pre-labeling frame includes:
carrying out automated quality inspection on the adjusted pre-labeling frame using constraint conditions.
In some embodiments, the automated quality inspection of the adjusted pre-labeling frame using constraint conditions includes:
acquiring the category of the obstacle;
determining the number of preset point clouds corresponding to the obstacle category according to the obstacle category;
determining whether the number of point clouds in the adjusted pre-labeling frame is smaller than the preset number of point clouds and, if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring the category of the obstacle;
determining a preset size corresponding to the obstacle category according to the obstacle category;
determining whether the difference between the adjusted size of the pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference range and, if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of a marking frame;
determining whether the position of the adjusted pre-labeling frame lies outside the preset position interval of the labeling frame and, if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the heading angle of the adjusted pre-labeling frame and the heading angle of the clustering frame and, if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of surrounding obstacle marking frames;
converting the position coordinates of the surrounding obstacle marking frames to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the heading angle of the pre-labeling frame and the heading angle of the obstacle and, if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a center guide line of a moving lane;
sampling at least one adjusted pre-labeling frame to obtain position sampling points;
analyzing the position sampling points to determine the distance between each position sampling point and the center guide line of the moving lane and, if the distance is larger than a preset distance threshold, determining that the adjusted pre-labeling frame is unqualified.
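The lane-guide-line constraint in the last three steps can be sketched as follows. The function names and the 1.5 m threshold are illustrative assumptions (the patent leaves the distance threshold preset), and the centerline is approximated as a dense polyline of vertices:

```python
import numpy as np

def dist_to_centerline(sample_xy, centerline_xy):
    """Minimum distance from a position sampling point to the lane's center
    guide line, approximated as a dense polyline of vertices. A production
    check would measure point-to-segment distance instead."""
    c = np.asarray(centerline_xy, dtype=float)
    return float(np.min(np.linalg.norm(c - np.asarray(sample_xy, dtype=float), axis=1)))

def lane_check(sample_xy, centerline_xy, max_dist=1.5):
    """Unqualified when the sampling point strays farther from the centerline
    than the preset threshold; 1.5 m is an illustrative assumption."""
    return dist_to_centerline(sample_xy, centerline_xy) <= max_dist
```

A box sampled half a metre off the centerline passes; one three metres off fails.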
In some embodiments, the method further comprises:
and when the preset time is reached, updating the preset algorithm model.
In a second aspect, an embodiment of the present invention further provides a point cloud labeling device, where the device includes:
the acquisition module is used for acquiring the point cloud data;
the processing module is used for processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the detection module is used for detecting the pre-marked frame to obtain a detection result;
and the input module is used for inputting the point cloud data into a database based on the detection result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the point cloud labeling method described above.
In a fourth aspect, embodiments of the present invention further provide a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the point cloud labeling method described above.
Compared with the prior art, the beneficial effects of the invention are as follows: with the point cloud labeling method of the embodiment of the invention, point cloud data are acquired and processed with a preset algorithm model to obtain a pre-labeling frame, the pre-labeling frame is detected to obtain a detection result, and the point cloud data are input into a database based on the detection result. Realizing semi-automatic labeling of point cloud data with the preset algorithm model both improves labeling speed and reduces cost.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not to be taken in a limiting sense unless otherwise indicated.
FIG. 1 is a flow chart of a point cloud labeling method according to one embodiment of the invention;
FIG. 2 is a flow chart of automated quality inspection constraint on the number of point clouds in a pre-labeled box in accordance with one embodiment of the present invention;
FIG. 3 is a flow chart of automated quality inspection versus pre-framed size constraint in accordance with an embodiment of the invention;
FIG. 4 is a schematic diagram of preset sizes corresponding to types of obstacles in an embodiment of the invention;
FIG. 5 is a flow chart of pre-label box position constraints for automated quality inspection in accordance with one embodiment of the present invention;
FIG. 6 is a flow chart of a single frame heading angle constraint for automated quality inspection in accordance with an embodiment of the invention;
FIG. 7 is a flow chart of continuous frame heading angle constraints for automated quality inspection in accordance with an embodiment of the invention;
FIG. 8 is a flow chart of motion state constraints on successive frames for automated quality inspection in accordance with one embodiment of the present invention;
FIG. 9 is a schematic diagram of a specific flow of a point cloud labeling method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a point cloud labeling apparatus according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a hardware structure of an electronic device in an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, if not in conflict, the features of the embodiments of the present invention may be combined with each other, which is within the protection scope of the present invention. In addition, while functional block division is performed in a device diagram and logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. Furthermore, the words "first," "second," "third," and the like as used herein do not limit the order of data and execution, but merely distinguish between identical or similar items that have substantially the same function and effect.
As shown in fig. 1, an embodiment of the present invention provides a point cloud labeling method, where the method is performed by an electronic device, and the method includes:
and 102, acquiring point cloud data.
In the embodiment of the invention, the point cloud data are not limited to the points acquired by scanning the target object with a lidar; they also include corresponding metadata, such as the acquisition platform, acquisition time, whether the system uses a single lidar or multiple lidars, the extrinsic calibration data for each lidar, the number of point cloud beams and channels, and a precise timestamp for each frame of data. Specifically, the electronic device acquires the point cloud data collected by the lidar, where the data exist in the form of point cloud data frames.
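As a rough illustration of a frame plus the metadata described above, one might carry the data in a structure like this (field names are assumptions for illustration, not taken from the patent):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class PointCloudFrame:
    """One lidar frame plus accompanying metadata.

    Field names are illustrative assumptions, not taken from the patent.
    """
    points: np.ndarray      # (N, 4) array of x, y, z, intensity
    timestamp: float        # precise per-frame timestamp
    platform: str           # acquisition platform, e.g. vehicle end or road end
    lidar_id: str           # identifies the lidar in a single- or multi-lidar system
    extrinsics: np.ndarray  # 4x4 extrinsic calibration matrix for this lidar
    num_beams: int = 0      # point cloud line (beam) count
    num_channels: int = 0   # point cloud channel count
```

Keeping calibration and timing alongside the points is what lets later steps (coordinate conversion, packet cutting) run without side lookups.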
In other embodiments, after obtaining the point cloud data, the method further comprises: and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
In the embodiment of the invention, the acquired point cloud data must be cleaned, screened and/or segmented. Specifically, a program first applies range and angle limits to the point cloud data, removing redundant invalid points, NaN points and invalid frames. It then removes repeated scene data and screens out rich, valuable scene point clouds for labeling.
Further, after cleaning and screening, the point cloud data are cut into packets according to the original timestamps of the point cloud data frames. Packetizing mainly serves to segment long point cloud sequences, which eases subsequent parallel labeling. Specifically, the packet length follows the lidar's detection frequency: at a detection frequency of 10 Hz, for example, a 10 s time series contains 100 frames of point cloud data. Note that the length of the cut time series changes with the point cloud data requirements of different scenes and is not limited in this embodiment.
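A minimal sketch of the cleaning and packet-cutting described above. The range limits are illustrative defaults (the patent does not fix them here); the 10 Hz / 10 s grouping follows the example in the text:

```python
import numpy as np

def clean_frame(points, x_range=(-100.0, 100.0), y_range=(-100.0, 100.0),
                z_range=(-5.0, 3.0)):
    """Drop NaN points and points outside the working range.

    Range limits are illustrative defaults, not values fixed by the patent.
    points: (N, >=3) array with x, y, z in the first three columns.
    """
    pts = points[~np.isnan(points).any(axis=1)]
    keep = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] <= x_range[1]) &
            (pts[:, 1] >= y_range[0]) & (pts[:, 1] <= y_range[1]) &
            (pts[:, 2] >= z_range[0]) & (pts[:, 2] <= z_range[1]))
    return pts[keep]

def cut_packets(num_frames, hz=10, seconds=10):
    """Cut a long recording into fixed-length groups of frame indices.

    At a 10 Hz detection frequency a 10 s time series holds 100 frames.
    """
    size = hz * seconds
    return [list(range(i, min(i + size, num_frames)))
            for i in range(0, num_frames, size)]
```

For a 250-frame recording this yields two full 100-frame packets and one 50-frame remainder, each of which can be labeled in parallel.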
Step 104, processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame.
The preset algorithm model is the most critical part of the whole point cloud labeling method: its quality directly determines the labeling quality of the pre-labeling frames. The preset algorithm model is obtained by learning and training on a large amount of sample point cloud data.
In the embodiment of the invention, after the sample point cloud data are acquired, each frame must be labeled; specifically, a labeling frame defines the basic information of each obstacle in the frame, such as its category, tracking ID across consecutive frames, three-dimensional coordinates, three-dimensional size, and visibility. Once all point cloud data frames are labeled, the model is trained with the labeled sample point cloud data, which improves training accuracy and, in turn, the labeling quality of the pre-labeling frames. The more sample point cloud data there are, the more cases are covered, and the higher the recognition capability of the algorithm model.
Further, to optimize the preset model, that is, to accelerate its inference speed, the trained preset algorithm model can be imported into a deep learning inference optimizer, which avoids data-flow congestion caused by overly long algorithm processing times. In addition, the preset algorithm model is updated when a preset time is reached, the preset time being determined by the amount of data accumulated in the database. Once trained, the algorithm model can be loaded directly onto the electronic device and run there. After the electronic device acquires point cloud data, it processes them with the preset algorithm model to obtain pre-labeling frames.
In some other embodiments, the processing the point cloud data using a preset algorithm model, after obtaining the pre-labeling frame, the method further includes: receiving an adjustment instruction input by a user; and adjusting parameters of the pre-marking frame according to the adjusting instruction to obtain an adjusted pre-marking frame.
In the embodiment of the invention, a pre-labeling frame obtained through the preset algorithm model may not match the actual size, position or orientation of the obstacle, so the user must manually adjust the relevant parameters of any problematic pre-labeling frame. Specifically, the electronic device receives an adjustment instruction input by the user and adjusts the relevant parameters of the pre-labeling frame according to the instruction, thereby obtaining the adjusted pre-labeling frame.
Step 106, detecting the pre-labeling frame to obtain a detection result.
Detection of the pre-labeling frame mainly uses specific technical indicators to constrain it, which makes problematic pre-labeling frames easy to identify and error-free point cloud data easy to enter into the database. Specifically, the electronic device automatically detects the pre-labeling frame, thereby obtaining a detection result.
Step 108, inputting the point cloud data into a database based on the detection result.
The data finally entered into the database are error-free point cloud data, stored in the database in time-series form.
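One possible shape for the time-series store, sketched here with SQLite and frames keyed by timestamp. The schema is an assumption for illustration; the patent only states that error-free data are stored in time-series form:

```python
import json
import sqlite3

def store_sequence(db_path, frames):
    """Record verified frames keyed by timestamp so time-series order is preserved.

    The SQLite schema is an illustrative assumption, not from the patent.
    frames: iterable of dicts with a "ts" timestamp and a "boxes" list.
    """
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS frames (ts REAL PRIMARY KEY, boxes TEXT)")
    con.executemany("INSERT OR REPLACE INTO frames VALUES (?, ?)",
                    [(f["ts"], json.dumps(f["boxes"])) for f in frames])
    con.commit()
    return con
```

Keying on the timestamp makes replaying a labeled sequence a simple `ORDER BY ts` query.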
In some of these embodiments, as one implementation of step 106, the method includes: carrying out automated quality inspection on the adjusted pre-labeling frame using constraint conditions.
In the embodiment of the invention, any step involving manual work introduces some errors, so constraint conditions are used to perform automated quality inspection on the adjusted pre-labeling frames. A constraint condition is a specific technical indicator by which the pre-labeling frame is constrained.
In some embodiments, as shown in fig. 2, automated quality inspection of the adjusted pre-label box using constraints includes:
step 202, obtaining the obstacle category.
Because the pre-labeling frame defines the most basic information of the obstacle, such as its category, tracking ID across consecutive frames, three-dimensional coordinates, three-dimensional size and visibility, the electronic device can read the obstacle category from the pre-labeling frame.
Step 204, determining the number of preset point clouds corresponding to the obstacle category according to the obstacle category.
The preset number of point clouds corresponding to each obstacle category can be stored in the electronic device in advance; different obstacle categories have different in-frame point-count constraints. Constraining the number of points inside the pre-labeling frame exposes empty or misplaced annotation frames. After the electronic device acquires the obstacle category, it determines the preset number of point clouds corresponding to that category.
Step 206, determining whether the number of the point clouds in the adjusted pre-labeling frame is smaller than the preset number of the point clouds, and if so, determining that the adjusted pre-labeling frame is unqualified.
In the embodiment of the invention, the number of points inside the pre-labeling frame is constrained by comparing the point count inside the adjusted pre-labeling frame with the preset number. Specifically, if the point count inside the adjusted pre-labeling frame is smaller than the preset number of point clouds, the adjusted pre-labeling frame is determined to be unqualified, and the unqualified frame is marked to ease subsequent review. Note that, as a rule, motor vehicles, trucks, special vehicles and the like with fewer than 20 points need not be labeled, and pedestrians and bicycles with fewer than 10 points need not be labeled.
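A minimal check implementing the point-count constraint, using the 20-point and 10-point minimums quoted above; the category names are assumed English equivalents of the patent's obstacle classes:

```python
# Minimum in-box point counts quoted in the text; category names are
# assumed English equivalents of the patent's obstacle classes.
MIN_POINTS = {"motor_vehicle": 20, "truck": 20, "special_vehicle": 20,
              "pedestrian": 10, "bicycle": 10}

def check_point_count(category, points_in_box):
    """Return False (unqualified) when the adjusted pre-labeling frame holds
    fewer points than the preset number for its obstacle category."""
    return points_in_box >= MIN_POINTS.get(category, 10)
```

A car box with 19 points fails the check; a pedestrian box with exactly 10 passes.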
In some embodiments, as shown in fig. 3, automated quality inspection of the adjusted pre-label box using constraints includes:
step 302, obtaining an obstacle category.
Step 304, determining a preset size corresponding to the obstacle category according to the obstacle category.
Specifically, the preset size corresponding to the obstacle category may be stored in the electronic device in advance, and the preset size corresponding to the obstacle category is shown in fig. 4. The electronic equipment acquires the category of the obstacle in the pre-labeling frame, and determines the preset size corresponding to the category of the obstacle according to the category of the obstacle.
Step 306, determining whether the difference value between the adjusted size of the pre-marked frame and the preset size corresponding to the obstacle category exceeds a preset difference value range, and if so, determining that the adjusted pre-marked frame is unqualified.
In the embodiment of the invention, whether the adjusted size of the pre-labeling frame is reasonable is determined from prior information, namely the preset size corresponding to the obstacle category, to which a difference must be added to accommodate obstacle sizes that may occur. The difference is expressed as a variance.
Specifically, a program evaluates the adjusted pre-labeling frames statistically and gives the mean length, width and height of each category together with the corresponding variance data. Illustratively, the size variance of a motor vehicle is not greater than [1.0, 0.5, 0.5] (l, w, h), that of a truck not greater than [6.0, 1.0, 1.0] (l, w, h), that of a pedestrian not greater than [0.4, 0.4, 0.4] (l, w, h), that of an individual not greater than [0.8, 0.5, 0.5] (l, w, h), and the remaining categories are uniformly accepted according to [0.8, 0.8, 0.8]. The adjusted size of the pre-labeling frame is compared against the preset size for the obstacle category; if the difference exceeds the preset difference range, the adjusted pre-labeling frame is determined to be unqualified. In addition, any special obstacle must be given a corresponding mark to ease subsequent handling.
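The size constraint can be sketched as follows, treating the quoted variances as per-dimension tolerances on the deviation from the category prior (function and category names are assumptions):

```python
# Per-category tolerance on |box size - prior size| in (l, w, h), taken from
# the variance bounds quoted above; "individual" follows the source's wording.
SIZE_TOL = {
    "motor_vehicle": (1.0, 0.5, 0.5),
    "truck": (6.0, 1.0, 1.0),
    "pedestrian": (0.4, 0.4, 0.4),
    "individual": (0.8, 0.5, 0.5),
}
DEFAULT_TOL = (0.8, 0.8, 0.8)  # uniform acceptance for remaining categories

def check_size(category, box_lwh, prior_lwh):
    """Unqualified when any dimension deviates from the category prior by more
    than the allowed tolerance."""
    tol = SIZE_TOL.get(category, DEFAULT_TOL)
    return all(abs(b - p) <= t for b, p, t in zip(box_lwh, prior_lwh, tol))
```

A car box 0.5 m longer than the prior passes; one 2.5 m longer is flagged as unqualified.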
In some embodiments, as shown in fig. 5, automated quality inspection of the adjusted pre-label box using constraints includes:
step 502, obtaining a preset position interval of the annotation frame.
In the embodiment of the invention, the preset position interval of the annotation frame lies within [-100 m, -100 m, -5 m, 100 m, 3 m].
Step 504, determining whether the adjusted position interval of the pre-labeling frame is outside the preset position interval of the labeling frame, and if so, determining that the adjusted pre-labeling frame is not qualified.
Whether the adjusted pre-labeling frame is qualified is judged by determining whether it lies outside the preset position interval of the annotation frame. If the position of the adjusted pre-labeling frame falls outside the preset interval, it is determined to be unqualified, treated as invalid annotation data, and not counted as a valid annotation frame.
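A sketch of the position-interval check. The source lists five boundary values, so the missing upper y bound is assumed here to be 100 m for symmetry; this assumption is marked in the code:

```python
# Working volume for valid annotation frames. The source lists five boundary
# values ([-100 m, -100 m, -5 m, 100 m, 3 m]); the upper y bound is not given,
# so 100 m is assumed here for symmetry with x.
LOWER = (-100.0, -100.0, -5.0)
UPPER = (100.0, 100.0, 3.0)

def check_position(center_xyz):
    """A frame whose center falls outside the preset interval is invalid
    annotation data and is not counted as a valid annotation frame."""
    return all(lo <= c <= hi for c, lo, hi in zip(center_xyz, LOWER, UPPER))
```

A box at the origin passes; one 10 m above the sensor is rejected.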
In some embodiments, as shown in fig. 6, automated quality inspection of the adjusted pre-label box using constraints includes:
and step 602, clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame.
And step 604, acquiring the course angle of the clustering frame.
In the embodiment of the invention, the single-frame heading angle is constrained, mainly for motor vehicles and bicycles with obvious mechanical structure. Each adjusted pre-labeling frame carries a corresponding labeled heading angle, so the heading angle must be constrained to determine whether the orientation of the adjusted pre-labeling frame is reasonable.
Specifically, the point cloud data inside the adjusted pre-labeling frame are clustered to obtain a clustering frame. In a single-frame point cloud, given an unverified adjusted pre-labeling frame, the indices of the points inside it are obtained and PCA (principal component analysis) is performed on those points to obtain the principal and secondary direction vectors. From this parameter information, the OBB (oriented bounding box) clustering frame of the in-frame points is found, and the heading angle of the clustering frame is obtained.
Step 606, determining the error between the heading angle of the adjusted pre-labeling frame and the heading angle of the clustering frame, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified.
Comparing the heading angle of the adjusted pre-labeling frame with that of the clustering frame determines whether the adjusted pre-labeling frame is reasonable; if the error exceeds the error threshold, the adjusted pre-labeling frame is unqualified and must be marked to ease subsequent review.
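The single-frame heading check can be sketched with PCA as described above. The 10-degree threshold is an illustrative assumption, and the comparison is made modulo pi because the principal direction from PCA cannot distinguish front from rear:

```python
import numpy as np

def cluster_heading(points_xy):
    """Principal direction of the in-box points via PCA, mirroring the OBB
    step above. The sign of the direction is inherently ambiguous."""
    centered = points_xy - points_xy.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    main = eigvecs[:, np.argmax(eigvals)]  # eigenvector of the largest eigenvalue
    return float(np.arctan2(main[1], main[0]))

def heading_ok(box_yaw, cluster_yaw, threshold=np.radians(10.0)):
    """Compare headings modulo pi, since PCA cannot tell front from rear.
    The 10-degree threshold is an illustrative assumption."""
    d = abs(box_yaw - cluster_yaw) % np.pi
    return min(d, np.pi - d) <= threshold
```

For points spread along a 30-degree line, a box yaw of 30 degrees passes while 120 degrees fails.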
In some embodiments, as shown in fig. 7, automated quality inspection of the adjusted pre-label box using constraints includes:
step 702, obtaining position coordinates of surrounding obstacle marking frames.
In the embodiment of the invention, the continuous-frame heading angle is constrained, which requires the ego vehicle's position information. The heading angle constraint in the continuous-frame state mainly exploits the fact that the motion state of motor vehicles and some non-motor vehicles does not change abruptly within a unit of time, and that motor vehicles travel according to the road structure; the motion state of surrounding obstacles is therefore constrained to suppress random noise in manually labeled data. Specifically, since in the continuous-frame state the ego vehicle has global positioning information and information about surrounding obstacle annotation frames, the electronic device acquires the position coordinates of the surrounding obstacle annotation frames.
And step 704, converting the position coordinates of the surrounding obstacle marking frames to obtain the movement direction of the obstacle.
Because the point cloud data are acquired by a lidar, the position coordinates of the obstacle labeling frames are expressed in the lidar coordinate system. After the electronic device acquires the position coordinates of the surrounding obstacle labeling frames, it converts them from the lidar coordinate system to the global map coordinate system, from which the movement direction between the obstacle frames can be obtained.
And step 706, acquiring the course angle of the obstacle according to the movement direction of the obstacle.
Specifically, 10 frames are selected as a local motion sliding window for sampling, and the course angle of each obstacle is obtained based on its movement direction.
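Steps 702 to 706 can be sketched as follows. The planar rigid transform and the placeholder names `lidar_to_map` and `motion_heading` (using the 10-frame window mentioned above) are assumptions of this illustration:

```python
import math

def lidar_to_map(box_xy, ego_pose):
    """Convert an obstacle-frame center from the lidar coordinate
    system to the global map coordinate system, assuming a planar
    rigid transform. ego_pose is (ego_x, ego_y, ego_yaw)."""
    x, y = box_xy
    ex, ey, yaw = ego_pose
    gx = ex + x * math.cos(yaw) - y * math.sin(yaw)
    gy = ey + x * math.sin(yaw) + y * math.cos(yaw)
    return gx, gy

def motion_heading(map_positions, window=10):
    """Course angle of an obstacle from its movement direction,
    sampled with a local sliding window of `window` frames."""
    headings = []
    for i in range(len(map_positions) - window + 1):
        x0, y0 = map_positions[i]
        x1, y1 = map_positions[i + window - 1]
        headings.append(math.atan2(y1 - y0, x1 - x0))
    return headings
```

Each windowed heading can then be compared with the course angle of the corresponding adjusted pre-labeling frame, as in step 708.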
Step 708, determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified.
The course angle of the adjusted pre-labeling frame is compared with the course angle of the obstacle to determine whether the adjusted pre-labeling frame is reasonable; if the error between the two exceeds an error threshold, the adjusted pre-labeling frame is determined to be unqualified. The unqualified pre-labeling frame is given a corresponding mark to facilitate subsequent rechecking.
In some embodiments, as shown in fig. 8, performing automated quality inspection on the adjusted pre-labeling frame using constraint conditions includes:
step 802, a center guide line of a moving lane is acquired.
In the embodiment of the invention, the continuous-frame motion state is constrained by a center guide line. This constraint mainly draws on the high-precision map information for the period in which the vehicle is driving, using the lane information of the high-precision map to constrain the annotation data. Specifically, the electronic device acquires the center guide line of the moving lane.
And step 804, sampling at least one of the adjusted pre-marked frames to obtain a position sampling point.
Specifically, the electronic device samples at least one adjusted pre-labeling frame to obtain position sampling points. A motor vehicle normally travels in its lane, so in time sequence its motion track closely follows the center guide line of the current lane without oscillating left and right. For a point cloud sequence of 30 frames at a lidar frequency of 10 Hz, the time sequence lasts 3 s, and in the ideal case there are 30 position sampling points for the vehicle.
And step 806, analyzing the position sampling points, determining the distance between the position sampling points and the central guide line of the moving lane, and if the distance is greater than a preset distance threshold, determining that the adjusted pre-marking frame is unqualified.
After the electronic device obtains the position sampling points, it analyzes them as a batch through velocity-curve and acceleration-curve analysis, the acceleration curve being obtained by differentiating the velocity curve in the time domain, thereby determining the distance between each position sampling point and the center guide line of the moving lane. If the distance between a position sampling point and the center guide line of the moving lane is greater than the preset distance threshold, the adjusted pre-labeling frame is considered unqualified, indicating that the position sampling point is misplaced and needs correction.
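The guideline-distance check of step 806 can be sketched as follows; representing the center guide line as a 2D polyline, and the function names used here, are assumptions of this illustration:

```python
import math

def point_to_polyline_distance(p, polyline):
    """Minimum distance from point p to a polyline given as a list of
    (x, y) vertices, computed segment by segment."""
    px, py = p
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg2 = dx * dx + dy * dy
        # Projection parameter clamped to the segment [0, 1].
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg2))
        cx, cy = x1 + t * dx, y1 + t * dy
        best = min(best, math.hypot(px - cx, py - cy))
    return best

def check_guideline_constraint(sample_points, guideline, dist_threshold):
    """A sampled position fails the constraint when its distance to the
    lane center guide line exceeds the preset distance threshold."""
    return [point_to_polyline_distance(p, guideline) <= dist_threshold
            for p in sample_points]
```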
In other embodiments, the method further comprises: marking the unqualified pre-marked frame, and rechecking.
Specifically, after the adjusted pre-labeling frame is automatically quality-inspected using the constraint conditions, the unqualified pre-labeling frame is given a corresponding mark for rechecking. During rechecking, if the pre-labeling frame passes, the point cloud data in the frame are recorded into the database; otherwise, the step of receiving the adjustment instruction input by the user continues to be executed.
In the embodiment of the invention, the point cloud data is acquired and processed by using a preset algorithm model to obtain a pre-labeling frame; the pre-labeling frame is then detected to obtain a detection result, and the point cloud data is finally input into a database based on the detection result. Semi-automatic labeling of the point cloud data by means of the preset algorithm model can improve the labeling speed and reduce the cost.
It should be noted that, in the foregoing embodiments, the steps do not necessarily have a fixed sequence; those skilled in the art will understand that, in different embodiments, the steps may be performed in different orders, for example in parallel or interchangeably.
To facilitate an understanding of the present invention, a specific embodiment will be described below as an example, as shown in fig. 9:
S900, acquiring point cloud data, and turning to S901;
S901, preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation, and turning to S902;
S902, processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame, and turning to S903;
S903, receiving an adjustment instruction input by a user, and turning to S904;
S904, adjusting parameters of the pre-labeling frame according to the adjustment instruction to obtain an adjusted pre-labeling frame, and turning to S905;
S905, automatically inspecting the quality of the adjusted pre-labeling frame by using constraint conditions; if the frame is qualified, turning to S906; if the frame is unqualified, turning to S907;
S906, inputting the point cloud data into a database, and turning to S908;
S907, marking the unqualified pre-labeling frame and rechecking; if the recheck passes, turning to S906; otherwise, turning to S903;
S908, updating the preset algorithm model, and turning to S902.
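The flow of steps S900 to S908 can be summarized, as a non-limiting sketch in Python, by the following control loop; the names `annotate_sequence`, `model`, `quality_checks` and `database` are placeholders assumed for illustration and do not appear in the disclosure:

```python
def annotate_sequence(point_cloud, model, quality_checks, database, max_rounds=3):
    """Control-flow sketch of steps S900-S908. `model`, `quality_checks`
    and `database` stand in for the preset algorithm model, the
    constraint-based automatic quality inspection, and the storage
    backend, respectively."""
    boxes = model(point_cloud)                 # S902: pre-labeling frames
    for _ in range(max_rounds):
        # S903/S904: in the real workflow a human adjusts the frames here.
        passed = all(check(boxes) for check in quality_checks)  # S905
        if passed:
            database.append((point_cloud, boxes))  # S906
            return True
        # S907: mark the failed frames and loop back for re-adjustment.
    return False
```

Model updating (S908) would run periodically on the accumulated database, outside this per-sequence loop.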
Correspondingly, the embodiment of the invention further provides a point cloud labeling device 100, as shown in fig. 10, including:
an acquisition module 102, configured to acquire point cloud data;
the processing module 104 is configured to process the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the detection module 106 is configured to detect the pre-labeling frame to obtain a detection result;
and the input module 108 is used for inputting the point cloud data into a database based on the detection result.
In the embodiment of the invention, the acquisition module acquires the point cloud data; the processing module processes the point cloud data by using a preset algorithm model to obtain a pre-labeling frame; the detection module detects the pre-labeling frame to obtain a detection result; and the input module inputs the point cloud data into a database based on the detection result, so that the labeling speed can be improved and the cost reduced.
Optionally, in other embodiments of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
the training module 110 is configured to pre-train the algorithm model to obtain a preset algorithm model;
the optimizing module 112 is configured to introduce the preset algorithm model into a deep learning inference optimizer to optimize the preset algorithm model.
Optionally, in other embodiments of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
the receiving module 114 is configured to receive an adjustment instruction input by a user.
And the adjustment module 116 is configured to adjust parameters of the pre-labeling frame according to the adjustment instruction, so as to obtain an adjusted pre-labeling frame.
Optionally, in other embodiments of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
and the rechecking module 118 is used for marking the unqualified pre-marked frame and rechecking.
Optionally, in other embodiments of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
and the updating module 120 is configured to update the preset algorithm model when a preset time is reached.
Optionally, in other embodiments of the apparatus, the detection module 106 is specifically configured to:
and carrying out automatic quality inspection on the adjusted pre-marked frame by using constraint conditions.
Acquiring the category of the obstacle;
determining the number of preset point clouds corresponding to the obstacle category according to the obstacle category;
determining whether the number of the point clouds in the adjusted pre-labeling frame is smaller than the preset point clouds, if so, determining that the adjusted pre-labeling frame is unqualified; and/or the number of the groups of groups,
acquiring the category of the obstacle;
determining a preset size corresponding to the obstacle category according to the obstacle category;
determining whether the difference value of the adjusted size of the pre-marking frame and the preset size corresponding to the obstacle category exceeds a preset difference value range, and if so, determining that the adjusted pre-marking frame is unqualified; and/or the number of the groups of groups,
acquiring a preset position interval of a marking frame;
determining whether the position interval of the adjusted pre-marking frame is outside the preset position interval of the marking frame, if so, determining that the adjusted pre-marking frame is unqualified; and/or the number of the groups of groups,
clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the course angle error of the adjusted pre-annotation frame and the course angle error of the clustering frame, and if the error exceeds an error threshold value, determining that the adjusted pre-annotation frame is unqualified; and/or the number of the groups of groups,
acquiring position coordinates of surrounding obstacle marking frames;
converting the position coordinates of the surrounding obstacle marking frames to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining an error between the course angle of the adjusted pre-marking frame and the course angle of the obstacle, and if the error exceeds an error threshold value, determining that the adjusted pre-marking frame is unqualified; and/or the number of the groups of groups,
acquiring a center guide line of a moving lane;
sampling at least one of the adjusted pre-marked frames to obtain a position sampling point;
analyzing the position sampling points, determining the distance between the position sampling points and the center guide line of the moving lane, and if the distance is larger than a preset distance threshold value, determining that the adjusted pre-marking frame is unqualified.
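By way of a non-limiting sketch, the point-count and size constraints at the head of this list might be implemented as below; the threshold tables are hypothetical examples, since the real values would come from the labeling specification rather than from this illustration:

```python
# Hypothetical per-category thresholds; actual values would be taken
# from the labeling specification, not from this sketch.
MIN_POINTS = {"car": 20, "pedestrian": 5}
PRESET_SIZE = {"car": (4.5, 1.8, 1.6), "pedestrian": (0.6, 0.6, 1.7)}

def check_point_count(category, num_points):
    """Fail when the frame contains fewer points than the preset
    number of point clouds for the obstacle category."""
    return num_points >= MIN_POINTS[category]

def check_size(category, box_size, tolerance=0.5):
    """Fail when any frame dimension deviates from the category's
    preset size by more than the preset difference range
    (`tolerance`, in meters)."""
    return all(abs(s - p) <= tolerance
               for s, p in zip(box_size, PRESET_SIZE[category]))
```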
It should be noted that the point cloud labeling device can execute the point cloud labeling method provided by the embodiment of the present invention, and has the corresponding functional modules and beneficial effects. For technical details not described in detail in the embodiment of the point cloud labeling device, reference may be made to the point cloud labeling method provided by the embodiment of the present invention.
Fig. 11 is a schematic hardware structure of an electronic device according to an embodiment of the present invention, as shown in fig. 11, an electronic device 110 includes:
one or more processors 12 and a memory 14, one processor 12 being illustrated in fig. 11.
The processor 12 and the memory 14 may be connected by a bus or otherwise, for example in fig. 11.
The memory 14 is used as a non-volatile computer-readable storage medium for storing non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the point cloud labeling method in the embodiment of the present invention. By running the non-volatile software programs, instructions and modules stored in the memory 14, the processor 12 executes the various functional applications and data processing of the electronic device, i.e., implements the point cloud labeling method in the above embodiments.
Memory 14 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created from the point cloud labeling device usage, etc. In addition, memory 14 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 14 may optionally include memory located remotely from processor 12, which may be connected to the point cloud labeling apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Embodiments of the present invention also provide a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the point cloud labeling method in any of the method embodiments described above.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the invention, the steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A point cloud labeling method, the method comprising:
acquiring point cloud data;
processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
receiving an adjustment instruction input by a user;
adjusting parameters of the pre-marking frame according to the adjustment instruction to obtain an adjusted pre-marking frame;
carrying out automatic quality inspection on the adjusted pre-marked frame by using constraint conditions to obtain a detection result;
based on the detection result, inputting the point cloud data into a database;
the automatic quality inspection of the adjusted pre-marked frame by using the constraint condition comprises the following steps:
acquiring the category of the obstacle;
determining the preset number of point clouds corresponding to the obstacle category according to the obstacle category;
determining whether the number of point clouds in the adjusted pre-labeling frame is smaller than the preset number of point clouds, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring the category of the obstacle;
determining a preset size corresponding to the obstacle category according to the obstacle category;
determining whether the difference between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference range, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of the labeling frame;
determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of surrounding obstacle labeling frames;
converting the position coordinates of the surrounding obstacle labeling frames to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a center guide line of a moving lane;
sampling at least one of the adjusted pre-labeling frames to obtain position sampling points;
analyzing the position sampling points, determining the distance between the position sampling points and the center guide line of the moving lane, and if the distance is greater than a preset distance threshold, determining that the adjusted pre-labeling frame is unqualified.
2. The method according to claim 1, wherein the method further comprises:
pre-training an algorithm model to obtain a preset algorithm model;
and importing the preset algorithm model into a deep learning reasoning optimizer so as to optimize the preset algorithm model.
3. The method of claim 1, wherein after the obtaining the point cloud data, the method further comprises:
and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
4. The method according to claim 1, wherein the method further comprises:
marking the unqualified pre-marked frame, and rechecking.
5. The method according to any one of claims 1-4, further comprising:
and when the preset time is reached, updating the preset algorithm model.
6. A point cloud labeling apparatus, the apparatus comprising:
the acquisition module is used for acquiring the point cloud data;
the processing module is used for processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the receiving module is used for receiving an adjustment instruction input by a user;
the adjusting module is used for adjusting the parameters of the pre-marking frame according to the adjusting instruction to obtain an adjusted pre-marking frame;
the detection module is used for carrying out automatic quality inspection on the adjusted pre-marked frame by utilizing constraint conditions to obtain a detection result;
the input module is used for inputting the point cloud data into a database based on the detection result;
the detection module is specifically used for:
acquiring the category of the obstacle;
determining the preset number of point clouds corresponding to the obstacle category according to the obstacle category;
determining whether the number of point clouds in the adjusted pre-labeling frame is smaller than the preset number of point clouds, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring the category of the obstacle;
determining a preset size corresponding to the obstacle category according to the obstacle category;
determining whether the difference between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference range, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of the labeling frame;
determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of surrounding obstacle labeling frames;
converting the position coordinates of the surrounding obstacle labeling frames to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a center guide line of a moving lane;
sampling at least one of the adjusted pre-labeling frames to obtain position sampling points;
analyzing the position sampling points, determining the distance between the position sampling points and the center guide line of the moving lane, and if the distance is greater than a preset distance threshold, determining that the adjusted pre-labeling frame is unqualified.
7. An electronic device, comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
8. A non-transitory computer readable storage medium storing computer executable instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-5.
CN202110260628.5A 2021-03-10 2021-03-10 Point cloud labeling method and device and electronic equipment Active CN112990293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110260628.5A CN112990293B (en) 2021-03-10 2021-03-10 Point cloud labeling method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112990293A CN112990293A (en) 2021-06-18
CN112990293B true CN112990293B (en) 2024-03-29

Family

ID=76334807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110260628.5A Active CN112990293B (en) 2021-03-10 2021-03-10 Point cloud labeling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112990293B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449632B (en) * 2021-06-28 2023-04-07 重庆长安汽车股份有限公司 Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN113901991A (en) * 2021-09-15 2022-01-07 天津大学 3D point cloud data semi-automatic labeling method and device based on pseudo label
CN114549644A (en) * 2022-02-24 2022-05-27 北京百度网讯科技有限公司 Data labeling method and device, electronic equipment and storage medium
CN114581739B (en) * 2022-04-15 2023-04-18 长沙公信诚丰信息技术服务有限公司 Point cloud labeling method and device based on feature recognition and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750499A (en) * 2015-04-21 2015-07-01 南京大学 Constraint solving and description logic based web service combination method
CN105513129A (en) * 2016-01-15 2016-04-20 浙江中产科技有限公司 Laser 3D modeling-based automatic rod counting system
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
CN110826432A (en) * 2019-10-23 2020-02-21 南京农业大学 Power transmission line identification method based on aerial picture
CN111563450A (en) * 2020-04-30 2020-08-21 北京百度网讯科技有限公司 Data processing method, device, equipment and storage medium
CN111833358A (en) * 2020-06-26 2020-10-27 中国人民解放军32802部队 Semantic segmentation method and system based on 3D-YOLO
CN111931727A (en) * 2020-09-23 2020-11-13 深圳市商汤科技有限公司 Point cloud data labeling method and device, electronic equipment and storage medium
CN112036462A (en) * 2020-08-25 2020-12-04 北京三快在线科技有限公司 Method and device for model training and target detection
CN112347986A (en) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 Sample generation method, neural network training method, intelligent driving control method and device
CN112395962A (en) * 2020-11-03 2021-02-23 北京京东乾石科技有限公司 Data augmentation method and device, and object identification method and system


Also Published As

Publication number Publication date
CN112990293A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112990293B (en) Point cloud labeling method and device and electronic equipment
CN111149131B (en) Dividing line recognition device
CN107909012B (en) Real-time vehicle tracking detection method and device based on disparity map
CN113807333B (en) Data processing method and storage medium for detecting lane line
CN111866728B (en) Multi-site roadbed network sensing method, device, terminal and system
CN111723724B (en) Road surface obstacle recognition method and related device
CN110532961A (en) A kind of semantic traffic lights detection method based on multiple dimensioned attention mechanism network model
CN113771573A (en) Vehicle suspension control method and device based on road surface identification information
CN110135216B (en) Method and device for detecting lane number change area in electronic map and storage equipment
CN115115790B (en) Training method of prediction model, map prediction method and device
CN114296095A (en) Method, device, vehicle and medium for extracting effective target of automatic driving vehicle
Joy et al. Real time road lane detection using computer vision techniques in python
WO2018030103A1 (en) Displayed content recognition device and vehicle control device
CN117173669A (en) Picture identification method and system based on artificial intelligence
CN112883871A (en) Model training and unmanned vehicle motion strategy determining method and device
CN117110987A (en) Positioning method, device, vehicle and storage medium in tunnel
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN114359233B (en) Image segmentation model training method and device, electronic equipment and readable storage medium
CN112489466A (en) Traffic signal lamp identification method and device
JP2019117501A (en) Determination device, determination method, and determination program
EP4145399A1 (en) Method and device for plausibilising extracted lane properties using sensor data captured by at least one sensor system installed at an automated vehicle
US20220309799A1 (en) Method for Automatically Executing a Vehicle Function, Method for Evaluating a Computer Vision Method and Evaluation Circuit for a Vehicle
CN116938960B (en) Sensor data processing method, device, equipment and computer readable storage medium
EP4148619A1 (en) Method and device for lane property detection and plausibilisation using a camera system installed at an automated vehicle and comprising at least three cameras
KR20240036975A (en) Device and Method for Reviewing Labeled Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant