CN115937825B - Method and device for generating robust lane line under BEV of on-line pitch angle estimation - Google Patents
- Publication number
- CN115937825B (publication) · CN202310016576.6A (application)
- Authority
- CN
- China
- Prior art keywords
- lane line
- information
- bev
- pitch angle
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method and a device for robust lane line generation under BEV with online pitch angle estimation. Pixel-level dense segmentation is first performed on a forward-looking monocular image based on lane line information and other information, yielding the corresponding image mask information; several groups of lane lines satisfying a parallel relation are then extracted in the two-dimensional image plane according to the image mask information; an extrinsic matrix is constructed from the unknown pitch angle, the endpoints of the parallel lane lines on the image plane are back-projected into the bird's-eye view (BEV), and a cost function of the unknown pitch angle is constructed from the lane line parallel prior; finally, a grid region of interest under the BEV is constructed from given resolution and size information, the solved pitch angle is substituted in, and the lane lines under the BEV are generated in combination with the image mask information, so that lane lines are detected more effectively.
Description
Technical Field
The invention relates to the field of environment perception for unmanned ground vehicles, and in particular to a method and a device for robust lane line generation under BEV with online pitch angle estimation.
Background
Lane line detection is one of the most important perception tasks for unmanned driving and high-level driver assistance systems, providing essential information for robust real-time localization and motion planning of unmanned vehicles. Early conventional lane line detection methods were mostly designed around hand-crafted features. Because the lane line detection task depends on both texture features and high-level semantic analysis, it benefits from the strong representation capability of deep learning models; with the rapid development of deep learning, lane line detection has entered a new era of higher robustness and stronger generalization. Among these approaches, semantic-segmentation-based methods are among the most typical in the lane detection field and have attracted broad attention from researchers.
In practical applications, lane line information extracted in image space cannot be used directly for robust localization and motion planning; a bird's-eye view (BEV) image is usually generated via Inverse Perspective Mapping (IPM) to eliminate the perspective effect and provide more useful perception information to downstream algorithms. Effective IPM, however, relies on accurate camera intrinsics and on the extrinsics between the camera and the unmanned vehicle, and it must assume a rigid relationship between the camera and the ground. When the unmanned vehicle platform undergoes significant motion changes, the generated BEV image information is therefore distorted.
Disclosure of Invention
In order to overcome the shortcomings of existing lane line detection and robustly extract lane line information under BEV, the invention adopts the following technical scheme:
A method for robust lane line generation under BEV with online pitch angle estimation comprises the following steps:
step S1: based on lane line information and other information, completing pixel-level dense segmentation on the forward-looking monocular image to obtain corresponding image mask information;
step S2: extracting a plurality of groups of lane lines meeting the parallel relation in a two-dimensional image plane according to the image mask information;
step S3: constructing an extrinsic matrix in combination with the unknown pitch angle, back-projecting the endpoints of the parallel lane lines on the image plane into the bird's-eye view BEV, and constructing a cost function of the unknown pitch angle according to the lane line parallel prior information;
step S4: constructing a grid region of interest under the bird's-eye view BEV according to given resolution and size information, substituting the solved pitch angle, and generating lane lines under the bird's-eye view BEV in combination with the image mask information.
Further, the step S1 includes the steps of:
step S1.1: from the original image information, inferring the corresponding lane line class mask information, completing the pixel-level segmentation;
step S1.2: undistorting the image according to the distortion parameters obtained from monocular camera calibration, obtaining a new camera intrinsic matrix.
Further, in the step S1.1, the segmentation result of the forward-looking monocular image is obtained by online reasoning with a pure-vision Transformer-based deep learning framework, and the classes of the dense pixel-level segmentation include lane boundary lines, lane center lines, stop lines, and others.
Further, in the step S1.2, considering the influence of monocular image distortion on subsequent BEV lane line generation, before this step the monocular camera is calibrated with a checkerboard to obtain its distortion parameters (k1, k2, k3, p1, p2), where (k1, k2, k3) are the radial distortion parameters and (p1, p2) are the tangential distortion parameters; combining the camera distortion parameters, the obtained class mask information is undistorted and a new camera intrinsic matrix is obtained.
Further, the step S2 includes the steps of:
step S2.1: converting the image mask information to obtain binarized image information, and performing image refinement processing;
step S2.2: based on the image skeletonization result, extracting lane line objects by region growing, counting the pixels of each object to remove those below a given count threshold, and removing image burrs; this removes redundant lane noise to improve subsequent processing efficiency, and also suppresses the burrs introduced by the image thinning operation;
step S2.3: since the subsequent straight-line-segment feature extraction requires the pixel-level path of each lane line, given the endpoints at both ends of a lane line, obtaining its pixel-level path information with a shortest-path method;
step S2.4: segmenting the lane line with a straight-line feature extraction algorithm, representing it as a polyline, retaining the segments longer than a given length threshold, and returning several groups of straight segments satisfying the parallel relation.
Further, in the step S2.1, the lane line class mask information is traversed and the center and boundary lane lines representing the current road direction are taken as the foreground of the binary image; considering the noise in the dense image segmentation and the large number of foreground pixels, image thinning retains the lane line direction information while further reducing the number of effective pixels.
Further, in step S2.4, the pitch angle from the previous time step is used when judging the parallel relation of the current lane lines, finally yielding several groups of straight segments satisfying the parallel relation.
Further, the step S3 includes the following steps:
step S3.1: according to the unknown pitch angle, constructing an extrinsic matrix relative to the ground and, through perspective projection, projecting the two endpoints of each parallel lane line into the bird's-eye view BEV, obtaining the corresponding point information in Euclidean space;
step S3.2: constructing the vector of each corresponding lane line in BEV space from the point information, taking the cross product of two vectors as the reference quantity for quantitatively evaluating the parallel relation to obtain the lane line parallel constraint, building a cost function of the unknown pitch angle from this constraint, and solving for the pitch angle by minimizing the cost function.
Further, the step S4 includes the steps of:
step S4.1: in camera coordinates, given offset distance, resolution, and size information, constructing a grid region of interest under the BEV for subsequent BEV lane line generation;
step S4.2: projecting the center of each grid cell of the region of interest to the image plane according to the solved pitch angle and, combining the image mask information, assigning the corresponding lane line semantic class to each cell, thereby generating the lane line information under the bird's-eye view BEV.
The device for robust lane line generation under BEV with online pitch angle estimation comprises a memory and one or more processors; executable code is stored in the memory, and when executing the executable code the one or more processors implement the above method for robust lane line generation under BEV with online pitch angle estimation.
The invention has the advantages that:
the invention discloses a method and a device for generating a robust lane line under BEV of on-line pitch angle estimation, which aim to more effectively acquire lane line information under BEV by on-line estimation of pitch angle information relative to the ground. And combining lane line parallel constraint construction with respect to a cost function to solve an unknown pitch angle, and realizing robust and reliable generation of the BEV lower lane line by estimating the pitch angle on line so as to achieve the aim of detecting the BEV lower lane line information in a robust manner.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
FIG. 2 is a schematic diagram of a method according to an embodiment of the invention.
Fig. 3 is a schematic view showing a lane line segment according to an embodiment of the present invention.
FIG. 4a shows the lane lines generated under BEV without pitch angle estimation in an embodiment of the invention.
FIG. 4b shows the lane lines generated under BEV with pitch angle estimation in an embodiment of the invention.
Fig. 5 is a road information effect diagram generated by multi-frame probability accumulation in an embodiment of the present invention.
Fig. 6 is a schematic diagram of the structure of the device in the embodiment of the present invention.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
The invention discloses a robust lane line generation method for unmanned driving and high-level driver assistance systems. First, pixel-level dense segmentation is performed on a forward-looking monocular image to infer the corresponding lane line class mask information; then the lane lines are represented as polylines in the image plane to obtain several groups of straight segments satisfying a parallel relation; finally, a cost function of the unknown pitch angle is built from the lane line parallel constraint and solved. This provides a reliable and easily developed solution to the largely unaddressed problem of generating robust lane lines under BEV while overcoming pitch angle changes in the unmanned vehicle field.
As shown in fig. 1 and 2, a method for robust lane line generation under BEV with online pitch angle estimation includes the following steps:
step S1: based on lane line information and other information, pixel-level dense segmentation is completed on the forward-looking monocular image to obtain corresponding image mask information, and the method specifically comprises the following substeps:
step S1.1: from the original image information, inferring the corresponding lane line class mask information, completing the pixel-level segmentation;
In recent years, Transformer deep learning frameworks have played an increasingly important role in visual tasks. In the implementation of the invention, the segmentation result of the forward-looking monocular image is obtained by online reasoning with a pure-vision Transformer-based deep learning framework, and the dense pixel-level segmentation covers four classes of information: lane boundary lines, lane center lines, stop lines, and others.
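The patent does not fix a particular network, so the mask extraction can only be sketched: assuming a segmentation head that outputs per-pixel class logits, collapsing them into the class mask used by the later steps looks like this (class indices are hypothetical).

```python
import numpy as np

# Hypothetical class indices; the patent lists lane boundary lines,
# lane center lines, stop lines, and an "other" class.
BACKGROUND, BOUNDARY, CENTER, STOP, OTHER = 0, 1, 2, 3, 4

def logits_to_mask(logits: np.ndarray) -> np.ndarray:
    """Collapse per-pixel class logits (H, W, C) into a class-index mask (H, W)."""
    return np.argmax(logits, axis=-1).astype(np.uint8)

# Toy example: a 2x2 image with 5 classes.
logits = np.zeros((2, 2, 5), dtype=np.float32)
logits[0, 0, BOUNDARY] = 5.0   # pixel (0,0) -> boundary line
logits[0, 1, CENTER] = 3.0     # pixel (0,1) -> center line
mask = logits_to_mask(logits)
```

The mask, not the logits, is what steps S1.2 and S2 consume.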
Step S1.2: undistorting the image according to the distortion parameters obtained from monocular camera calibration, obtaining a new camera intrinsic matrix;
considering the influence of monocular image distortion on subsequent BEV lane line generation, before this substep the monocular camera is first calibrated with a checkerboard to obtain its distortion parameters (k1, k2, k3, p1, p2), where (k1, k2, k3) are the radial distortion parameters and (p1, p2) are the tangential distortion parameters; combining the camera distortion parameters, the obtained class mask information is undistorted and a new camera intrinsic matrix is obtained.
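As a sketch, the standard Brown-Conrady model behind the parameters (k1, k2, k3, p1, p2) maps an undistorted normalized image point to its distorted position as follows; in practice the undistortion itself would typically be done with OpenCV (cv2.undistort or cv2.initUndistortRectifyMap) rather than hand-rolled.

```python
import numpy as np

def distort(xn, yn, k1, k2, k3, p1, p2):
    """Apply the Brown-Conrady model to a normalized image point (xn, yn):
    radial terms (k1, k2, k3) and tangential terms (p1, p2)."""
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return xd, yd

# With all distortion coefficients zero the point is unchanged.
assert distort(0.3, -0.2, 0, 0, 0, 0, 0) == (0.3, -0.2)
```

Undistortion inverts this mapping (iteratively, since it has no closed form), which is exactly what the OpenCV routines do internally.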
Step S2: extracting a plurality of groups of lane lines meeting a parallel relation on a two-dimensional image plane according to image mask information, wherein the method specifically comprises the following sub-steps:
step S2.1: converting the image mask information to obtain binarized image information, and performing image refinement processing;
In the embodiment of the invention a binary image must be generated. Specifically, the lane line class mask information obtained in the previous step is traversed, and the center and boundary lane lines representing the current road direction are taken as the foreground of the binary image: a pixel is set to 1 if it belongs to a boundary line or center line that expresses the lane direction, and to 0 otherwise. Considering the noise in the dense image segmentation and the large number of foreground pixels, image thinning retains the lane line direction information while further reducing the number of effective pixels.
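The binarization just described can be sketched as follows (the class indices are hypothetical; the subsequent thinning step would typically use something like skimage.morphology.skeletonize on the resulting binary image):

```python
import numpy as np

BOUNDARY, CENTER = 1, 2  # hypothetical class indices for direction-bearing lines

def binarize_lane_mask(class_mask: np.ndarray) -> np.ndarray:
    """Set pixels labelled as boundary or center lane lines to 1, others to 0,
    as described for step S2.1; thinning would then be applied to this image."""
    return np.isin(class_mask, (BOUNDARY, CENTER)).astype(np.uint8)

mask = np.array([[0, 1, 3],
                 [2, 2, 0]], dtype=np.uint8)
binary = binarize_lane_mask(mask)
```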
Step S2.2: based on the image skeletonization result, extracting lane line objects by region growing, counting the pixels of each object to remove those below a given threshold, and removing image burrs;
in practical application, redundant lane noise must be removed to further improve subsequent processing efficiency, and the image burrs caused by the thinning operation must also be suppressed.
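The size-based filtering of step S2.2 can be sketched with a simple region-growing (BFS flood-fill) pass; 8-connectivity and the strict threshold semantics are assumptions, since the patent only states that objects below a given pixel count are removed.

```python
from collections import deque
import numpy as np

def remove_small_objects(binary: np.ndarray, min_pixels: int) -> np.ndarray:
    """Label 8-connected foreground components by region growing and zero out
    components with fewer than `min_pixels` pixels (step S2.2)."""
    h, w = binary.shape
    out = binary.copy()
    seen = np.zeros_like(binary, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # grow the region from the seed pixel
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) < min_pixels:
                    for y, x in comp:  # component too small: erase it
                        out[y, x] = 0
    return out

binary = np.array([[1, 1, 1, 0, 0],
                   [0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0]], dtype=np.uint8)
cleaned = remove_small_objects(binary, min_pixels=3)  # drops the lone pixel
```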
Step S2.3: given the endpoints at both ends of a lane line, obtaining its pixel-level path information with a shortest-path method;
in the subsequent straight-line-segment feature extraction, the pixel-level path corresponding to each lane line must be acquired. In the implementation of the invention, given the endpoints on both sides of a lane line, the Dijkstra shortest path algorithm is used to obtain the shortest pixel-level path between the two endpoints.
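On an unweighted pixel grid, Dijkstra's algorithm reduces to breadth-first search, so the path extraction can be sketched minimally as follows (8-connectivity is an assumption):

```python
from collections import deque
import numpy as np

def shortest_pixel_path(binary, start, goal):
    """BFS over 8-connected foreground pixels; on an unweighted grid this
    returns the same shortest path Dijkstra would."""
    h, w = binary.shape
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        y, x = q.popleft()
        if (y, x) == goal:        # reconstruct the path by walking back
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and binary[ny, nx] and (ny, nx) not in prev):
                    prev[(ny, nx)] = (y, x)
                    q.append((ny, nx))
    return None  # endpoints not connected

skeleton = np.eye(4, dtype=np.uint8)       # a diagonal 4-pixel "lane line"
path = shortest_pixel_path(skeleton, (0, 0), (3, 3))
```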
Step S2.4: segmenting the lane line obtained above with a straight-line feature extraction algorithm, representing it as a polyline, retaining the segments longer than a given length threshold, and returning several groups of straight segments satisfying the parallel relation;
through this straight-line-segment feature extraction, the lane line information can be converted into a polyline, as shown in fig. 3. Specifically, starting from the current lane line endpoints, the classical Split-and-Merge straight-line feature extraction method represents the lane line as several straight segments after a number of iterations. In practical application, the pitch angle from the previous time step is used when judging the parallel relation of the current lane lines, finally yielding several groups of straight segments satisfying the parallel relation.
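A minimal sketch of the "split" phase of Split-and-Merge follows; the merge phase, the length threshold, and the tolerance value are omitted or assumed, since the patent does not specify them.

```python
import numpy as np

def split_polyline(points, tol):
    """Recursive 'split' phase of Split-and-Merge: approximate a pixel path by
    straight segments, splitting at the point farthest from the chord whenever
    that distance exceeds `tol`. Returns the retained polyline vertices."""
    pts = np.asarray(points, dtype=float)

    def rec(i, j):
        if j <= i + 1:
            return [i, j]
        a, b = pts[i], pts[j]
        ab = b - a
        diff = pts[i + 1:j] - a
        # perpendicular distance of each intermediate point to the chord a->b
        cross = ab[0] * diff[:, 1] - ab[1] * diff[:, 0]
        d = np.abs(cross) / (np.linalg.norm(ab) + 1e-12)
        k = int(np.argmax(d))
        if d[k] > tol:                         # split at the worst point
            left = rec(i, i + 1 + k)
            right = rec(i + 1 + k, j)
            return left[:-1] + right           # shared vertex counted once
        return [i, j]

    return [tuple(map(float, pts[i])) for i in rec(0, len(pts) - 1)]

path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]   # an L-shaped pixel path
vertices = split_polyline(path, tol=0.5)           # two straight segments
```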
Step S3: combining the unknown pitch angle, construct an extrinsic matrix, back-project the endpoints of the parallel lane lines on the image plane into the bird's-eye view (BEV), and construct a cost function of the unknown pitch angle from the lane line parallel prior. This specifically comprises the following substeps:
step S3.1: according to the unknown pitch angle, construct an extrinsic matrix relative to the ground and, through perspective projection, project the two endpoints of each parallel lane line into the bird's-eye view BEV, obtaining the corresponding point information in Euclidean space;
from the unknown pitch angle, an extrinsic matrix relative to the ground can be constructed; through perspective projection the two endpoints of each pair of parallel lane lines are projected into the BEV, and the corresponding point information in Euclidean space is obtained.
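A hedged sketch of the back-projection: assuming a pinhole camera with intrinsics K, mounted at a known height above flat ground and rotated about its x-axis by the pitch angle. The axis and sign conventions below are assumptions; the patent does not fix them.

```python
import numpy as np

def pitch_rotation(theta):
    """Rotation about the camera x-axis by pitch angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def backproject_to_ground(u, v, K, theta, height):
    """Back-project pixel (u, v) onto the ground plane, assuming a camera at
    `height` above flat ground, tilted by pitch `theta` (camera frame: x right,
    y down, z forward). Returns the BEV ground point (lateral, forward)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_g = pitch_rotation(theta) @ ray_cam             # into the ground-aligned frame
    t = height / ray_g[1]           # scale so the ray meets the ground plane
    p = t * ray_g
    return np.array([p[0], p[2]])   # lateral, forward

K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
ground_pt = backproject_to_ground(50.0, 150.0, K, theta=0.0, height=2.0)
```

With zero pitch, the pixel one focal-length below the principal point lands 2 m straight ahead, as expected for a 2 m camera height; pixels at or above the horizon (ray_g[1] <= 0) never meet the ground and would need to be rejected in a full implementation.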
Step S3.2: respectively constructing vector information of corresponding lane lines under the bird's eye view BEV space through the point information, obtaining lane line parallel constraint by taking two-vector cross multiplication as reference information for quantitatively evaluating the parallel relationship, and constructing a lane line parallel constraint with unknown pitch angleCost function of (2)Obtaining an unknown pitch angle by minimizing a cost function and solving;
And respectively constructing vector information of the corresponding lane line under the BEV space by combining the point information obtained in the substeps, and taking the vector information as reference information for quantitatively evaluating the parallel relation by two-vector cross multiplication.Finally, combining multiple groups of parallel lane lines to construct cost functionAnd obtaining corresponding pitch angle information by minimizing a cost function.
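The cost construction and its minimization can be sketched as follows. The solver here is a plain dense scalar search (the patent does not name a minimizer), and the toy mapping `toy_to_bev` with its "true" pitch of 0.05 rad is a purely synthetic stand-in for the real back-projection.

```python
import numpy as np

def parallel_cost(theta, line_pairs, to_bev):
    """Lane-line parallel prior: for each pair of lane lines, back-project the
    endpoints to BEV with pitch `theta` (via the caller-supplied
    `to_bev(point, theta)`) and penalise the squared cross product of the two
    normalised direction vectors."""
    cost = 0.0
    for (a0, a1), (b0, b1) in line_pairs:
        da = to_bev(a1, theta) - to_bev(a0, theta)
        db = to_bev(b1, theta) - to_bev(b0, theta)
        da = da / np.linalg.norm(da)
        db = db / np.linalg.norm(db)
        cost += float(da[0] * db[1] - da[1] * db[0]) ** 2  # 2-D cross product
    return cost

def solve_pitch(line_pairs, to_bev, lo=-0.2, hi=0.2, n=2001):
    """Minimise the cost by dense scalar search over [lo, hi] radians."""
    thetas = np.linspace(lo, hi, n)
    costs = [parallel_cost(t, line_pairs, to_bev) for t in thetas]
    return float(thetas[int(np.argmin(costs))])

# Synthetic check: a toy perspective-like mapping whose lines are only
# parallel in BEV at the (hypothetical) true pitch of 0.05 rad.
TRUE_PITCH = 0.05
def toy_to_bev(p, theta):
    u, v = p
    return np.array([u / (1.0 + (theta - TRUE_PITCH) * v), float(v)])

pairs = [(((0, 0), (0, 1)), ((1, 0), (1, 1)))]
est = solve_pitch(pairs, toy_to_bev)
```

A shear would preserve parallelism at every pitch, which is why the toy mapping is projective-like: only a perspective-style distortion makes the parallel prior informative about the pitch.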
Step S4: given resolution and size information, construct a grid region of interest under the bird's-eye view BEV, substitute the solved pitch angle, and generate lane lines under the bird's-eye view BEV in combination with the image mask information:
step S4.1: in camera coordinates, given offset distance, resolution, and size information, construct a grid region of interest under the BEV;
given the offset distance, grid resolution, and size information, a grid region of interest is constructed under the current camera coordinate system for the subsequent generation of BEV lane lines.
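A sketch of the grid region of interest; the exact semantics of offset (forward distance to the near edge), resolution (metres per cell), and size (cell counts) are assumptions, since the patent only names the three quantities.

```python
import numpy as np

def build_bev_grid(offset, resolution, size):
    """Return the (lateral, forward) centre of every BEV cell in the camera
    ground frame: `size` = (rows, cols), rows advancing forward from `offset`,
    columns centred laterally on the camera axis."""
    rows, cols = size
    z = offset + (np.arange(rows) + 0.5) * resolution       # forward centres
    x = (np.arange(cols) - cols / 2 + 0.5) * resolution     # lateral centres
    xs, zs = np.meshgrid(x, z)
    return np.stack([xs, zs], axis=-1)    # shape (rows, cols, 2)

grid = build_bev_grid(offset=2.0, resolution=0.5, size=(4, 6))
```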
Step S4.2: according to the unknown pitch angle obtained by solvingProjecting the center of each grid of the region of interest to an image plane, and endowing the corresponding lane line semantic category to the grid by combining the image mask information, so as to generate lane line information under the bird's eye view BEV;
The method of the embodiment first completes pixel-level dense segmentation with a pure-vision Transformer deep learning framework to obtain the mask information of the forward-looking monocular image; then extracts several groups of lane lines satisfying the parallel relation in the two-dimensional image plane according to the image mask information; then, combining the unknown pitch angle, builds a cost function from the lane line parallel constraint to solve the pitch angle online; and finally constructs a grid region of interest under the BEV, substitutes the solved pitch angle, and generates the lane lines under the BEV in combination with the mask information. Figs. 4a and 4b show a typical comparison of robust lane line generation under BEV: the left image is produced without pitch angle estimation and the right image with it, and the result with pitch angle estimation is clearly superior. Fig. 5 shows road information generated by multi-frame probability accumulation.
The invention further provides an embodiment of a device for robust lane line generation under BEV with online pitch angle estimation, corresponding to the method embodiment above.
Referring to fig. 6, the device for robust lane line generation under BEV with online pitch angle estimation according to the embodiment of the invention includes a memory and one or more processors; executable code is stored in the memory, and when executing the executable code the one or more processors implement the method of the above embodiment.
The device embodiment can be applied to any apparatus with data processing capability, such as a computer. The device embodiments may be implemented by software, by hardware, or by a combination of both. Taking software implementation as an example, the device in the logical sense is formed by the processor of the apparatus reading the corresponding computer program instructions from non-volatile storage into memory. In hardware terms, fig. 6 shows a hardware structure diagram of an apparatus where the device of the invention is located; besides the processor, memory, network interface, and non-volatile storage shown in fig. 6, such an apparatus generally includes other hardware according to its actual function, which is not described here.
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the invention. Those of ordinary skill in the art will understand and implement the invention without undue burden.
The embodiment of the present invention also provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the BEV robust lane line generation method for online pitch angle estimation in the above embodiment.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any of the data processing enabled devices described in any of the previous embodiments. The computer readable storage medium may be any external storage device that has data processing capability, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), or the like, which are provided on the device. Further, the computer readable storage medium may include both internal storage units and external storage devices of any data processing device. The computer readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing apparatus, and may also be used for temporarily storing data that has been output or is to be output.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the technical solutions according to the embodiments of the present invention.
Claims (8)
1. A method for robust lane line generation under BEV with online pitch angle estimation, characterized by comprising the following steps:
step S1: based on lane line information and other information of lanes, pixel-level dense segmentation is completed on the forward-looking monocular image, and image mask information corresponding to the lane line type is obtained;
step S2: extracting a plurality of groups of lane lines meeting the parallel relation in a two-dimensional image plane according to the image mask information;
step S3: constructing an external reference matrix by combining unknown pitch angles, back projecting endpoints of parallel lane lines on an image plane to the position under a bird's eye view BEV, and constructing a cost function about the unknown pitch angles according to lane line parallel priori information, wherein the method comprises the following steps:
step S3.1: according to the unknown pitch angle, constructing an extrinsic matrix relative to the ground and, through perspective projection, projecting the two endpoints of each parallel lane line into the bird's-eye view BEV, obtaining the corresponding point information in Euclidean space;
step S3.2: constructing the vector of each corresponding lane line in bird's-eye view BEV space from the point information, taking the cross product of two vectors as the reference quantity for quantitatively evaluating the parallel relation to obtain the lane line parallel constraint, building a cost function of the unknown pitch angle from this constraint, and solving for the pitch angle by minimizing the cost function;
step S4: given resolution and size information, constructing a grid interest region under the aerial view BEV, substituting the unknown pitch angle obtained by solving, and generating lane lines under the aerial view BEV by combining image mask information, wherein the method comprises the following steps of:
step S4.1: under the camera coordinates, given offset distance, resolution and size information, constructing a grid interest region under the BEV;
step S4.2: projecting the center of each grid cell of the region of interest to the image plane according to the solved pitch angle and, combining the image mask information, assigning the corresponding lane line semantic class to each cell, thereby generating the lane line information under the bird's-eye view BEV.
2. The on-line pitch angle estimated BEV robust lane line generation method of claim 1, wherein: the step S1 includes the steps of:
step S1.1: according to the original image information, reasoning to obtain corresponding lane line type mask information, and completing pixel level segmentation;
step S1.2: undistorting the image according to the distortion parameters obtained from monocular camera calibration, obtaining a new camera intrinsic matrix.
3. The method for robust lane line generation under BEV with online pitch angle estimation according to claim 2, wherein: in the step S1.1, the segmentation result of the forward-looking monocular image is obtained by online reasoning with a pure-vision Transformer-based deep learning framework, and the classes of the dense pixel-level segmentation include lane boundary lines, lane center lines, stop lines, and others.
4. The method for robust lane line generation under BEV with online pitch angle estimation according to claim 2, wherein: in the step S1.2, the monocular camera is calibrated with a checkerboard to obtain its distortion parameters (k1, k2, k3, p1, p2), where (k1, k2, k3) represent the radial distortion parameters and (p1, p2) the tangential distortion parameters; combining the camera distortion parameters, the obtained class mask information is undistorted and a new camera intrinsic matrix is obtained.
5. The on-line pitch angle estimated BEV robust lane line generation method of claim 1, wherein: the step S2 includes the steps of:
step S2.1: converting the image mask information to obtain binarized image information;
step S2.2: based on the image skeletonizing processing result, finishing lane line object extraction by utilizing region growth, and counting the number of corresponding pixel points to remove lane line objects smaller than a given number threshold;
step S2.3: giving two side end points of the lane line, and obtaining pixel-level path information of the lane line by utilizing a shortest path method;
step S2.4: the lane line is segmented by utilizing a linear feature extraction algorithm, the lane line is expressed as a broken line segment, partial line segments with the length larger than a given length threshold value are reserved, and a plurality of groups of linear segments meeting the parallel relation are returned.
6. The method for generating robust lane lines under BEV with on-line pitch angle estimation according to claim 5, wherein: in step S2.1, the lane line class mask information is traversed, the center lane lines and boundary lane lines representing the current road direction are taken as the foreground of the binarized image, and image thinning preserves the lane line direction information while further reducing the number of valid pixels.
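The image thinning of claim 6 could, for instance, be done with Zhang-Suen thinning; the patent does not name a specific algorithm, so this is an assumed choice shown as a minimal sketch:

```python
import numpy as np

def zhang_suen_thin(mask):
    """Zhang-Suen iterative thinning: reduces a binarized lane mask to a
    roughly 1-pixel-wide skeleton, keeping its direction (step S2.1 sketch)."""
    img = (mask > 0).astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if not img[y, x]:
                        continue
                    # neighbors clockwise from north: p2..p9
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                       # nonzero neighbor count
                    a = sum(1 for i in range(8)      # 0 -> 1 transitions
                            if p[i] == 0 and p[(i + 1) % 8] == 1)
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:   # first sub-iteration conditions
                        ok = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:           # second sub-iteration conditions
                        ok = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if ok:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img
```

The deletion conditions only peel boundary pixels whose removal cannot break connectivity, so a thick lane marking shrinks to a thin centerline rather than vanishing.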
7. The method for generating robust lane lines under BEV with on-line pitch angle estimation according to claim 5, wherein: in step S2.4, the pitch angle estimated at the previous moment is applied when judging the parallel relation of the current lane lines, finally yielding the groups of straight line segments satisfying the parallel relation.
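One way to apply the previous frame's pitch angle in the parallel-relation test of claim 7 is to back-project each segment's end points to the ground plane under that pitch and compare the ground-plane directions; the camera height, intrinsics, angle tolerance, and function names below are illustrative assumptions, not the patent's concrete formulation:

```python
import numpy as np

def backproject_to_ground(pt, K, pitch, cam_height=1.5):
    """Back-project a pixel to the road plane, assuming a camera mounted
    cam_height meters above the road and rotated by pitch about its x-axis."""
    ray = np.linalg.inv(K) @ np.array([pt[0], pt[1], 1.0])
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    d = R @ ray
    t = cam_height / d[1]                 # intersect the ground plane
    return np.array([d[0] * t, d[2] * t])  # (lateral, forward) coordinates

def is_parallel(seg_a, seg_b, K, pitch, tol_deg=3.0):
    """Judge whether two image segments are parallel on the ground under
    the previous frame's pitch angle (sketch of the claim-7 idea)."""
    def ground_angle(seg):
        p0 = backproject_to_ground(seg[0], K, pitch)
        p1 = backproject_to_ground(seg[1], K, pitch)
        v = p1 - p0
        return np.arctan2(v[1], v[0])
    diff = abs(ground_angle(seg_a) - ground_angle(seg_b)) % np.pi
    diff = min(diff, np.pi - diff)        # direction-agnostic difference
    return np.degrees(diff) < tol_deg
```

Two image segments that converge toward the same vanishing point map to parallel ground lines under the correct pitch, which is exactly the prior the patent's cost function exploits.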
8. A device for generating robust lane lines under BEV with on-line pitch angle estimation, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the method for generating robust lane lines under BEV with on-line pitch angle estimation of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310016576.6A CN115937825B (en) | 2023-01-06 | 2023-01-06 | Method and device for generating robust lane line under BEV of on-line pitch angle estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115937825A CN115937825A (en) | 2023-04-07 |
CN115937825B true CN115937825B (en) | 2023-06-20 |
Family
ID=86552512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310016576.6A Active CN115937825B (en) | 2023-01-06 | 2023-01-06 | Method and device for generating robust lane line under BEV of on-line pitch angle estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115937825B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116168173B (en) * | 2023-04-24 | 2023-07-18 | 之江实验室 | Lane line map generation method, device, electronic device and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785291A (en) * | 2018-12-20 | 2019-05-21 | 南京莱斯电子设备有限公司 | A kind of lane line self-adapting detecting method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3624001B1 (en) * | 2018-09-13 | 2024-05-01 | Volvo Car Corporation | Methods and systems for parking line marker detection and pairing and parking spot detection and classification |
CN111401150B (en) * | 2020-02-27 | 2023-06-13 | 江苏大学 | Multi-lane line detection method based on example segmentation and self-adaptive transformation algorithm |
US11731639B2 (en) * | 2020-03-03 | 2023-08-22 | GM Global Technology Operations LLC | Method and apparatus for lane detection on a vehicle travel surface |
CN111652952B (en) * | 2020-06-05 | 2022-03-18 | 腾讯科技(深圳)有限公司 | Lane line generation method, lane line generation device, computer device, and storage medium |
CN114037970A (en) * | 2021-11-19 | 2022-02-11 | 中国重汽集团济南动力有限公司 | Sliding window-based lane line detection method, system, terminal and readable storage medium |
CN114399588B (en) * | 2021-12-20 | 2022-11-11 | 禾多科技(北京)有限公司 | Three-dimensional lane line generation method and device, electronic device and computer readable medium |
CN114445593B (en) * | 2022-01-30 | 2024-05-10 | 重庆长安汽车股份有限公司 | Bird's eye view semantic segmentation label generation method based on multi-frame semantic point cloud splicing |
CN114445392A (en) * | 2022-01-31 | 2022-05-06 | 重庆长安汽车股份有限公司 | Lane line-based pitch angle calibration method and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||