CN114898332A - Lane line identification method and system based on automatic driving


Info

Publication number
CN114898332A
Authority
CN
China
Prior art keywords
lane line
lane
vehicle
line recognition
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210466529.7A
Other languages
Chinese (zh)
Inventor
漆晓静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Telecommunication Polytechnic College
Original Assignee
Chongqing Telecommunication Polytechnic College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Telecommunication Polytechnic College filed Critical Chongqing Telecommunication Polytechnic College
Priority to CN202210466529.7A
Publication of CN114898332A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a lane line identification method and system based on automatic driving. The method comprises the following steps: acquiring image information; acquiring real-time positioning information and driving parameters of a vehicle, and associating the positioning information with the acquired image information; feeding the associated image into a map library accessed in advance, where a calculation based on the average gray value of the images yields an image to be identified with a clear positioning point; and feeding the image to be identified into a pre-constructed lane line identification model to obtain a lane line identification result. The lane line identification model is trained on a lane and road-marking benchmark, the benchmark covering multiple lanes, road-marking categories and a variety of severe environmental scenes. Beneficial effects: the method is robust in practical application scenarios, which further improves the accuracy of lane line detection.

Description

Lane line identification method and system based on automatic driving
Technical Field
The invention relates to the technical field of automatic driving, in particular to a lane line identification method and system based on automatic driving.
Background
Lane line detection is a basic function in automatic driving: during automatic driving, a vehicle must detect the lane lines on the road. For lane line identification, conventional detection methods generally combine edge detection with the Hough transform, which is suitable when the image is clear and the lane lines are unobstructed. In practical application scenarios, however, lane line detection faces highly diverse conditions; under the interference of severe environments and surrounding obstacles, recognition reliability drops, which leads to the defect of low detection accuracy.
Disclosure of Invention
In view of the above technical defects in the prior art, the embodiments of the present invention aim to provide a lane line identification method and system based on automatic driving, so as to improve the accuracy of lane line detection.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides an automatic driving-based lane line identification method, where the method includes:
acquiring image information; wherein the image information comprises results detected by a perception layer disposed on a vehicle;
acquiring real-time positioning information and driving parameters of a vehicle, and associating the positioning information with the acquired image information;
sending the associated image into a map library accessed in advance, and performing a calculation based on the average gray value of the images so as to obtain an image to be identified with a clear positioning point;
sending the image to be identified into a pre-constructed lane line identification model for identification so as to obtain a lane line identification result; the lane line identification model is trained on a lane and road-marking benchmark, and the benchmark comprises multiple lanes, road-marking categories and a variety of severe environmental scenes.
Preferably, the method further comprises:
acquiring an operation signal of the vehicle, and combining the acquired running parameters to construct an actual running track of the vehicle;
analyzing a predicted driving track of the vehicle based on the driving parameters and the lane line identification result;
matching the predicted running track with the actual running track to judge whether the tracks of the predicted running track and the actual running track are within a preset deviation range;
and if the deviation range is exceeded, taking the corresponding lane line recognition result as training data to optimize the lane line recognition model.
Preferably, in the training of the lane line recognition model, each lane is trained as an instance of the lane line recognition model, and image-based learned perspective transformation is performed to realize recognition of lane changes.
Preferably, the method further comprises:
establishing a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmitting the lane parameters output by the lane shape model into a lane line recognition model so as to realize accurate positioning of the lane line.
In a second aspect, an embodiment of the present invention further provides an automatic driving-based lane line identification system, including:
the image acquisition module is used for acquiring image information; wherein the image information comprises results detected by a perception layer disposed on a vehicle;
the information association module is used for acquiring real-time positioning information and driving parameters of the vehicle and associating the positioning information with the acquired image information;
the preprocessing module is used for sending the associated image into a map library accessed in advance and performing a calculation based on the average gray value of the images so as to obtain an image to be identified with a clear positioning point;
the recognition module is used for sending the image to be recognized into a pre-constructed lane line recognition model for recognition so as to obtain a lane line recognition result; the lane line recognition model is obtained by training based on lane and road mark reference, and the reference comprises a plurality of lanes, road mark categories and a plurality of different severe environment scenes.
Preferably, the lane line recognition system based on automatic driving further includes an optimization module, and the optimization module is configured to:
acquiring an operation signal of the vehicle, and combining the acquired running parameters to construct an actual running track of the vehicle;
analyzing a predicted driving track of the vehicle based on the driving parameters and the lane line identification result;
matching the predicted running track with the actual running track to judge whether the tracks of the predicted running track and the actual running track are within a preset deviation range;
and if the deviation range is exceeded, taking the corresponding lane line recognition result as training data to optimize the lane line recognition model.
Preferably, in the training of the lane line recognition model, each lane is trained as an instance of the lane line recognition model, and image-based learned perspective transformation is performed to realize recognition of lane changes.
Preferably, the identification module is further configured to:
establishing a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmitting the lane parameters output by the lane shape model into a lane line recognition model so as to realize accurate positioning of the lane line.
Embodiments of the present invention also provide an automatic driving-based lane line recognition system comprising a perception layer disposed on a vehicle and an electronic device. The electronic device includes one or more processors, one or more input devices, one or more output devices and a memory, interconnected through a bus; the memory stores a computer program comprising program instructions, and the processors are configured to call the program instructions to execute the method steps according to the first aspect.
By implementing the embodiment of the invention, the image information is associated with the positioning information of the vehicle, so that a clear image of the position is found in the map library and then sent to the constructed lane line identification model for identification. Because the model is trained on a lane and road-marking benchmark covering multiple lanes, road-marking categories and a variety of severe environmental scenes, the lane line identification method is robust in practical application scenarios, which further improves the accuracy of lane line detection.
Drawings
In order to more clearly illustrate the detailed description of the invention and the technical solutions in the prior art, the drawings needed for the detailed description and the prior art are briefly introduced below.
Fig. 1 is a flowchart of a lane line identification method based on automatic driving according to an embodiment of the present invention;
fig. 2 is a block diagram of a lane line recognition system based on automatic driving according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device in another lane line identification system based on automatic driving according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a lane line identification method based on automatic driving, where the method includes:
s101, acquiring image information; wherein the image information comprises results detected by a perception layer disposed on the vehicle.
Specifically, the perception layer comprises various sensors, including a laser radar, ultrasonic sensors, millimeter-wave radar, a vehicle-mounted camera, a GPS receiver, an IMU, an accelerometer, a gyroscope, and the like.
S102, acquiring real-time positioning information and driving parameters of the vehicle, and associating the positioning information with the acquired image information.
Specifically, the positioning information may include longitude and latitude; the driving parameters include ECU information of the vehicle, such as the current steering angle, linear velocity, direction of motion, gear position, clutch and brake states, as well as GPS, IMU, accelerometer and gyroscope data, and the like.
Associating the collected image with the current position makes the position of the collected image more accurate and avoids interference from similar pictures taken at other locations.
S103, sending the associated image into a map library accessed in advance, and performing a calculation based on the average gray value of the images so as to obtain an image to be identified with a clear positioning point.
Specifically, in this embodiment the map library includes a high-precision map (e.g., NavNet) and historical image information collected in the past; a clear image corresponding to the current positioning point is matched by a calculation based on the average gray value of the images. Because the positioning information is associated with the historical images, even when the road has changed, the high-precision map has not been updated in time, or the current picture is insufficiently clear owing to environmental influences, a clear image of the positioning point can still be found among the historical images and used as a recognition reference, which improves the accuracy of subsequent recognition.
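As an illustrative sketch only (not part of the patent text), the average-gray-value matching described above might look like the following. The map-library structure, the representation of images as flat lists of 0-255 gray values, and the `tolerance` parameter are all assumptions made for the example:

```python
def average_gray(pixels):
    """Mean gray value of a flat list of 0-255 gray pixels."""
    return sum(pixels) / len(pixels)

def match_reference(current, library, tolerance=10.0):
    """Return the name of the library image whose average gray value is
    closest to the current frame's, provided the difference stays within
    `tolerance`; return None when no historical image is close enough."""
    cur = average_gray(current)
    best, best_diff = None, tolerance
    for name, pixels in library.items():
        diff = abs(average_gray(pixels) - cur)
        if diff <= best_diff:
            best, best_diff = name, diff
    return best

# Hypothetical map library: one clear historical frame, one dark frame.
library = {
    "hist_frame_a": [120, 125, 130, 128],  # clear historical image
    "hist_frame_b": [40, 45, 50, 42],      # low-visibility image
}
print(match_reference([118, 126, 131, 127], library))  # hist_frame_a
```

A real implementation would compare gray statistics of full-resolution frames (possibly per window) rather than whole-image means, but the selection logic is the same.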
S104, sending the image to be identified into a pre-constructed lane line identification model for identification so as to obtain a lane line identification result; the lane line identification model is trained on a lane and road-marking benchmark, and the benchmark comprises multiple lanes, road-marking categories and a variety of severe environmental scenes.
Specifically, the image to be recognized is also preprocessed, for example by noise reduction. The lane line recognition model adopts a deep learning algorithm that treats the lane detection problem as a segmentation task and is trained for pixel-wise segmentation: given an input image, the output is a segmentation map with a prediction for every pixel. During training, the training data set comprises image samples taken in severe weather and covering multiple lanes and road-marking categories, and several task networks are trained to adapt to these scenes. The road markings include the various marked lines, arrows and so on painted on the road; severe weather includes common cases such as heavy rain, night and fog.
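To make the per-pixel output format concrete, here is a deliberately minimal stand-in for the segmentation network described above. The patent specifies a deep learning model; the brightness-threshold classifier below is a hypothetical substitute that merely shows the shape of the output, a segmentation map with one lane/background prediction per pixel:

```python
def segment(image, threshold=180):
    """Toy per-pixel predictor: bright pixels (painted lane markings)
    are labelled 1, everything else 0 -- a stand-in for the CNN's
    per-pixel prediction, keeping the same input/output shape."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# A 2x6 gray "road" image: two bright vertical lane markings.
road = [
    [60, 200, 55, 58, 210, 62],
    [61, 205, 57, 59, 215, 60],
]
for row in segment(road):
    print(row)  # [0, 1, 0, 0, 1, 0] for each row
```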
Further, in order to cope with severe weather conditions, for example when fog occurs, denoising processing is also performed on the image to be recognized; for example, a defogging algorithm based on the dark channel prior is combined with the atmospheric scattering model to estimate the atmospheric light and the transmittance, from which the defogged image to be identified is obtained.
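The atmospheric scattering model referred to above, I(x) = J(x)·t(x) + A·(1 − t(x)), can be inverted per pixel once the atmospheric light A and transmittance t(x) are estimated. A minimal sketch follows; the clamping constant t0 = 0.1 follows common practice for dark-channel defogging and is an assumption here, as is the simplification of the dark channel to a per-pixel channel minimum without the local window:

```python
def dark_channel(rgb_pixels):
    """Per-pixel dark channel: the minimum over the colour channels
    (the local window minimum is omitted for brevity)."""
    return [min(p) for p in rgb_pixels]

def recover_radiance(i, airlight, t, t0=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) for one gray
    pixel; t is clamped below by t0 to avoid amplifying noise."""
    t = max(t, t0)
    return (i - airlight) / t + airlight

# A foggy pixel synthesised from scene radiance J=100, A=200, t=0.5:
foggy = 100 * 0.5 + 200 * (1 - 0.5)            # 150.0
print(recover_radiance(foggy, 200, 0.5))       # 100.0, the original J
```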
In another embodiment, to further enhance the recognition effect, the method further includes:
acquiring an operation signal of the vehicle, and combining the acquired running parameters to construct an actual running track of the vehicle;
analyzing a predicted driving track of the vehicle based on the driving parameters and the lane line identification result;
matching the predicted running track with the actual running track to judge whether the tracks of the predicted running track and the actual running track are within a preset deviation range;
and if the deviation range is exceeded, taking the corresponding lane line recognition result as training data to optimize the lane line recognition model.
Comparing the two driving tracks verifies whether the lane line was identified accurately. For example, if the identified lane line is straight, the predicted trajectory should be a straight line with the speed maintained or increased; if instead the vehicle's operation signals show deceleration and steering, the gyroscope senses the change of angle, and the trajectories formed before and after the change exceed the allowed deviation, the current result is considered an abnormal processing result. Retraining on these abnormal results optimizes the model and improves recognition accuracy.
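A hedged sketch of the deviation check described above; the representation of trajectories as equally sampled (x, y) points and the `max_dev` threshold are assumptions for illustration, not values from the patent:

```python
import math

def mean_deviation(predicted, actual):
    """Mean point-wise Euclidean distance between two trajectories
    sampled at the same instants."""
    return sum(math.dist(p, a) for p, a in zip(predicted, actual)) / len(predicted)

def is_abnormal(predicted, actual, max_dev=0.5):
    """True when the tracks diverge beyond the preset deviation range,
    flagging the lane line result for use as retraining data."""
    return mean_deviation(predicted, actual) > max_dev

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]        # predicted: straight lane
steered  = [(0, 0), (1, 0.2), (2, 0.8), (3, 1.6)]  # actual: vehicle steering
print(is_abnormal(straight, steered))              # True
```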
According to the above scheme, the image information is associated with the positioning information of the vehicle, so that a clear image of the position is found in the map library and then sent to the constructed lane line identification model for identification. Because the model is trained on a lane and road-marking benchmark covering multiple lanes, road-marking categories and a variety of severe environmental scenes, the lane line identification method is robust in practical application scenarios and improves the accuracy of lane line detection.
Further, considering that the automobile may change lanes while moving, when the lane line recognition model is trained each lane is also treated as an instance of the model, and image-based learned perspective transformation is performed, so that lane changes can be recognized.
Specifically, instance training converts the lane detection problem into an instance segmentation problem: each lane forms its own instance, enabling end-to-end training, and the segmented lane instances are parameterized before lane fitting is performed.
The learned perspective transformation may adopt an affine transformation, i.e., in geometry, a linear transformation of one vector space followed by a translation, mapping it into another vector space. An affine transformation preserves straightness (a straight line remains a straight line after transformation) and parallelism (the relative positions of two-dimensional figures are unchanged: parallel lines remain parallel and the order of points on a line is preserved). Lane identification is therefore not limited to detecting a fixed number of lanes and can handle a variable lane count and lane changes.
Further, to improve the efficiency of processing the lane parameters, in another embodiment, the method further comprises:
establishing a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmitting the lane parameters output by the lane shape model into a lane line recognition model so as to realize accurate positioning of the lane line.
Specifically, the angle at which the image information is acquired is that of the acquisition device. The lane shape model adopts a multi-attention model; the network built from it learns richer structure and context, which improves processing timeliness, and the physical parameters it outputs are used to locate the lane line accurately. It should be noted that the application field of the above technical solution is not limited to automatic driving: it also applies to any scene that requires lane line identification, and as a lane line identification method it is the same as described above and is not repeated here.
Based on the same inventive concept, an embodiment of the present invention further provides an automatic driving-based lane line recognition system, as shown in fig. 2, including:
the image acquisition module is used for acquiring image information; wherein the image information comprises results detected by a perception layer disposed on a vehicle;
the information association module is used for acquiring real-time positioning information and driving parameters of the vehicle and associating the positioning information with the acquired image information;
the preprocessing module is used for sending the associated image into a map library accessed in advance and performing a calculation based on the average gray value of the images so as to obtain an image to be identified with a clear positioning point;
the recognition module is used for sending the image to be recognized into a pre-constructed lane line recognition model for recognition so as to obtain a lane line recognition result; the lane line recognition model is obtained by training based on lane and road mark reference, and the reference comprises a plurality of lanes, road mark categories and a plurality of different severe environment scenes.
Further, in another embodiment, the lane line recognition system based on automatic driving further includes an optimization module, where the optimization module is configured to:
acquiring an operation signal of the vehicle, and combining the acquired running parameters to construct an actual running track of the vehicle;
analyzing a predicted driving track of the vehicle based on the driving parameters and the lane line identification result;
matching the predicted running track with the actual running track to judge whether the tracks of the predicted running track and the actual running track are within a preset deviation range;
and if the deviation range is exceeded, taking the corresponding lane line recognition result as training data to optimize the lane line recognition model.
When the lane line recognition model is applied, each lane is treated as an instance of the model during training, and image-based learned perspective transformation is performed to realize lane change recognition.
Meanwhile, the identification module is further configured to:
establishing a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmitting the lane parameters output by the lane shape model into a lane line recognition model so as to realize accurate positioning of the lane line.
In another embodiment, the lane line recognition system based on automatic driving further includes a fusion processing module, and the fusion processing module is configured to:
acquiring navigation information generated by a vehicle-mounted navigation unit; the navigation information comprises a destination, a departure place and a navigation track;
fusing the navigation information with the current positioning information and a lane line identification result obtained based on the current image to judge whether deviation exists or not;
if a deviation exists, trajectory prediction is performed based on the currently obtained lane line identification result and the driving parameters so as to re-plan the navigation. In this way, combining the identified lane line makes it possible to determine more accurately which lane the vehicle occupies and to plan the navigation track according to the user's intention. Likewise, the role of the fusion processing module can also be applied to the foregoing method embodiments. The scheme of this system embodiment is not limited to automatic driving and also applies to any lane line identification system that needs to identify lane lines, the two schemes being the same.
It should be noted that, for a more specific workflow of the recognition system, please refer to the foregoing method embodiment, which is not described herein again.
According to the above scheme, a deep learning algorithm treats lane detection as a segmentation task and performs pixel-wise segmentation training; during training each lane is treated as an instance, and image-based learned perspective transformation is performed to recognize lane changes. Meanwhile, a lane shape model is established based on the road structure and the angle at which the image information is obtained, so that lane lines are located accurately; signals of the vehicle are introduced for verification, and the recognition model is optimized when an abnormality is recognized.
In this embodiment, in a preferred embodiment of the present application, an automatic driving-based lane line recognition system includes a sensing layer disposed on a vehicle and an electronic device, as shown in fig. 3, the electronic device may include: one or more processors 101, one or more input devices 102, one or more output devices 103, and memory 104, the processors 101, input devices 102, output devices 103, and memory 104 being interconnected via a bus 105. The memory 104 is used for storing a computer program comprising program instructions, the processor 101 being configured for invoking the program instructions for performing the aforementioned method steps.
It should be understood that, in the embodiment of the present invention, the processor 101 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (such as an LCD), a speaker, or the like.
The memory 104 may include read-only memory and random access memory, and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store device type information.
In a specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiments of the present invention may execute the implementation described in the embodiments of the method provided in the embodiments of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative modules and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention.

Claims (9)

1. An automatic driving-based lane line identification method is characterized by comprising the following steps:
acquiring image information; wherein the image information comprises results detected by a perception layer disposed on a vehicle;
acquiring real-time positioning information and driving parameters of a vehicle, and associating the positioning information with the acquired image information;
sending the associated images into a map library which is accessed in advance to carry out calculation based on the average gray value of the images so as to obtain the images to be identified with clear positioning points;
sending the image to be identified into a pre-constructed lane line identification model for identification so as to obtain a lane line identification result; the lane line recognition model is obtained by training based on lane and road mark reference, and the reference comprises a plurality of lanes, road mark categories and a plurality of different severe environment scenes.
2. The lane line recognition method based on automatic driving according to claim 1, wherein the method further comprises:
acquiring an operation signal of the vehicle, and combining the acquired running parameters to construct an actual running track of the vehicle;
analyzing a predicted driving track of the vehicle based on the driving parameters and the lane line identification result;
matching the predicted running track with the actual running track to judge whether the tracks of the predicted running track and the actual running track are within a preset deviation range;
and if the deviation range is exceeded, taking the corresponding lane line recognition result as training data to optimize the lane line recognition model.
3. The lane line recognition method based on automatic driving as claimed in claim 1 or 2, wherein the lane line recognition model is trained by taking each lane as its own instance and performing image-based learning perspective transformation to realize recognition of lane change.
4. The lane line recognition method based on automatic driving according to claim 3, wherein the method further comprises:
establishing a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmitting the lane parameters output by the lane shape model into a lane line recognition model so as to realize accurate positioning of the lane line.
5. An automatic driving-based lane line recognition system, comprising:
the image acquisition module is used for acquiring image information; wherein the image information comprises results detected by a perception layer disposed on a vehicle;
the information association module is used for acquiring real-time positioning information and driving parameters of the vehicle and associating the positioning information with the acquired image information;
the preprocessing module is used for sending the associated images into a pre-accessed map library and performing a calculation based on the average gray value of the images, so as to obtain images to be recognized with clear positioning points;
the recognition module is used for sending the images to be recognized into a pre-constructed lane line recognition model for recognition, so as to obtain a lane line recognition result; wherein the lane line recognition model is trained on a lane and road marking benchmark, the benchmark comprising a plurality of lane and road marking categories and a plurality of different severe-environment scenes.
6. The lane line recognition system based on automatic driving according to claim 5, further comprising an optimization module configured to:
acquire an operation signal of the vehicle, and combine it with the acquired driving parameters to construct an actual driving track of the vehicle;
analyze a predicted driving track of the vehicle based on the driving parameters and the lane line recognition result;
match the predicted driving track against the actual driving track to judge whether the deviation between the two tracks is within a preset range;
and if the deviation exceeds the preset range, take the corresponding lane line recognition result as training data to optimize the lane line recognition model.
7. The lane line recognition system based on automatic driving according to claim 6, wherein the lane line recognition model is trained by treating each lane as its own instance and performing a learned, image-based perspective transformation, so as to realize recognition of lane changes.
8. The lane line recognition system based on automatic driving according to claim 6, wherein the recognition module is further configured to:
establish a lane shape model based on the road structure and the angle of the acquired image information;
and during recognition, transmit the lane parameters output by the lane shape model into the lane line recognition model, so as to realize accurate positioning of the lane lines.
9. A lane line recognition system based on automatic driving, comprising a perception layer deployed on a vehicle and an electronic device, characterized in that the electronic device comprises one or more processors, one or more input devices, one or more output devices and a memory, the processors, input devices, output devices and memory being interconnected by a bus; the memory is used for storing a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method steps of any one of claims 1-4.
CN202210466529.7A 2022-04-29 2022-04-29 Lane line identification method and system based on automatic driving Pending CN114898332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210466529.7A CN114898332A (en) 2022-04-29 2022-04-29 Lane line identification method and system based on automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210466529.7A CN114898332A (en) 2022-04-29 2022-04-29 Lane line identification method and system based on automatic driving

Publications (1)

Publication Number Publication Date
CN114898332A true CN114898332A (en) 2022-08-12

Family

ID=82719333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210466529.7A Pending CN114898332A (en) 2022-04-29 2022-04-29 Lane line identification method and system based on automatic driving

Country Status (1)

Country Link
CN (1) CN114898332A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117268424A (en) * 2023-11-21 2023-12-22 湖南仕博测试技术有限公司 Multi-sensor fusion automatic driving hunting method and device
CN117268424B (en) * 2023-11-21 2024-02-09 湖南仕博测试技术有限公司 Multi-sensor fusion automatic driving hunting method and device

Similar Documents

Publication Publication Date Title
CN111626208B (en) Method and device for detecting small objects
JP7461720B2 (en) Vehicle position determination method and vehicle position determination device
CN109887033B (en) Positioning method and device
CN110758246B (en) Automatic parking method and device
US10552982B2 (en) Method for automatically establishing extrinsic parameters of a camera of a vehicle
US9286524B1 (en) Multi-task deep convolutional neural networks for efficient and robust traffic lane detection
CN109815300B (en) Vehicle positioning method
US11371851B2 (en) Method and system for determining landmarks in an environment of a vehicle
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN110969055B (en) Method, apparatus, device and computer readable storage medium for vehicle positioning
US20210174113A1 (en) Method for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor, and apparatus for performing the same
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
US11335099B2 (en) Proceedable direction detection apparatus and proceedable direction detection method
US11403947B2 (en) Systems and methods for identifying available parking spaces using connected vehicles
CN111339802A (en) Method and device for generating real-time relative map, electronic equipment and storage medium
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111256693A (en) Pose change calculation method and vehicle-mounted terminal
CN111401255B (en) Method and device for identifying bifurcation junctions
JP2021149863A (en) Object state identifying apparatus, object state identifying method, computer program for identifying object state, and control apparatus
CN114694111A (en) Vehicle positioning
Jiménez et al. Improving the lane reference detection for autonomous road vehicle control
CN114898332A (en) Lane line identification method and system based on automatic driving
Cremean et al. Model-based estimation of off-highway road geometry using single-axis ladar and inertial sensing
CN111539305B (en) Map construction method and system, vehicle and storage medium
CN114821494B (en) Ship information matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination