CN113298050B - Lane line recognition model training method and device and lane line recognition method and device


Info

Publication number
CN113298050B
CN113298050B (application CN202110822081.3A)
Authority
CN
China
Prior art keywords
lane line
image
lane
loss function
feature extraction
Prior art date
Legal status
Active
Application number
CN202110822081.3A
Other languages
Chinese (zh)
Other versions
CN113298050A (en)
Inventor
康含玉 (Kang Hanyu)
张海强 (Zhang Haiqiang)
李成军 (Li Chengjun)
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202110822081.3A
Publication of CN113298050A
Application granted
Publication of CN113298050B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/23 Clustering techniques
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a lane line recognition model training method and device and a lane line recognition method and device. The training method comprises the following steps: acquiring a lane line image; marking the lane line category of each lane line in the lane line image according to the relative positional relationships between the lane lines in the image, to obtain a marked lane line image; performing lane line recognition on the marked lane line image with a lane line recognition model, to obtain a lane line recognition result and a loss function value; and updating the parameters of the lane line recognition model with the loss function value. Because the lane line images used for model training are relabeled in this way, the influence of the vehicle's position on lane line recognition is avoided, and the trained lane line recognition model recognizes lane lines robustly and accurately in scenarios such as turning and imminent lane changes.

Description

Lane line recognition model training method and device and lane line recognition method and device
Technical Field
The application relates to the technical field of deep learning, in particular to a lane line recognition model training method and device and a lane line recognition method and device.
Background
Lane line recognition is an important component of the perception module in the field of autonomous driving, and lane line detection with visual algorithms is one of the most common solutions. A visual detection scheme relies mainly on an image algorithm: it detects the lane line regions in an image and assigns different lane lines to different categories, so that the vehicle can automatically distinguish lanes and related information while driving.
Existing lane line recognition methods comprise traditional methods and deep learning methods. Traditional methods are mostly based on fitting and segmenting features such as edges; the extracted features are not robust to changes in lighting and road conditions, and recognition is often unsatisfactory in complex scenes.
Many deep-learning-based lane line recognition methods have since been developed; their networks extract complex, robust features, and great progress has been made in speed and accuracy. However, existing deep learning methods either identify all lane lines as a single category, or lack targeted treatment of common complex scenarios such as vehicle lane changes, where they perform poorly. For example, when the vehicle is about to change lanes or is turning, the lane lines on the two sides of the vehicle's field of view are assigned to different categories, so the same lane line can be split in two and lane line recognition accuracy drops.
Disclosure of Invention
The embodiment of the application provides a lane line recognition model training method and device and a lane line recognition method and device, so as to improve the recognition effect of the lane line recognition model.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a lane line recognition model training method, where the method includes:
acquiring a lane line image;
marking the lane line type of each lane line in the lane line image according to the relative position relationship between the lane lines in the lane line image to obtain a marked lane line image;
carrying out lane line recognition on the marked lane line image by using a lane line recognition model to obtain a lane line recognition result and a loss function value;
and updating the parameters of the lane line identification model by using the loss function values.
Optionally, the marking of the lane line category of each lane line in the lane line image according to the relative positional relationship between the lane lines in the image includes:
converting the lane line image into a lane line binary image;
marking the lane line type of each lane line in the lane line binary image according to the relative position relation of each lane line in the lane line binary image.
Optionally, the performing lane line recognition on the marked lane line image by using the lane line recognition model to obtain a lane line recognition result and a loss function value includes:
carrying out feature extraction on the marked lane line image by using the lane line identification model to obtain a feature extraction result;
and calculating a loss function value of the feature extraction result by using a preset loss function according to the feature extraction result.
Optionally, the feature extraction result includes a feature map of a lane line image and a lane line position, the preset loss function includes a first loss function and a second loss function, and calculating the loss function value of the feature extraction result by using the preset loss function according to the feature extraction result includes:
calculating a loss function value of the feature map of the lane line image by using the first loss function according to the feature map of the lane line image;
and calculating a loss function value of the lane line position by using the second loss function according to the lane line position.
Optionally, the acquiring the lane line image includes:
randomly generating blocking blocks in the lane line image, wherein the blocking blocks are used for randomly blocking the lane lines in the lane line image.
In a second aspect, an embodiment of the present application further provides a lane line identification method, where the method includes:
acquiring a lane line image to be identified;
carrying out feature extraction on the lane line image to be identified by using a lane line identification model to obtain a feature extraction result;
clustering the feature extraction results by using a clustering algorithm, and obtaining lane line identification results according to the clustering results;
the lane line recognition model is obtained by training based on any one of the lane line recognition model training methods.
Optionally, the feature extraction result includes a feature map of a lane line image to be identified and a lane line position, and the clustering the feature extraction result by using a clustering algorithm and obtaining a lane line identification result according to the clustering result includes:
performing mask operation on the characteristic diagram of the lane line image to be identified and the lane line position by using a mask algorithm to obtain a characteristic diagram of a lane line point;
and clustering the characteristic graph of the lane line points by using the clustering algorithm to obtain a point clustering result of the lane line as the lane line identification result.
In a third aspect, an embodiment of the present application further provides a lane line recognition model training device, where the device is configured to implement any one of the above lane line recognition model training methods.
In a fourth aspect, an embodiment of the present application further provides a lane line identification device, where the device is configured to implement any one of the lane line identification methods described above.
In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs, which when executed by an electronic device including a plurality of application programs, cause the electronic device to perform any one of the lane line identification model training methods described above, or perform any one of the lane line identification methods described above.
The technical scheme adopted by the embodiments of the application can achieve at least the following beneficial effects. The lane line recognition model training method first acquires a lane line image; it marks the lane line category of each lane line according to the relative positional relationships between the lane lines in the image, obtaining a marked lane line image; it then performs lane line recognition on the marked image with a lane line recognition model, obtaining a recognition result and a loss function value; and it finally updates the parameters of the model with the loss function value. Because the lane line images used for model training are relabeled in this way, the influence of the vehicle's position on lane line recognition is avoided, and the trained model recognizes lane lines robustly and accurately in scenarios such as turning and imminent lane changes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a lane line identification model training method in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a U-Net network according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an effect of a blocking block to randomly block a lane line in the embodiment of the present application;
fig. 4 is a schematic flowchart of a lane line identification method in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a lane line identification model training apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a lane line identification apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The reason why models trained with the traditional lane line recognition training method make recognition errors when a vehicle changes lanes or turns is, in most cases, that the training data are labeled with the vehicle as the reference point: lane lines on the two sides of the vehicle are assigned to different categories, so the model is strongly constrained by the relative positions of the vehicle and the lane lines.
Based on this, an embodiment of the present application provides a method for training a lane line recognition model, and as shown in fig. 1, a flow diagram of the method for training a lane line recognition model in the embodiment of the present application is provided, where the method at least includes the following steps S110 to S140:
step S110, a lane line image is acquired.
When the lane line recognition model is trained, a certain number of lane line images need to be obtained first and serve as original training samples. The specific method for acquiring the lane line image may be flexibly set by a person skilled in the art according to actual requirements, and is not specifically limited herein.
And step S120, marking the lane line type of each lane line in the lane line image according to the relative position relationship between the lane lines in the lane line image to obtain a marked lane line image.
Different from the prior art, when marking the category of each lane line in the lane line image, the embodiment of the present application marks according to the relative positional relationship of the lane lines within the image, regardless of their positions relative to the vehicle. The advantage of such marking is that no matter how the vehicle drives (changing lanes, turning, and so on), the lane line recognition model can accurately distinguish lane lines of the same or different categories, and the same lane line is never split into two categories.
For example, if there are four lane lines A1, A2, A3 and A4 in the lane line image A, and their relative positions in the image run from left to right in that order, then lane line A1 may be labeled category 1, lane line A2 category 2, lane line A3 category 3, and lane line A4 category 4.
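This relative-position rule can be sketched as follows (a minimal illustration; the helper name and data layout are assumptions, not from the patent): lane lines are sorted by the mean horizontal pixel coordinate of their points and numbered from left to right, with no reference to the vehicle.

```python
import numpy as np

def label_lanes_left_to_right(lane_pixel_xs):
    """Assign categories 1..N to lane lines ordered by mean x (left to right).

    lane_pixel_xs: dict mapping an arbitrary lane id to an array of the
    x-coordinates of that lane line's pixels in the image.
    Returns a dict mapping lane id -> category number.
    """
    # Sort lane ids by the mean horizontal position of their pixels.
    order = sorted(lane_pixel_xs, key=lambda k: float(np.mean(lane_pixel_xs[k])))
    return {lane_id: i + 1 for i, lane_id in enumerate(order)}

# Four lane lines with arbitrary ids; the categories depend only on the
# image-relative positions, never on where the vehicle is.
lanes = {
    "a3": np.array([700, 710, 705]),
    "a1": np.array([100, 110, 105]),
    "a4": np.array([900, 905, 910]),
    "a2": np.array([400, 395, 405]),
}
print(label_lanes_left_to_right(lanes))  # {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4}
```

Because the mapping is recomputed per image, the labels stay stable however the camera (and hence the vehicle) moves between frames.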
And step S130, carrying out lane line identification on the marked lane line image by using a lane line identification model to obtain a lane line identification result and a loss function value.
Step S140, updating the parameters of the lane line identification model using the loss function values.
After the marked lane line image is obtained, it can be input into the lane line recognition model for lane line recognition, yielding a lane line recognition result and a loss function value. Finally, the loss function value is used to back-propagate updates to the parameters of the lane line recognition model, producing the trained model.
In this way, because the lane line images used for model training are relabeled, the influence of the vehicle position on lane line recognition is avoided, and the trained lane line recognition model recognizes lane lines robustly and accurately in scenarios such as turning and imminent lane changes.
In an embodiment of the application, the marking the lane line category of each lane line in the lane line image according to a relative positional relationship between the lane lines in the lane line image includes: converting the lane line image into a lane line binary image; marking the lane line type of each lane line in the lane line binary image according to the relative position relation of each lane line in the lane line binary image.
When marking the lane line category of each lane line, the embodiment of the application may apply image binarization to the original lane line image, converting it into a lane line binary image, and then mark the lane line category of each lane line according to the relative positional relationship of the lane lines in the binary image.
Image binarization is the process of setting the gray value of every pixel in an image to 0 or 255, so that the whole image presents a clear black-and-white appearance. Binarizing the lane line image simplifies it and reduces the data volume, while highlighting the contours of the lane lines and facilitating subsequent processing.
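A minimal sketch of such a binarization (plain numpy with a fixed global threshold; the patent does not specify the thresholding method, so the threshold value of 128 is an assumption):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set each pixel to 255 if its gray value exceeds the threshold, else 0."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

gray = np.array([[10, 200],
                 [130, 50]], dtype=np.uint8)
print(binarize(gray))  # [[  0 255]
                       #  [255   0]]
```

In practice an adaptive threshold (e.g. Otsu's method) would likely be used instead of a fixed one, since road images vary widely in brightness.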
In an embodiment of the application, the performing lane line recognition on the marked lane line image by using a lane line recognition model to obtain a lane line recognition result and a loss function value includes: carrying out feature extraction on the marked lane line image by using the lane line identification model to obtain a feature extraction result; and calculating a loss function value of the feature extraction result by using a preset loss function according to the feature extraction result.
When the lane line recognition model performs lane line recognition on the marked lane line images, it first extracts features from them; the feature extraction network may be a U-Net. Of course, a person skilled in the art may select another type of feature extraction network according to actual needs, which is not limited in this embodiment.
The U-Net extracts features from the marked lane line image to obtain a feature extraction result, which is then compared against the original annotation of the lane line image to calculate the loss function value.
In an embodiment of the application, the feature extraction result includes a feature map of a lane line image and a lane line position, the preset loss function includes a first loss function and a second loss function, and calculating the loss function value of the feature extraction result by using the preset loss function according to the feature extraction result includes: calculating a loss function value of the feature map of the lane line image by using the first loss function according to the feature map of the lane line image; and calculating a loss function value of the lane line position by using the second loss function according to the lane line position.
As described above, the feature extraction network in the embodiment of the present application may be a U-Net; fig. 2 provides a schematic structural diagram of the U-Net in the embodiment of the present application. The basic architecture of the U-Net consists of two parts, encoding and decoding. To increase the timeliness of the network and reduce its computation, all convolutional layers in the encoding part use depthwise separable convolutions. The last layer of the decoding part outputs two branches: one branch (feature) is responsible for extracting lane line features, i.e. it outputs a feature map of the lane line image; the other branch (binary) is responsible for semantic segmentation, i.e. it distinguishes the foreground and background parts of the lane line image and outputs the lane line positions.
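The depthwise separable convolution used in the encoder factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, which is what reduces the computation. A minimal numpy sketch of a single such layer (illustrative only; the real network stacks many of these inside the U-Net, and all shapes here are made up for the example):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C, H, W); dw_kernels: (C, k, k), one filter per input channel;
    pw_weights: (C_out, C), a 1x1 convolution mixing the channels.
    'valid' padding, stride 1, no bias or activation."""
    c, h, w = x.shape
    k = dw_kernels.shape[1]
    oh, ow = h - k + 1, w - k + 1
    # Depthwise step: each channel is convolved with its own k x k filter.
    dw = np.zeros((c, oh, ow))
    for ch in range(c):
        for i in range(oh):
            for j in range(ow):
                dw[ch, i, j] = np.sum(x[ch, i:i+k, j:j+k] * dw_kernels[ch])
    # Pointwise step: a 1x1 conv is a linear mix of channels at each position.
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

x = np.ones((2, 4, 4))          # 2 channels, 4x4 input
dw = np.ones((2, 3, 3))         # one 3x3 filter per channel
pw = np.array([[1.0, 1.0]])     # 1 output channel summing both inputs
out = depthwise_separable_conv(x, dw, pw)
print(out.shape)  # (1, 2, 2)
```

For C input channels, C_out output channels and a k x k kernel, this costs roughly C*k*k + C_out*C multiplications per output position, versus C_out*C*k*k for a standard convolution, which is why it suits a latency-sensitive perception network.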
For the results output by the two branches, the embodiment of the application computes loss function values with different loss functions. The lane line feature extraction branch is trained with supervision, giving the pixels on the same lane line the same class value; its loss is therefore designed as a two-dimensional distance norm fused with a multi-class cross entropy, so that feature vectors belonging to the same lane line are as close together as possible, feature vectors belonging to different lane lines are as far apart as possible, and pixels on the same lane line share the same class label.
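The pull/push idea behind this loss can be sketched as a simplified discriminative loss (the patent does not spell out its exact formulation, so the hinge margins and equal weighting below are assumptions):

```python
import numpy as np

def pull_push_loss(embeddings, labels, delta_pull=0.5, delta_push=3.0):
    """embeddings: (N, D) per-pixel feature vectors; labels: (N,) lane ids.
    The pull term draws each pixel toward its own lane's mean embedding;
    the push term forces the means of different lanes apart."""
    ids = np.unique(labels)
    means = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])
    # Pull: hinged distance of each pixel to its own lane's mean.
    pull = 0.0
    for m, i in zip(means, ids):
        d = np.linalg.norm(embeddings[labels == i] - m, axis=1)
        pull += np.mean(np.maximum(0.0, d - delta_pull) ** 2)
    pull /= len(ids)
    # Push: hinged distance between every pair of lane means.
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[a] - means[b])
            push += np.maximum(0.0, delta_push - d) ** 2
            pairs += 1
    if pairs:
        push /= pairs
    return pull + push

# Two tight, well-separated lane clusters incur zero loss.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
print(pull_push_loss(emb, lab))  # 0.0
```

Minimizing such a loss is exactly what makes the later clustering step work: same-lane embeddings end up near one another, different-lane embeddings end up far apart.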
For the semantic segmentation branch, considering that the number of pixels belonging to lane lines is far smaller than the number belonging to the background, the loss function may be the Focal Loss. Focal Loss was proposed mainly to address the severe imbalance between positive and negative samples in one-stage object detection; it reduces the weight that the large number of easy negative samples carry in training, thereby alleviating the imbalance between lane line and background samples.
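A minimal numpy sketch of the binary focal loss (the standard formulation; the alpha and gamma defaults are the commonly used values, since the patent does not give its hyperparameters):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss. p: predicted foreground probability; y: 0/1 label.
    Easy examples (p_t near 1) are down-weighted by the (1 - p_t)**gamma factor."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)       # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)
    return -a * (1 - pt) ** gamma * np.log(pt)

# A confident, correct background prediction contributes almost nothing,
# while a misclassified lane pixel dominates the loss.
easy = focal_loss(np.array([0.02]), np.array([0]))
hard = focal_loss(np.array([0.02]), np.array([1]))
print(float(easy[0]) < float(hard[0]))  # True
```

With gamma = 0 and alpha = 0.5 this reduces (up to a constant) to ordinary cross entropy, which is the baseline it improves on for imbalanced lane/background pixels.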
Based on the designed loss function, the loss function value of the feature map of the lane line image and the loss function value of the lane line position can be respectively calculated and used as the basis for updating the model parameters.
In one embodiment of the present application, the acquiring the lane line image includes: and randomly generating blocking blocks in the lane line image, wherein the blocking blocks are used for randomly blocking the lane lines in the lane line image.
To improve the robustness and recognition accuracy of the lane line recognition model, the embodiment of the application applies data enhancement to the original lane line images. Specifically, a blocking block may be randomly generated in a lane line image; its size may be chosen randomly within 0.05 to 0.1 times the width and height of the whole image, and it randomly occludes lane line positions in the image, enriching the occlusion data seen when a vehicle changes lanes. Fig. 3 provides a schematic diagram of the effect of a blocking block randomly occluding a lane line.
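A sketch of this augmentation (numpy only; the fill value of 0 and the uniform sampling of the block size within the stated 0.05 to 0.1 range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_occlusion(img, rng, lo=0.05, hi=0.1, fill=0):
    """Paste one randomly sized, randomly placed block over a copy of the image.
    Block height/width are drawn from [lo, hi] times the image height/width,
    matching the 0.05-0.1 range described above."""
    h, w = img.shape[:2]
    bh = int(rng.uniform(lo, hi) * h)
    bw = int(rng.uniform(lo, hi) * w)
    y = int(rng.integers(0, h - bh + 1))
    x = int(rng.integers(0, w - bw + 1))
    out = img.copy()
    out[y:y+bh, x:x+bw] = fill
    return out

img = np.full((100, 200), 255, dtype=np.uint8)   # all-white stand-in image
occluded = random_occlusion(img, rng)
print((occluded == 0).any(), img.min())  # True 255 (original untouched)
```

During training the augmentation would be applied on the fly with a fresh random block per sample, so the model repeatedly sees the same lane line with different parts hidden.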
By randomly occluding lane line positions with blocking blocks, the embodiment of the application obtains new lane line image data; training the lane line recognition model with these images greatly improves its robustness and recognition accuracy in scenarios such as vehicle lane changes.
In the existing lane line identification methods, a vehicle that is about to change lanes or turn can occlude part of a lane line, cutting the same lane line in two; the lane line is then easily misidentified as two different lane lines, and because the recognition result receives no subsequent correction, accuracy suffers.
Based on this, an embodiment of the present application further provides a lane line identification method, and as shown in fig. 4, a flow chart of the lane line identification method in the embodiment of the present application is provided, where the method at least includes the following steps S410 to S430:
step S410, a lane line image to be recognized is acquired.
When lane lines are recognized, the lane line image to be recognized is acquired first. It may be acquired in real time or at intervals; for example, in an autonomous driving scenario, the image to be recognized can be captured in real time for recognition.
And step S420, utilizing the lane line recognition model to perform feature extraction on the lane line image to be recognized to obtain a feature extraction result.
After the lane line image to be recognized is obtained, feature extraction needs to be performed on the lane line image to be recognized by using the lane line recognition model trained in the above embodiment, so as to obtain a feature extraction result of the lane line image to be recognized.
Step S430, clustering the feature extraction results by using a clustering algorithm, and obtaining lane line identification results according to the clustering results; the lane line recognition model is obtained by training based on any one of the lane line recognition model training methods.
Different from the prior art, after obtaining the feature extraction result output by the lane line recognition model, the embodiment of the application further applies a clustering algorithm to it, so that lane line points belonging to the same category are clustered into one class, finally yielding the number of lane lines and the position information of each lane line. The embodiment of the application thus effectively corrects the feature extraction result output by the lane line recognition model, improving the accuracy of the lane line recognition result.
In an embodiment of the application, the feature extraction result includes a feature map of a lane line image to be recognized and a lane line position, and the clustering the feature extraction result by using a clustering algorithm and obtaining a lane line recognition result according to the clustering result includes: performing mask operation on the characteristic diagram of the lane line image to be identified and the lane line position by using a mask algorithm to obtain a characteristic diagram of a lane line point; and clustering the characteristic graph of the lane line points by using the clustering algorithm to obtain a point clustering result of the lane line as the lane line identification result.
After the lane line recognition model extracts features from the image to be recognized, it outputs two kinds of information: the feature map of the image and the lane line positions. To improve the accuracy of the recognition result, a mask operation can be applied to the feature map and the lane line positions with a mask algorithm, extracting from the feature map only the points on lane lines, i.e. removing the feature information of the background part. The lane line features in this feature map of lane line points are unrelated to the position of the vehicle, so clustering them yields a clustering result that is likewise unrelated to the vehicle position; different lane lines can then be clustered more accurately, giving strong robustness in scenarios such as vehicle turning and lane changing.
It should be noted that the relationship between the lane line recognition model and the clustering algorithm in the application is as follows. The model first extracts features from the lane line image and gives the pixels on the same lane line the same category value, but these per-pixel results contain some deviation. The clustering algorithm therefore further corrects the model's output: feature information outside the lane lines is removed from the feature map output by the model, and only the feature map of lane line points is kept for cluster analysis, further improving the robustness and recognition accuracy of the lane line recognition model in scenarios such as vehicle turning and lane changing.
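The mask-then-cluster pipeline can be sketched as follows (the patent does not name the clustering algorithm, so a simple greedy distance-threshold grouping stands in for e.g. DBSCAN or mean-shift; all shapes and the threshold are assumptions):

```python
import numpy as np

def mask_and_cluster(feature_map, lane_mask, dist_thresh=1.0):
    """feature_map: (D, H, W) per-pixel embeddings; lane_mask: (H, W) boolean
    lane positions from the segmentation branch. Keeps only lane-point
    embeddings, then greedily groups them: a point joins an existing cluster
    if it lies within dist_thresh of that cluster's seed point."""
    d, h, w = feature_map.shape
    # Mask operation: drop all background pixels, keep (N, D) lane points.
    points = feature_map.reshape(d, -1).T[lane_mask.reshape(-1)]
    centers, labels = [], []
    for p in points:
        for ci, c in enumerate(centers):
            if np.linalg.norm(p - c) < dist_thresh:
                labels.append(ci)
                break
        else:
            centers.append(p.copy())
            labels.append(len(centers) - 1)
    return np.array(labels), len(centers)

# Two lane lines whose pixels carry well-separated 2-D embeddings.
fm = np.zeros((2, 2, 4))
fm[:, 0, :2] = 0.1      # lane A embeddings near (0.1, 0.1)
fm[:, 1, 2:] = 5.0      # lane B embeddings near (5.0, 5.0)
mask = np.zeros((2, 4), dtype=bool)
mask[0, :2] = True
mask[1, 2:] = True
labels, n_lanes = mask_and_cluster(fm, mask)
print(n_lanes)  # 2
```

The cluster count is the recovered number of lane lines, and the pixel coordinates behind each label give that lane line's position, matching the point clustering result described above.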
The embodiment of the present application further provides a lane line recognition model training device 500, as shown in fig. 5, which provides a schematic structural diagram of the lane line recognition model training device in the embodiment of the present application, and the device includes: a first obtaining unit 510, a marking unit 520, a recognition unit 530 and an updating unit 540, wherein:
a first acquisition unit 510 for acquiring a lane line image;
a marking unit 520, configured to mark lane line categories of each lane line in the lane line image according to a relative position relationship between lane lines in the lane line image, to obtain a marked lane line image;
a recognition unit 530, configured to perform lane line recognition on the marked lane line image by using a lane line recognition model, so as to obtain a lane line recognition result and a loss function value;
an updating unit 540, configured to update the parameters of the lane line recognition model with the loss function value.
In an embodiment of the present application, the marking unit 520 is specifically configured to: converting the lane line image into a lane line binary image; marking the lane line type of each lane line in the lane line binary image according to the relative position relation of each lane line in the lane line binary image.
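A simplified sketch of what the marking unit 520 might do, assuming near-vertical lane lines that can be separated by gaps along the column axis. This is an illustrative assumption, not the patent's actual implementation; a production version would more likely use connected-component analysis on the binary image:

```python
import numpy as np

def mark_lane_categories(binary: np.ndarray) -> np.ndarray:
    """Label each lane in a binary lane image with a category value
    ordered left-to-right, i.e. by the relative position of the lanes.
    Simplification: lanes are assumed separable by column gaps."""
    marked = np.zeros_like(binary, dtype=np.uint8)
    cols_with_lane = np.where(binary.any(axis=0))[0]
    if cols_with_lane.size == 0:
        return marked
    # Split the occupied columns into contiguous runs wherever there is
    # a horizontal gap; each run is treated as one lane line.
    breaks = np.where(np.diff(cols_with_lane) > 1)[0] + 1
    runs = np.split(cols_with_lane, breaks)
    for category, run in enumerate(runs, start=1):
        sub = binary[:, run[0]:run[-1] + 1] > 0
        marked[:, run[0]:run[-1] + 1][sub] = category  # 1 = leftmost lane
    return marked
```

The category value thus encodes the lane's position relative to its neighbours, which is exactly the labelling the training step consumes.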
In an embodiment of the present application, the recognition unit 530 is specifically configured to: perform feature extraction on the marked lane line image by using the lane line recognition model to obtain a feature extraction result; and calculate a loss function value for the feature extraction result by using a preset loss function.
In an embodiment of the application, the feature extraction result includes a feature map of the lane line image and the lane line positions, and the preset loss function includes a first loss function and a second loss function. The recognition unit 530 is specifically configured to: calculate a loss function value of the feature map of the lane line image by using the first loss function; and calculate a loss function value of the lane line positions by using the second loss function.
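The patent does not specify which first and second loss functions are used. The sketch below is one plausible instantiation, assuming a per-pixel cross-entropy on the feature map (against the marked category values) as the first loss and a mean squared error on the lane line positions as the second; the function name and weights are hypothetical:

```python
import numpy as np

def total_loss(feat_logits, category_labels, pred_positions, true_positions,
               w1=1.0, w2=1.0):
    """Hypothetical two-part loss: first loss on the feature map,
    second loss on the lane line positions, combined by weights."""
    # First loss: softmax cross-entropy over the category axis (last axis).
    shifted = feat_logits - feat_logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    picked = np.take_along_axis(log_probs, category_labels[..., None], axis=-1)
    first = -np.mean(picked)
    # Second loss: mean squared error on the predicted lane positions.
    second = np.mean((pred_positions - true_positions) ** 2)
    return w1 * first + w2 * second
```

Keeping the two terms separate lets the feature-map branch and the position branch be weighted (or scheduled) independently during training.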
In an embodiment of the present application, the first obtaining unit 510 is specifically configured to: randomly generate occlusion blocks in the lane line image, where the occlusion blocks are used to randomly occlude the lane lines in the lane line image.
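A minimal sketch of this occlusion augmentation, assuming rectangular blocks with a uniform fill value (block shape, count and fill are illustrative parameters, not taken from the patent):

```python
import numpy as np

def add_random_occlusions(image, num_blocks=3, max_size=32, fill=0, seed=None):
    """Paste random rectangular occlusion blocks over a lane image so the
    model learns to recognize partially occluded lane lines."""
    rng = np.random.default_rng(seed)
    occluded = image.copy()  # leave the caller's image untouched
    h, w = image.shape[:2]
    for _ in range(num_blocks):
        bh = int(rng.integers(1, max_size + 1))
        bw = int(rng.integers(1, max_size + 1))
        top = int(rng.integers(0, max(1, h - bh)))
        left = int(rng.integers(0, max(1, w - bw)))
        occluded[top:top + bh, left:left + bw] = fill
    return occluded
```

Because the blocks fall at random positions each epoch, the same source image yields many distinct partially occluded training samples.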
It can be understood that the above lane line recognition model training device can implement the steps of the lane line recognition model training method provided in the foregoing embodiment, and the related explanations about the lane line recognition model training method are applicable to the lane line recognition model training device, and are not described herein again.
The embodiment of the present application further provides a lane line identification apparatus 600. Fig. 6 shows a schematic structural diagram of the lane line identification apparatus in the embodiment of the present application. The apparatus 600 includes:
a second acquiring unit 610 for acquiring a lane line image to be recognized;
a feature extraction unit 620, configured to perform feature extraction on the lane line image to be identified by using a lane line identification model, so as to obtain a feature extraction result;
a clustering unit 630, configured to cluster the feature extraction results by using a clustering algorithm, and obtain lane line identification results according to the clustering results;
the lane line recognition model is obtained by training based on any one of the lane line recognition model training methods.
In an embodiment of the application, the feature extraction result includes a feature map of the lane line image to be recognized and the lane line positions, and the clustering unit 630 is specifically configured to: perform a mask operation on the feature map of the lane line image to be recognized and the lane line positions by using a mask algorithm to obtain a feature map of the lane line points; and cluster the feature map of the lane line points by using the clustering algorithm to obtain a point clustering result of the lane lines as the lane line recognition result.
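The masking and clustering step of the clustering unit 630 can be sketched as follows. The patent names neither the feature-map layout nor the clustering algorithm, so this sketch assumes a per-pixel embedding feature map, a boolean lane-position mask, and a simple greedy distance clustering standing in for the unspecified algorithm:

```python
import numpy as np

def cluster_lane_points(feature_map, lane_mask, eps=1.0):
    """Mask the feature map to keep only lane-point embeddings, then
    greedily cluster them: a point joins the first cluster whose
    representative embedding (its first point) is within `eps`,
    otherwise it starts a new cluster (one cluster per lane instance)."""
    # Mask operation: select feature vectors at lane line positions only,
    # discarding the background part of the feature map.
    points = feature_map[lane_mask]            # (N, D) lane-point embeddings
    reps, labels = [], np.empty(len(points), dtype=int)
    for i, p in enumerate(points):
        dists = [np.linalg.norm(p - r) for r in reps]
        if dists and min(dists) < eps:
            labels[i] = int(np.argmin(dists))
        else:
            reps.append(p.copy())
            labels[i] = len(reps) - 1
    return labels  # one cluster id per lane point
```

Since only lane-point embeddings enter the clustering, the result depends on the learned per-lane features rather than on the vehicle's position in the image, which is the robustness property the description claims.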
It can be understood that the lane line identification device can implement the steps of the lane line identification method provided in the foregoing embodiment, and the explanations related to the lane line identification method are applicable to the lane line identification device, and are not repeated herein.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present application. Referring to fig. 7 and 8, at the hardware level the electronic device includes a processor and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other using an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 7 and 8, but this does not indicate only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer operating instructions. The memory may include both an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form a lane line recognition model training device or a lane line recognition device on a logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
acquiring a lane line image;
marking the lane line type of each lane line in the lane line image according to the relative position relationship between the lane lines in the lane line image to obtain a marked lane line image;
carrying out lane line recognition on the marked lane line image by using a lane line recognition model to obtain a lane line recognition result and a loss function value;
and updating the parameters of the lane line recognition model by using the loss function value.
Or, to perform the following operations:
acquiring a lane line image to be identified;
carrying out feature extraction on the lane line image to be identified by using a lane line identification model to obtain a feature extraction result;
clustering the feature extraction results by using a clustering algorithm, and obtaining lane line identification results according to the clustering results;
the lane line recognition model is obtained by training based on any one of the lane line recognition model training methods.
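The training flow the processor executes (acquire, mark, recognize, compute the loss function value, update the parameters) can be sketched as a toy gradient-descent loop. The one-layer linear model below is purely a stand-in for the lane line recognition network, used only to show the forward pass → loss → parameter update cycle; the class and its hyperparameters are assumptions, not the patent's architecture:

```python
import numpy as np

class TinyLaneModel:
    """Stand-in for the lane line recognition model: a single linear
    layer whose parameters are updated with the loss gradient (SGD)."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=dim)

    def forward(self, x):
        return x @ self.w

    def train_step(self, x, target, lr=0.05):
        pred = self.forward(x)                     # lane line recognition
        loss = np.mean((pred - target) ** 2)       # loss function value
        grad = 2 * x.T @ (pred - target) / len(x)  # d(loss) / d(w)
        self.w -= lr * grad                        # update model parameters
        return loss
```

Repeating `train_step` over the marked training images drives the loss function value down, which is exactly the update rule the operations above describe.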
The method executed by the lane line recognition model training device disclosed in the embodiment of fig. 1, or by the lane line recognition device disclosed in the embodiment of fig. 4, may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may further execute the method executed by the lane line recognition model training device in fig. 1 or the lane line recognition device in fig. 4, and implement the function of the lane line recognition model training device in the embodiment shown in fig. 1 or the function of the lane line recognition device in the embodiment shown in fig. 4, which is not described herein again in this application embodiment.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which, when executed by an electronic device including a plurality of application programs, enable the electronic device to perform a method performed by a lane line recognition model training apparatus in an embodiment shown in fig. 1 or a lane line recognition apparatus in an embodiment shown in fig. 4, and are specifically configured to perform:
acquiring a lane line image;
marking the lane line type of each lane line in the lane line image according to the relative position relationship between the lane lines in the lane line image to obtain a marked lane line image;
carrying out lane line recognition on the marked lane line image by using a lane line recognition model to obtain a lane line recognition result and a loss function value;
and updating the parameters of the lane line recognition model by using the loss function value.
Or, to perform the following operations:
acquiring a lane line image to be identified;
carrying out feature extraction on the lane line image to be identified by using a lane line identification model to obtain a feature extraction result;
clustering the feature extraction results by using a clustering algorithm, and obtaining lane line identification results according to the clustering results;
the lane line recognition model is obtained by training based on any one of the lane line recognition model training methods.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is not limited to those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (8)

1. A lane line recognition model training method, wherein the method comprises:
acquiring a lane line image;
marking the lane line type of each lane line in the lane line image according to the relative position relationship between the lane lines in the lane line image to obtain a marked lane line image;
carrying out lane line recognition on the marked lane line image by using a lane line recognition model to obtain a lane line recognition result and a loss function value;
updating the parameters of the lane line recognition model by using the loss function value, wherein the carrying out lane line recognition on the marked lane line image by using the lane line recognition model to obtain a lane line recognition result and a loss function value comprises the following steps:
carrying out feature extraction on the marked lane line image by using the lane line identification model to obtain a feature extraction result;
calculating a loss function value of the feature extraction result by using a preset loss function according to the feature extraction result, wherein the feature extraction result comprises a feature map of a lane line image and a lane line position, the preset loss function comprises a first loss function and a second loss function, and calculating the loss function value of the feature extraction result by using the preset loss function according to the feature extraction result comprises:
calculating a loss function value of the feature map of the lane line image by using the first loss function according to the feature map of the lane line image;
and calculating a loss function value of the lane line position by using the second loss function according to the lane line position.
2. The method of claim 1, wherein the marking of the lane line category of each lane line in the lane line image according to the relative positional relationship between the lane lines in the lane line image comprises:
converting the lane line image into a lane line binary image;
marking the lane line type of each lane line in the lane line binary image according to the relative position relation of each lane line in the lane line binary image.
3. The method of claim 1, wherein said acquiring a lane line image comprises:
randomly generating occlusion blocks in the lane line image, wherein the occlusion blocks are used for randomly occluding the lane lines in the lane line image.
4. A lane line identification method, wherein the method comprises:
acquiring a lane line image to be identified;
carrying out feature extraction on the lane line image to be identified by using a lane line identification model to obtain a feature extraction result;
clustering the feature extraction results by using a clustering algorithm, and obtaining lane line identification results according to the clustering results;
the lane line recognition model is obtained by training based on the lane line recognition model training method according to any one of claims 1 to 3.
5. The method of claim 4, wherein the feature extraction result comprises a feature map of the lane line image to be recognized and a lane line position, and the clustering the feature extraction result by using a clustering algorithm and obtaining a lane line recognition result according to the clustering result comprises:
performing a mask operation on the feature map of the lane line image to be recognized and the lane line position by using a mask algorithm to obtain a feature map of lane line points;
and clustering the feature map of the lane line points by using the clustering algorithm to obtain a point clustering result of the lane lines as the lane line recognition result.
6. A lane line recognition model training device, wherein the device is used for realizing the lane line recognition model training method of any one of claims 1 to 3.
7. A lane marking recognition apparatus, wherein the apparatus is used for realizing the lane marking recognition method according to any one of claims 4 to 5.
8. A computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to execute the lane line recognition model training method of any one of claims 1 to 3 or execute the lane line recognition method of any one of claims 4 to 5.
CN202110822081.3A 2021-07-21 2021-07-21 Lane line recognition model training method and device and lane line recognition method and device Active CN113298050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110822081.3A CN113298050B (en) 2021-07-21 2021-07-21 Lane line recognition model training method and device and lane line recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110822081.3A CN113298050B (en) 2021-07-21 2021-07-21 Lane line recognition model training method and device and lane line recognition method and device

Publications (2)

Publication Number Publication Date
CN113298050A CN113298050A (en) 2021-08-24
CN113298050B true CN113298050B (en) 2021-11-19

Family

ID=77330812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110822081.3A Active CN113298050B (en) 2021-07-21 2021-07-21 Lane line recognition model training method and device and lane line recognition method and device

Country Status (1)

Country Link
CN (1) CN113298050B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762482B (en) * 2021-09-15 2024-04-16 智道网联科技(北京)有限公司 Training method and related device for neural network model for automatic driving
WO2023096581A2 (en) * 2021-11-29 2023-06-01 Agency For Science, Technology And Research Method and system for detecting a lane
CN115482478B (en) * 2022-09-14 2023-07-18 北京远度互联科技有限公司 Road identification method, device, unmanned aerial vehicle, equipment and storage medium
CN115731525B (en) * 2022-11-21 2023-07-25 禾多科技(北京)有限公司 Lane line identification method, lane line identification device, electronic equipment and computer readable medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046235B (en) * 2015-08-03 2018-09-07 百度在线网络技术(北京)有限公司 The identification modeling method and device of lane line, recognition methods and device
CN108090456B (en) * 2017-12-27 2020-06-19 北京初速度科技有限公司 Training method for recognizing lane line model, and lane line recognition method and device
CN112036231B (en) * 2020-07-10 2022-10-21 武汉大学 Vehicle-mounted video-based lane line and pavement indication mark detection and identification method
CN112418037A (en) * 2020-11-12 2021-02-26 武汉光庭信息技术股份有限公司 Method and system for identifying lane lines in satellite picture, electronic device and storage medium
CN112861700B (en) * 2021-02-03 2023-11-03 西安仁义智机电科技有限公司 Lane network identification model establishment and vehicle speed detection method based on deep Labv3+

Also Published As

Publication number Publication date
CN113298050A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN113298050B (en) Lane line recognition model training method and device and lane line recognition method and device
CN112634209A (en) Product defect detection method and device
CN116168017B (en) Deep learning-based PCB element detection method, system and storage medium
CN112200193B (en) Distributed license plate recognition method, system and device based on multi-attribute fusion
CN112036462A (en) Method and device for model training and target detection
CN114283357A (en) Vehicle detection method and device, storage medium and electronic equipment
CN112052907A (en) Target detection method and device based on image edge information and storage medium
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113804214B (en) Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN112990099B (en) Method and device for detecting lane line
CN113160176B (en) Defect detection method and device
CN112699711A (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN112651417B (en) License plate recognition method, device, equipment and storage medium
CN113903014B (en) Lane line prediction method and device, electronic device and computer-readable storage medium
CN113111872B (en) Training method and device of image recognition model, electronic equipment and storage medium
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
CN112733864A (en) Model training method, target detection method, device, equipment and storage medium
CN116309628A (en) Lane line recognition method and device, electronic equipment and computer readable storage medium
CN114066958A (en) Method and device for predicting depth information of target, electronic device and storage medium
CN115170679A (en) Calibration method and device for road side camera, electronic equipment and storage medium
CN114676794A (en) Lane line detection model training method and device and lane line detection method and device
CN116740712A (en) Target labeling method and device for infrared image, electronic equipment and storage medium
CN115546752A (en) Lane line marking method and device for high-precision map, electronic equipment and storage medium
CN114694112B (en) Traffic signal lamp identification method and device and electronic equipment
CN114445804A (en) Lane line identification method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant