CN117132955A - Lane line detection method and device, electronic equipment and storage medium - Google Patents

Lane line detection method and device, electronic equipment and storage medium

Info

Publication number
CN117132955A
CN117132955A (application CN202311107922.8A)
Authority
CN
China
Prior art keywords
lane line
image
feature
anchor
vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311107922.8A
Other languages
Chinese (zh)
Inventor
张达明
徐名源
邱璆
王佑星
朱亚旋
宋楠楠
薛鸿
许际晗
王宇凡
金虹羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Original Assignee
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faw Nanjing Technology Development Co ltd, FAW Group Corp filed Critical Faw Nanjing Technology Development Co ltd
Priority to CN202311107922.8A priority Critical patent/CN117132955A/en
Publication of CN117132955A publication Critical patent/CN117132955A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lane line detection method and device, an electronic device, and a storage medium. The method comprises the following steps: inputting an image to be processed into a lane line detection model; extracting features from the image to be processed through the lane line detection model, and determining at least two anchor vectors in the obtained feature image, wherein each anchor vector is represented by a ray; determining a global feature vector matched with each anchor vector, and determining lane line feature vectors among the global feature vectors; and marking lane lines in the image to be processed according to the lane line feature vectors, and outputting the marked image through the lane line detection model. The technical scheme of the embodiments of the invention solves the problems of low detection efficiency and low accuracy in complex scenes that affect prior-art lane line detection, and improves both the accuracy and the speed of lane line detection in complex scenes.

Description

Lane line detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular to a lane line detection method and device, an electronic device, and a storage medium.
Background
An autonomous vehicle perceives surrounding environment information through its perception module and, in combination with the map, planning, and control modules, copes with various complex road scenes. Lane line detection is an important functional module of an autonomous vehicle: its results provide data support for the vehicle's motion planning.
Traditional lane line detection is based on image processing, but its accuracy is low in complex scenes such as strong light, occluded lane lines, continuous curves, and ramps. With the development of deep learning, lane line detection methods based on deep learning have appeared: semantic-segmentation-based methods, anchor-based methods, and curve-fitting-based methods. However, semantic-segmentation-based methods (e.g., SCNN (Spatial Convolutional Neural Network) methods) rely on a huge backbone network and are slow; anchor-based and curve-fitting-based methods depend strongly on prior information, and when parameters such as the camera pitch angle change, their detection performance degrades rapidly.
Disclosure of Invention
The invention provides a lane line detection method and device, an electronic device, and a storage medium, so as to improve lane line detection accuracy and speed in complex scenes.
In a first aspect, an embodiment of the present invention provides a method for detecting a lane line, where the method includes:
inputting an image to be processed into a lane line detection model;
extracting features from the image to be processed through the lane line detection model, and determining at least two anchor vectors in the obtained feature image, wherein each anchor vector is represented by a ray;
determining a global feature vector matched with each anchor vector, and determining lane line feature vectors among the global feature vectors; and
marking lane lines in the image to be processed according to the lane line feature vectors, and outputting the marked image through the lane line detection model.
In a second aspect, an embodiment of the present invention further provides a lane line detection apparatus, where the apparatus includes:
a to-be-processed-image input module, configured to input the image to be processed into the lane line detection model;
an anchor vector determining module, configured to extract features from the image to be processed through the lane line detection model and determine at least two anchor vectors in the obtained feature image, wherein each anchor vector is represented by a ray;
a lane line feature vector determining module, configured to determine a global feature vector matched with each anchor vector and determine lane line feature vectors among the global feature vectors; and
a lane line marking module, configured to mark lane lines in the image to be processed according to the lane line feature vectors and output the marked image through the lane line detection model.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the lane line detection method of any embodiment of the present invention when executing the program.
In a fourth aspect, embodiments of the present invention also provide a storage medium storing computer-executable instructions that, when executed by a computer processor, are configured to perform a method of detecting a lane line as in any of the embodiments of the present invention.
According to this technical scheme, the image to be processed is input into the lane line detection model; the model extracts features from the image and determines anchor vectors in the resulting feature image; a global feature vector matched with each anchor vector is determined; lane line feature vectors are determined among the global feature vectors; and the image, with lane lines marked according to those feature vectors, is output. This solves the problems of low detection efficiency and low accuracy in complex scenes found in prior-art lane line detection, improving both the accuracy and the speed of lane line detection in complex scenes.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. The drawings described below obviously illustrate only some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a lane line detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a shortcut (short-circuit) connection structure applicable to an embodiment of the present invention;
FIG. 3 is a flowchart of another lane line detection method according to the second embodiment of the present invention;
FIG. 4 is a schematic diagram of a bidirectional feature transfer of a feature residual module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a convolution operation of a deformable convolution module according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a lane line detection device according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiment 1
Fig. 1 is a flowchart of a lane line detection method according to Embodiment 1 of the present invention. The method may be applied to lane line detection and may be performed by a lane line detection device, which may be implemented in hardware and/or software and configured in an electronic device.
As shown in fig. 1, the method includes:
s110, inputting the image to be processed into a lane line detection model.
The image to be processed may be a road image in the vehicle's direction of travel, acquired by the vehicle or by a device such as an on-board sensor. The lane line detection model may be a neural network model that processes the image to be processed and marks the lane lines in it.
After the vehicle or a device such as an on-board sensor acquires the image to be processed, the image can be input into the lane line detection model so that it is processed by the model.
The lane line detection model uses resnet18 as its backbone network, so that the processing efficiency of the image to be processed can be improved without affecting the accuracy of the result.
In addition, before resnet-style backbones, a network subjected to many successive convolution operations easily suffered from problems such as information loss and gradient explosion, which hinder convergence and make the network untrainable as its depth increases.
Fig. 2 is a schematic diagram of the shortcut connection structure applicable to the embodiment of the present invention. Referring to fig. 2, resnet18 adds direct connection paths to the network, short-circuiting the network structure and thereby preserving a proportion of the output of the preceding part of the network. In mathematical terms, the mapping H(x) is reformulated as F(x) + x. Optimizing F(x) is simpler than optimizing H(x) directly, and the shortcut does not increase the network's parameter count or computation, which resolves the problem that the network cannot be trained as the number of layers grows.
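The shortcut mapping H(x) = F(x) + x can be sketched in plain Python. The patent contains no code, so this toy sketch merely illustrates the idea; `f` is a hypothetical placeholder for the stacked convolutional layers of one residual block:

```python
def residual_block(x, f):
    """Apply a residual (shortcut) connection: H(x) = F(x) + x.

    x: list of floats (a toy stand-in for a feature map)
    f: the residual branch, standing in for a stack of convolutions
    """
    fx = f(x)                               # F(x): output of the weighted layers
    return [a + b for a, b in zip(fx, x)]   # element-wise add of the identity path

# Toy usage: a "branch" that halves each activation.
out = residual_block([2.0, 4.0], lambda v: [0.5 * a for a in v])
# out == [3.0, 6.0]: the identity path preserves the input signal
```

Because the branch only has to learn the residual F(x), a block that should do nothing can simply drive F(x) toward zero, which is what makes very deep networks trainable.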
S120, extracting features from the image to be processed through the lane line detection model, and determining at least two anchor vectors in the obtained feature image.
Wherein the anchor vector is represented by a ray.
The feature image may be the image obtained after feature extraction from the image to be processed. An anchor vector may be a ray that originates from a grid cell of the feature image and points in a particular direction.
After the image to be processed is input into the lane line detection model, the lane line detection model performs feature extraction on the image to be processed, and further determines a feature image corresponding to the image to be processed.
When determining the anchor vectors in the feature image, note that lane lines appear as straight lines in the image to be processed, and that during normal driving the starting points of the lane lines all lie toward the bottom, right, and left of the forward view of the road-image acquisition device, such as an on-board camera. The anchor vectors are therefore represented by rays, which makes their determination more accurate.
When representing anchor vectors with rays, the feature image is first divided into regions; a base point is determined in each region along the bottom, right, and left directions; a number of rays are emitted in each direction with the base point as origin; the confidence of each ray is determined; and any ray that exceeds the feature range of the feature image is deleted. The feature range may be the area where a lane line is located in the feature image.
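The ray construction above can be illustrated with a short Python sketch covering only the generation step (confidence scoring and range filtering are omitted). The angle set and the base-point spacing are assumptions made for illustration, since the patent does not fix them:

```python
import math

def ray_anchors(base_points, angles_deg):
    """Generate candidate anchor rays (origin + unit direction) from boundary base points.

    Each ray is (x0, y0, dx, dy). Angles are measured in image coordinates;
    the exact angle set is an assumption, as the patent only states that
    several rays are emitted per base point.
    """
    rays = []
    for (x0, y0) in base_points:
        for a in angles_deg:
            rad = math.radians(a)
            rays.append((x0, y0, math.cos(rad), math.sin(rad)))
    return rays

# Base points every 20 pixels along the bottom edge of a 100x100 feature map.
bottom = [(x, 99) for x in range(0, 100, 20)]
rays = ray_anchors(bottom, angles_deg=[45, 90, 135])
# 5 base points x 3 angles = 15 candidate rays
```

The same call would be repeated for base points on the left and right boundaries, after which low-confidence or out-of-range rays would be discarded as described above.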
By extracting features from the image to be processed through the lane line detection model and determining at least two anchor vectors in the obtained feature image, the efficiency of anchor vector determination can be improved, providing a degree of guarantee for real-time operation of the system.
S130, determining global feature vectors matched with the anchor vectors, and determining lane line feature vectors in the global feature vectors.
The global feature vector may be an anchor vector enriched with global information. The lane line feature vector may be a feature vector in the feature image located at the same position as a lane line.
After at least two anchor vectors are obtained, the relationships between the different anchor vectors must be determined, since the anchor vectors are independent of one another. Therefore, a global feature vector matched with each anchor vector is determined according to the feature range, and the lane line feature vectors are then determined.
By determining the global feature vector matched with each anchor vector and determining the lane line feature vectors among the global feature vectors, more accurate global feature vectors can be obtained once the anchor vectors are determined, so the determined lane line feature vectors have higher accuracy.
S140, marking lane lines in the image to be processed according to the lane line feature vectors, and outputting the marked image through the lane line detection model.
After the lane line feature vectors are obtained, the positions and meanings of the lane lines in the image to be processed can be determined, so the lane line positions can be marked directly in the image, and the marked image can be output through the lane line detection model.
According to this technical scheme, the image to be processed is input into the lane line detection model; the model extracts features from the image and determines anchor vectors in the resulting feature image; a global feature vector matched with each anchor vector is determined; lane line feature vectors are determined among the global feature vectors; and the image, with lane lines marked according to those feature vectors, is output. This solves the problems of low detection efficiency and low accuracy in complex scenes found in prior-art lane line detection, improving both the accuracy and the speed of lane line detection in complex scenes.
Embodiment 2
Fig. 3 is a flowchart of another lane line detection method according to Embodiment 2 of the present invention. On the basis of the above embodiment, the process of determining the anchor vectors and the process of determining their global feature vectors are further detailed, and a feature-enhancement step is added.
As shown in fig. 3, the method includes:
s210, training a preset deep neural network model according to a classification task loss function based on the classification cross entropy and a lane line loss function based on the minimum absolute value deviation to obtain a lane line detection model.
The classification task loss function based on classification cross entropy may be a loss function that counteracts uneven division of the samples. The lane line loss function based on minimum absolute deviation may be a loss function for correcting the coordinates of the determined vectors.
Before the image to be processed is input into the lane line detection model, the model needs to be trained. Training the preset deep neural network model with a classification task loss function based on classification cross entropy and a lane line loss function based on minimum absolute deviation prevents uneven sample distribution from introducing large errors into the model's results. The trained lane line detection model can then process the image to be processed, so that the anchor vectors, global feature vectors, and lane line feature vectors are determined more accurately, which in turn reduces the system's error.
The classification task loss function based on classification cross entropy can be the Focal Loss function, which addresses the uneven distribution of sample numbers. The lane line loss function based on minimum absolute deviation can be the Smooth L1 Loss function, which is used to regress the coordinate offsets.
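The two loss functions can be sketched as follows, assuming the standard Focal Loss and Smooth L1 formulations; the patent names the functions but not their hyperparameters, so the `alpha`, `gamma`, and `beta` values below are common defaults rather than values from the patent:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary prediction p in (0, 1) with label y in {0, 1}.

    Down-weights easy examples so the many background anchors do not
    dominate the few lane anchors.
    """
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss for one coordinate offset: quadratic near 0, linear far away."""
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

# A confident correct prediction is penalized far less than a wrong one.
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

During training, the total objective would combine the classification term over all anchors with the Smooth L1 term over the coordinate offsets of positive anchors.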
S220, inputting the image to be processed into a lane line detection model.
And S230, extracting features of the image to be processed through the lane line detection model.
S240, performing feature enhancement on the feature image through a feature residual module of the lane line detection model, and determining at least two anchor vectors in the obtained feature image.
Wherein the anchor vector is represented by a ray.
The feature transfer mode of the feature residual module is bidirectional feature transfer.
The feature residual module performs feature enhancement on the features in the feature image and can aggregate spatial information along different directions of a single channel.
Fig. 4 is a schematic diagram of the bidirectional feature transfer of the feature residual module applicable to the embodiment of the present invention. Referring to fig. 4, after features are extracted from the image to be processed through the backbone of the lane line detection model, the extracted features alone give poor lane line detection results because lane lines are long and thin. For the lane line detection scenario, the feature image therefore needs to be enhanced by the feature residual module of the lane line detection model.
The specific method for enhancing the characteristics comprises the following steps:
referring to fig. 4, C represents the number of feature image channels, H represents the height of the feature image, and W represents the width of the feature image. And determining the specific values of C, H and W corresponding to each feature in the feature image, and carrying out feature transfer on the specific values of C, H and W corresponding to each feature in the feature image from top to bottom and from bottom to top respectively to further extract lane line features.
Bidirectional feature transfer through the feature residual module extracts richer spatial features, making the feature image better suited to lane line detection and improving detection accuracy.
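The bidirectional transfer over the H dimension might be sketched as below. The decayed additive update rule is an assumption made for illustration; the patent states only that features are passed top-to-bottom and bottom-to-top:

```python
def propagate_rows(fmap, decay=0.5):
    """Bidirectional row-wise feature transfer over a 2-D feature map (H rows x W cols).

    Each row accumulates a decayed copy of its neighbour, once top-down and
    once bottom-up, so thin elongated structures such as lane lines share
    information along their length.
    """
    h = len(fmap)
    out = [row[:] for row in fmap]
    for i in range(1, h):                     # top-down pass
        out[i] = [a + decay * b for a, b in zip(out[i], out[i - 1])]
    for i in range(h - 2, -1, -1):            # bottom-up pass
        out[i] = [a + decay * b for a, b in zip(out[i], out[i + 1])]
    return out

fmap = [[0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
enhanced = propagate_rows(fmap)
# The activation in the first row now also appears, attenuated, in the rows below.
```

A real feature residual module would apply learned convolutions during each pass; the sketch uses a fixed decay only to make the information flow visible.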
S250, carrying out convolution operation on the characteristic image through a deformable convolution module of the lane line detection model.
The deformable convolution module performs the convolution operation on the feature image with deformable sampling positions, so it can accurately extract the required features. Specifically, the feature image is divided into at least one part of the same size as the convolution kernel before the convolution operation is performed; because feature images differ, the positions of the divided parts on the feature image vary to some extent.
Fig. 5 is a schematic diagram of the convolution operation of the deformable convolution module according to an embodiment of the present disclosure. Referring to fig. 5, (a) is a schematic diagram of a conventional convolution operation, while (b), (c), and (d) illustrate the deformable convolution. After the at least two anchor vectors are determined, the feature image keeps shrinking as the network depth increases, which affects the determination of lane lines in the feature image. To solve this problem, the deformable convolution module performs the convolution operation on the image, improving the accuracy of feature determination in the feature image and thereby reducing the overall error of the system.
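The difference between a fixed sampling grid and a deformable one can be illustrated with a toy sketch. Nearest-neighbour rounding stands in for the bilinear interpolation used in real deformable convolutions, purely to keep the example short:

```python
def deformable_sample(fmap, cy, cx, offsets):
    """Sample the 9 taps of a 3x3 deformable convolution around (cy, cx).

    `offsets` holds one learned (dy, dx) pair per tap; a plain convolution is
    the special case where every offset is (0, 0).
    """
    h, w = len(fmap), len(fmap[0])
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # regular 3x3 grid
    taps = []
    for (gy, gx), (oy, ox) in zip(grid, offsets):
        y = min(max(int(round(cy + gy + oy)), 0), h - 1)  # clamp to image bounds
        x = min(max(int(round(cx + gx + ox)), 0), w - 1)
        taps.append(fmap[y][x])
    return taps

fmap = [[float(10 * r + c) for c in range(5)] for r in range(5)]
plain = deformable_sample(fmap, 2, 2, [(0.0, 0.0)] * 9)    # regular grid
shifted = deformable_sample(fmap, 2, 2, [(0.0, 1.0)] * 9)  # all taps pushed right
```

With learned offsets, the taps can bend along a curved lane line instead of staying on the rigid grid, which is what makes the extracted features more accurate for elongated structures.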
S260, determining feature points on the left boundary, the right boundary, and the bottom edge of the feature image.
S270, respectively determining a preset number of anchor vectors for each characteristic point.
The left boundary may be the boundary on the left side of the feature image, the right boundary the boundary on its right side, and the bottom edge the boundary on its underside. A feature point may be the starting point of a lane line on the feature image boundary.
When determining the anchor vectors in the feature image: since, while the vehicle travels normally and a device such as an on-board camera photographs the road ahead, the starting points of the lane lines generally lie toward the bottom, left, and right of the forward view, it follows from this prior knowledge that the lane line starting points are generally distributed on the left boundary, the right boundary, and the bottom edge of the feature image. Accordingly, every pixel on the left boundary, the right boundary, and the bottom edge may be taken as a feature point, or one pixel may be selected as a feature point every preset number of pixels along those boundaries.
In an alternative, after determining a preset number of anchor vectors for each feature point, the method further includes:
and screening to obtain a preset number threshold anchor vector according to the maximum value inhibition method and a preset number threshold.
Non-maximum suppression may be a method that reduces the number of redundant vectors as much as possible without affecting the anchor vectors already determined.
To reduce the amount of computation and thus improve efficiency, the determined anchor vectors are screened with non-maximum suppression and a preset number threshold, filtering out the anchor vectors unrelated to the lane line features and retaining the preset threshold number of anchor vectors. This reduces the data volume and improves computational efficiency.
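A hedged sketch of this screening step, assuming a generic greedy non-maximum suppression; the similarity measure between ray anchors (`overlap_fn`) is a placeholder, since the patent does not define it:

```python
def nms_anchors(anchors, overlap_fn, top_k, thresh=0.5):
    """Keep at most `top_k` anchors by confidence, suppressing near-duplicates.

    anchors: list of (confidence, payload); overlap_fn scores the similarity
    of two payloads in [0, 1].
    """
    kept = []
    for conf, ray in sorted(anchors, key=lambda a: -a[0]):
        if all(overlap_fn(ray, k) < thresh for _, k in kept):
            kept.append((conf, ray))
        if len(kept) == top_k:
            break
    return kept

# Toy similarity: rays count as duplicates when their angles are within 10 degrees.
sim = lambda a, b: 1.0 if abs(a - b) <= 10 else 0.0
cands = [(0.9, 90), (0.8, 92), (0.7, 45), (0.2, 135)]
kept = nms_anchors(cands, sim, top_k=3)
# (0.8, 92) is suppressed as a duplicate of (0.9, 90)
```

In the full method, `overlap_fn` would compare the geometry of two candidate rays rather than a single angle.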
In one alternative, determining a global feature vector that matches each anchor vector may include steps A1-A2:
and A1, determining weights between the target anchor vector and other anchor vectors except the target anchor vector in the anchor vectors according to the activation function.
And A2, taking the sum of products of each other anchor vector and the corresponding weight as a global feature vector matched with the target anchor vector.
The activation function may be a function used to determine the weights between anchor vectors. The target anchor vector may be the anchor vector whose relationships with the other anchor vectors are to be determined.
After the preset number of anchor vectors is obtained, they can be input into the activation function, the weights between each anchor vector and the other anchor vectors can be determined, and the global feature vector matched with each target anchor vector can be determined from the corresponding weights.
The activation function may be the softmax function, which can be used to determine the weight relationship between a node and the other nodes.
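Steps A1 and A2 can be sketched as a softmax-weighted aggregation. The dot-product similarity score is an assumption made for illustration, as the patent names only the softmax activation:

```python
import math

def global_feature(target, others):
    """Attention-style aggregation of anchor vectors (steps A1-A2).

    Weights come from a softmax over the similarity between the target anchor
    and every other anchor; the global feature is the weighted sum of the
    other anchors.
    """
    scores = [sum(t * o for t, o in zip(target, vec)) for vec in others]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(target)
    return [sum(w * vec[i] for w, vec in zip(weights, others)) for i in range(dim)]

target = [1.0, 0.0]
others = [[1.0, 0.0], [0.0, 1.0]]
g = global_feature(target, others)
# The anchor aligned with the target receives the larger weight.
```

Because every anchor attends to every other anchor, each resulting global feature vector carries information from the whole feature image rather than from one ray alone.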
S280, determining global feature vectors matched with the anchor vectors, and determining lane line feature vectors in the global feature vectors.
S290, marking lane lines in the image to be processed according to the lane line feature vectors, and outputting the marked image to be processed through a lane line detection model.
According to the technical scheme of this embodiment, the preset deep neural network model is trained with the classification task loss function based on classification cross entropy and the lane line loss function based on minimum absolute deviation to obtain the lane line detection model. This prevents uneven sample distribution from introducing large errors into the model's results, so that the anchor vectors, global feature vectors, and lane line feature vectors determined when processing the image to be processed are more accurate, which reduces the system's error. Feature enhancement of the feature image through the feature residual module of the lane line detection model makes the features of the feature image more distinct, so the at least two anchor vectors are determined more accurately while the system's operating efficiency is improved. The convolution operation performed on the feature image by the deformable convolution module improves the accuracy of feature determination in the feature image and thereby reduces the overall error of the system.
Embodiment 3
Fig. 6 is a schematic structural diagram of a lane line detection device according to a third embodiment of the present invention.
As shown in fig. 6, the apparatus includes:
the image to be processed input module 310 is configured to input an image to be processed into the lane line detection model;
the anchor vector determining module 320 is configured to perform feature extraction on an image to be processed through a lane line detection model, and determine at least two anchor vectors in the obtained feature image, where the anchor vectors are represented by rays;
a lane line feature vector determining module 330, configured to determine global feature vectors matched with each anchor vector, and determine a lane line feature vector in each global feature vector;
the lane line marking module 340 is configured to mark lane lines in the image to be processed according to the lane line feature vectors, and to output the marked image to be processed through the lane line detection model.
According to this technical scheme, the image to be processed is input into the lane line detection model; the model extracts features from the image and determines at least two anchor vectors in the resulting feature image; a global feature vector matched with each anchor vector is determined, and a lane line feature vector is determined from each global feature vector; finally, the image to be processed, with lane lines marked according to the lane line feature vectors, is output. This solves the low detection efficiency and low detection accuracy of prior-art lane line detection in complex scenes, and improves both the accuracy and the speed of lane line detection in such scenes.
Optionally, on the basis of the above embodiment, after the anchor vector determining module 320, the apparatus further includes:
the feature enhancement module is configured to perform feature enhancement on the feature image through a feature residual module of the lane line detection model;
the feature transfer mode of the feature residual module is bidirectional feature transfer.
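The embodiment states only that the feature residual module transfers features bidirectionally. A plausible reading is a PAN/BiFPN-style fusion over a feature pyramid: a top-down pass, a bottom-up pass, and residual (skip) additions. The sketch below illustrates that idea in NumPy; the function names, the nearest-neighbour resampling, and the plain additive fusion are illustrative assumptions, not the patent's specified implementation.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling of a (C, H, W) feature map.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def bidirectional_residual_fuse(feats):
    """Fuse a pyramid of (C, H, W) maps, finest first, with a
    top-down and a bottom-up pass plus residual additions."""
    # Top-down pass: propagate coarse semantics into the finer maps.
    td = [f.copy() for f in feats]
    for i in range(len(td) - 2, -1, -1):
        td[i] = td[i] + upsample2x(td[i + 1])
    # Bottom-up pass: propagate fine detail back up, with a residual
    # connection to the original input features.
    out = [t.copy() for t in td]
    for i in range(1, len(out)):
        out[i] = out[i] + downsample2x(out[i - 1]) + feats[i]
    return out
```

The output shapes match the input pyramid; only the content is enriched in both directions, which is what makes the enhanced features more salient for anchor determination.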
Optionally, on the basis of the above embodiment, after the anchor vector determining module 320, the apparatus further includes:
and the convolution operation module is configured to perform a convolution operation on the feature image through the deformable convolution module of the lane line detection model.
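The patent does not detail the deformable convolution module. The standard formulation (Dai et al.'s deformable convolution, available in libraries such as torchvision's `DeformConv2d`) augments each kernel tap with a learned 2-D offset and reads the input by bilinear interpolation, which lets the sampling grid bend along curved lane lines. Below is a minimal single-channel NumPy sketch of that core idea; the names and the offset layout are chosen for illustration.

```python
import numpy as np

def bilinear(img, y, x):
    # Bilinear sample of img at fractional (y, x), zero outside the map.
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                val += img[yy, xx] * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return val

def deform_conv3x3(img, weight, offsets):
    """3x3 deformable convolution on an (H, W) map.
    offsets has shape (H, W, 9, 2): a (dy, dx) shift per kernel tap,
    letting the sampling grid follow curved lane geometry."""
    h, w = img.shape
    out = np.zeros((h, w))
    taps = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for k, (i, j) in enumerate(taps):
                dy, dx = offsets[y, x, k]
                acc += weight[i + 1, j + 1] * bilinear(img, y + i + dy, x + j + dx)
            out[y, x] = acc
    return out
```

With all offsets at zero the operation reduces to an ordinary 3x3 convolution; in the model, the offsets would be predicted by a companion convolution layer.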
Based on the above embodiment, optionally, the anchor vector determining module 320 includes:
a feature point determining unit, configured to determine feature points on the left boundary, the right boundary and the bottom edge of the feature image;
and an anchor vector acquisition unit, configured to determine a preset number of anchor vectors for each feature point.
On the basis of the above embodiment, optionally, after the anchor vector acquisition unit, the module further includes:
and the anchor vector screening unit, configured to screen the anchor vectors down to a preset number threshold according to a non-maximum suppression method and the preset number threshold.
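Putting the three units together: feature points are placed on the left boundary, the right boundary and the bottom edge of the feature image, each emits a preset number of ray anchors, and non-maximum suppression prunes the set to a preset threshold. The sketch below follows that description; the sampling stride, the angle spacing, the anchor scores, and the distance-based suppression criterion are illustrative assumptions, since the patent does not specify them.

```python
import math

def boundary_points(h, w, stride):
    """Feature points on the left boundary, right boundary and bottom
    edge of an h x w feature map, sampled every `stride` cells."""
    pts = [(y, 0) for y in range(0, h, stride)]        # left boundary
    pts += [(y, w - 1) for y in range(0, h, stride)]   # right boundary
    pts += [(h - 1, x) for x in range(0, w, stride)]   # bottom edge
    return pts

def ray_anchors(pts, n_angles):
    # Each anchor is a ray (origin_y, origin_x, angle);
    # angles are spread across the upper half-plane.
    anchors = []
    for (y, x) in pts:
        for k in range(n_angles):
            theta = math.pi * (k + 1) / (n_angles + 1)
            anchors.append((y, x, theta))
    return anchors

def nms_filter(anchors, scores, keep, min_sep):
    """Greedy non-maximum suppression: keep at most `keep` best-scoring
    anchors whose ray origins are at least `min_sep` apart."""
    order = sorted(range(len(anchors)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        y, x, _ = anchors[i]
        if all(math.hypot(y - anchors[j][0], x - anchors[j][1]) >= min_sep
               for j in kept):
            kept.append(i)
        if len(kept) == keep:
            break
    return [anchors[i] for i in kept]
```

Each kept anchor is a ray from which per-row lane offsets can later be regressed.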
Based on the above embodiment, optionally, the lane line feature vector determining module 330 includes:
the anchor vector weight acquisition unit, configured to determine, according to an activation function, a weight between the target anchor vector and each of the other anchor vectors other than the target anchor vector;
and the global feature vector acquisition unit, configured to take the sum of the products of the other anchor vectors and their corresponding weights as the global feature vector matched with the target anchor vector.
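The two units above amount to an attention step over the anchor set: similarity weights between the target anchor and every other anchor, then a weighted sum of the other anchors as the target's global feature vector. The patent names only "an activation function"; the softmax over dot-product similarity used below is an assumption, as are the function and variable names.

```python
import numpy as np

def global_feature(anchor_feats, target_idx):
    """Global feature vector for one target anchor: softmax-normalised
    similarity weights over the *other* anchors, then a weighted sum.
    anchor_feats is an (N, D) array of per-anchor feature vectors."""
    target = anchor_feats[target_idx]
    others = np.delete(anchor_feats, target_idx, axis=0)
    logits = others @ target                 # similarity to the target
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ others                  # sum of weight * anchor vector
```

For one-hot anchor features `np.eye(3)` and target index 0, the two other anchors are equally similar to the target, so they contribute equally and the result is `[0, 0.5, 0.5]`.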
Optionally, on the basis of the above embodiment, before the to-be-processed image input module 310, the apparatus further includes:
the model training module is configured to train a preset deep neural network model according to a classification task loss function based on categorical cross-entropy and a lane line loss function based on least absolute deviation, to obtain the lane line detection model.
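A minimal sketch of the two loss terms the training module combines: categorical cross-entropy for the anchor classification task and least absolute deviation (L1) for the lane line regression. The per-sample formulation and the task weighting are illustrative assumptions; the patent specifies only the two loss types.

```python
import numpy as np

def cross_entropy(logits, label):
    # Categorical cross-entropy for one sample from raw logits,
    # computed via a numerically stable log-softmax.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def l1_loss(pred_xs, gt_xs):
    # Least-absolute-deviation loss over a lane's sampled x-offsets.
    return np.abs(np.asarray(pred_xs) - np.asarray(gt_xs)).mean()

def total_loss(logits, label, pred_xs, gt_xs, w_cls=1.0, w_reg=1.0):
    # Weighted sum of the two task losses; the weights are assumptions.
    return w_cls * cross_entropy(logits, label) + w_reg * l1_loss(pred_xs, gt_xs)
```

In training, `total_loss` would be averaged over a batch and minimized by gradient descent on the deep neural network's parameters.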
The lane line detection device provided by the embodiment of the invention can execute the lane line detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 7 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 7, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, and the like. The processor 11 performs the respective methods and processes described above, such as the lane line detection method.
In some embodiments, the lane line detection method may be implemented as a computer program, which is tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the lane line detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the lane line detection method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for detecting a lane line, comprising:
inputting an image to be processed into a lane line detection model;
extracting features of the image to be processed through the lane line detection model, and determining at least two anchor vectors in the obtained feature image, wherein the anchor vectors are represented by rays;
determining global feature vectors matched with the anchor vectors, and determining lane line feature vectors in the global feature vectors;
and carrying out lane line marking in the image to be processed according to the lane line feature vector, and outputting the marked image to be processed through the lane line detection model.
2. The method according to claim 1, further comprising, after feature extraction of the image to be processed by the lane line detection model:
performing feature enhancement on the feature image through a feature residual module of the lane line detection model;
wherein the feature transfer mode of the feature residual module is bidirectional feature transfer.
3. The method according to claim 1, further comprising, after feature extraction of the image to be processed by the lane line detection model:
performing a convolution operation on the feature image through a deformable convolution module of the lane line detection model.
4. The method of claim 1, wherein determining at least two anchor vectors in the resulting feature image comprises:
determining feature points on the left boundary, the right boundary and the bottom edge of the feature image;
and determining a preset number of anchor vectors for each feature point, respectively.
5. The method of claim 4, further comprising, after determining a predetermined number of anchor vectors for each feature point, respectively:
screening the anchor vectors down to a preset number threshold according to a non-maximum suppression method and the preset number threshold.
6. The method of claim 5, wherein determining a global feature vector that matches each anchor vector comprises:
determining, according to an activation function, a weight between the target anchor vector and each of the other anchor vectors other than the target anchor vector;
and taking the sum of the products of the other anchor vectors and their corresponding weights as the global feature vector matched with the target anchor vector.
7. The method according to claim 1, further comprising, before inputting the image to be processed into the lane line detection model:
training a preset deep neural network model according to a classification task loss function based on categorical cross-entropy and a lane line loss function based on least absolute deviation, to obtain the lane line detection model.
8. A lane line detection device, characterized by comprising:
a to-be-processed image input module, configured to input an image to be processed into a lane line detection model;
an anchor vector determining module, configured to perform feature extraction on the image to be processed through the lane line detection model and determine at least two anchor vectors in the obtained feature image, wherein the anchor vectors are represented by rays;
the lane line feature vector determining module is used for determining global feature vectors matched with the anchor vectors and determining lane line feature vectors in the global feature vectors;
a lane line marking module, configured to mark lane lines in the image to be processed according to the lane line feature vectors and to output the marked image to be processed through the lane line detection model.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the lane line detection method of any one of claims 1-7 when executing the program.
10. A storage medium storing computer-executable instructions which, when executed by a computer processor, are adapted to perform the lane line detection method of any one of claims 1 to 7.
CN202311107922.8A 2023-08-30 2023-08-30 Lane line detection method and device, electronic equipment and storage medium Pending CN117132955A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311107922.8A CN117132955A (en) 2023-08-30 2023-08-30 Lane line detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117132955A true CN117132955A (en) 2023-11-28

Family

ID=88856044



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination