CN115880658A - Automobile lane departure early warning method and system under night scene

Automobile lane departure early warning method and system under night scene

Info

Publication number
CN115880658A
CN115880658A
Authority
CN
China
Prior art keywords
lane
early warning
vehicle
line
lane line
Prior art date
Legal status
Pending
Application number
CN202211621454.1A
Other languages
Chinese (zh)
Inventor
游峰
肖智豪
吴镇江
杨敬姗
杨家夏
谭云龙
王海玮
黄玲
Current Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou
South China University of Technology SCUT
Original Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou and South China University of Technology SCUT
Priority to CN202211621454.1A
Publication of CN115880658A
Status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a lane departure early warning method and system for night scenes, developed mainly on the basis of monocular vision and deep learning, comprising the following steps: acquiring a front road image with a vehicle-mounted camera, and detecting the feature points of the lane lines on both sides of the current driving lane with a Res2Net50-VHA network; tracking the lane line feature points across consecutive frame images with Kalman filtering; fitting the upper and lower parts of each lane line with a combination of a quadratic polynomial and a linear equation; calculating related parameters, including the road curvature, the lane center line equation, the lateral distance of the vehicle relative to the lane center line, and the driving yaw angle; and judging the driving state of the vehicle by integrating the lateral distance between the vehicle and the lane center line, the driving yaw angle, and the other parameters, then executing the departure early warning strategy. The algorithm used by the invention offers high recognition accuracy, high computational efficiency, and strong anti-interference capability, and the program outputs an intuitive visualization in real time.

Description

Automobile lane departure early warning method and system under night scene
Technical Field
The invention relates to the technical field of driving safety assistance, in particular to a method and a system for early warning lane departure of an automobile in a night scene.
Background
According to statistics, traffic accidents in China cause about 90,000 deaths per year, accounting for 1.5% of all deaths. The causes of traffic accidents are numerous, with driver-related factors accounting for the majority. While driving, a fatigued or inattentive driver has limited perception and, under the influence of such subjective deviations, is prone to misoperation that leads to traffic accidents. Related data indicate that over 80% of traffic accidents are attributable to drivers.
With the development of technology, some automobiles are equipped with lane departure warning systems. When such a system is active, a camera mounted on the vehicle acquires images of the road ahead, and the video stream is fed into a computing platform for processing to obtain the vehicle's position within the current lane. Vehicle state sensors provide parameters such as speed and steering state in real time, and the decision algorithm of the control unit judges whether the vehicle is deviating. If a deviation occurs, an alarm is triggered.
Research finds that the current lane departure early warning system has the following problems:
(1) Some lane departure early warning systems use multiple sensors, so the cost is high;
(2) Lane line detection accuracy is high under good illumination but drops sharply in night scenes;
(3) For driving maneuvers such as lane changing and overtaking, existing warning systems provide no corresponding coping strategies, and some lane departure early warning systems use only a single warning mode, so the false alarm rate is high, drivers grow desensitized to the warning signals, and the warning effect is poor.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides a method and system for automobile lane departure early warning in night scenes, solving the problems of poor warning performance and incomplete decision-making in night scenes.
The invention adopts the following technical scheme:
a lane departure early warning method for an automobile in a night scene comprises the following steps:
collecting a front road image, preprocessing the front road image, and detecting characteristic points of lane lines on two sides of a current driving lane by using a Res2Net50-VHA network;
supplementing the lane line feature points on both sides by linear interpolation, and tracking the lane line feature points across consecutive frame images with Kalman filtering;
fitting the upper part and the lower part of the lane line respectively by using a combined fitting mode of a linear equation and a quadratic polynomial to obtain a left lane line equation and a right lane line equation;
calculating relevant parameters from the obtained left and right lane lines, including the road curvature, the lane center line equation, the lateral distance of the vehicle relative to the lane center line, and the driving yaw angle;
and judging the running state of the vehicle according to the obtained parameters, and executing a deviation early warning strategy.
Further, the detecting the characteristic points of the lane lines on two sides of the current driving lane by using the Res2Net50-VHA network comprises the following steps:
constructing a Res2Net50 network as a main feature extraction network for extracting features of an input image; the Res2Net50 network is formed by connecting a plurality of Res2Net modules in series, and each Res2Net module constructs layered residual connection;
embedding the VHA vertical-horizontal attention module into the last three output levels of the Res2Net50 network to obtain the Res2Net50-VHA network; the network downsampling extracts the lane line image features, which are spliced into a probability matrix of size h × (w + 1) × n, from which the expected position of each lane line feature point in each row is solved;
detecting the feature points with a grid classification method: the lane line area of the picture is divided into several grids, and if the number of lane line pixels within a grid exceeds a given threshold, the grid center point is taken as the lane line feature point coordinate.
Further, each Res2Net module constructs a hierarchical residual connection, specifically:
After the first 1 × 1 convolution, the feature map is divided by channel into s subsets, defined as x_i, i ∈ {1, 2, ..., s}; the output y_i is calculated as follows:

y_i = x_i, i = 1;  y_i = K_i(x_i), i = 2;  y_i = K_i(x_i + y_{i−1}), 2 < i ≤ s

The feature scale of each subset is the same, but each holds 1/s of the input channels. Except for x_1, every sub-feature passes through a 3 × 3 convolution, defined as K_i(·); the sub-feature x_i is added to the output of K_{i−1} and fed into K_i. That is, the convolution kernel over n channels (n = s × w) is replaced by s convolution kernels over w channels each. The Res2Net module is embedded into ResNet50 to obtain Res2Net50, which serves as the backbone feature extraction network for extracting lane line features in the night scene.
Further, a VHA vertical-horizontal attention module is adopted to enhance the feature extraction capability: the output feature map of the n-th network layer is Y_n ∈ R^{h×w×c}, and the VHA generates an attention feature map A_n ∈ R^{h×w×c}.

Specifically, maximum pooling and average pooling are performed on Y_n in the vertical and horizontal directions respectively, yielding H_n ∈ R^{h×1×c} and W_n ∈ R^{1×w×c}:

H_n = MaxPool_{K_h}(Y_n) + AvgPool_{K_h}(Y_n)
W_n = MaxPool_{K_w}(Y_n) + AvgPool_{K_w}(Y_n)

where K_h = [1, w] and K_w = [h, 1] denote the pooling window sizes.

Then, H_n and W_n are sent to a shared module consisting of two convolution layers; the horizontal and vertical features pass through the shared module and a sigmoid activation function to produce H′_n and W′_n, with a 1 × 1 convolution layer and dimensionality reduction factor r added to reduce the number of channels:

H′_n = f(Θ(H_n))
W′_n = f(Θ(W_n))

where f is the sigmoid activation function and Θ is the shared module.

Finally, the feature maps H′_n and W′_n are multiplied with Y_n to obtain the attention feature map A_n:

A_n = Y_n × H′_n × W′_n.
Further, the image is divided into a grid of 48 rows by 300 columns.
Further, the feature point supplementing and tracking specifically includes:
sequentially judging, for the obtained lane line feature points, whether a feature point is detected in each of the 48 grid rows, and if not, supplementing it by linear interpolation over the detected lane line feature points;

after the feature points are supplemented, applying Kalman filtering to track the feature points of the two lane lines, in two parts, prediction and update: in the prediction stage, the system predicts the optimal state x̂_t of frame t from the optimal state x̂_{t−1} of the previous frame t−1; in the update stage, the filter is updated with the feature point coordinates detected in each frame, and the optimal feature point positions at time t are computed iteratively for the next frame, giving the optimal feature point input values.
Further, a combined fitting mode of a linear equation and a quadratic polynomial is applied to fit the upper part and the lower part of the lane line respectively to obtain a left lane line equation and a right lane line equation, and the method specifically comprises the following steps:
fitting the 30 rows of lane line feature points in the upper part of the grid with a quadratic curve, fitting the 18 rows at the bottom of the grid with a linear equation, and outputting the polar radius and polar angle of each lane line equation and the coordinates of each lane line's lower endpoint; from these, the yaw angle β, the lateral distance D_L between the left lane line and the vehicle's central axis, and the lateral distance D_R between the right lane line and the vehicle's central axis are obtained, and the lane center line equation is obtained by combining the left and right lane line equations.
Further, the departure warning strategy specifically includes:
when a steering lamp of the vehicle is turned on, the lane departure early warning is not carried out by default; when the turn signal lamp is not turned on, the system can give an early warning to the behavior of crossing the lane and deviating the lane in the driving process of the vehicle, and the method specifically comprises the following steps:
adopting the transverse distance as a main early warning parameter, comparing the transverse distance of the central axis of the vehicle deviating from the central line of the lane with a safety threshold T, and giving early warning aiming at the following conditions:
when D is present L If the value is more than or equal to T, the system gives a left deviation early warning;
when D is present R If the value is more than or equal to T, the system gives a right deviation early warning;
when D is present L 、D R Entering an auxiliary early warning process when the T is less than or equal to T;
using the yaw angle beta of the vehicle as an aid to the systemEarly warning parameters, comparing yaw angle with safety threshold beta i And giving early warning aiming at the following conditions:
when more than two continuous frames of images beta is detected to be more than beta ≧ beta i The system gives a left deviation early warning;
when more than two continuous frames of images are detected, beta is less than or equal to-beta i The system gives a right deviation early warning;
when-beta i ≤β≤β i At this time, the vehicle is considered to be in a safe driving state, and lane departure is not caused.
Further, when the vehicle changes lanes without turning on the turn signal and has not yet adjusted to a safe state in the new lane, the system still judges the vehicle to be in a departure state and keeps the warning triggered until the vehicle adjusts to a safe state, at which point the system turns off the warning information.
A system for realizing the night scene lane departure early warning method, comprising:
a lane line feature extraction module: used for extracting the feature points of the lane lines on both sides in the front road image;
a feature point position judging module: used for determining the positions of the lane line feature points on both sides of the lane;
a lane line feature point tracking module: used for supplementing the lane line feature points on both sides by linear interpolation and then tracking the feature points of the two lane lines with Kalman filtering;
a lane line feature point fitting module: used for fitting the lane line feature points with a quadratic polynomial and a linear equation to obtain the left and right lane lines;
a parameter calculation module: used for calculating parameters including the lane curvature, road direction, lateral distance of the vehicle relative to the lane center line, and yaw angle from the obtained left and right lane lines;
a lane departure judging and early warning module: used for judging the driving state of the vehicle and executing the departure warning strategy.
The beneficial effects are as follows:
1. The invention adopts deep learning with vertical-horizontal attention, and can accurately identify lane lines and infer their positions in night environments.
2. The invention adopts a Kalman filter to track the lane line feature points, making lane line detection more accurate.
3. The invention combines a primary warning with an auxiliary warning, comprehensively using the driving yaw angle and the lateral distance to formulate the warning strategy and provide more comprehensive warning information.
4. The algorithm used by the invention offers high recognition accuracy, high computational efficiency, and strong anti-interference capability, and the program outputs an intuitive visualization in real time.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of Res2Net module according to the present invention;
FIG. 3 is a diagram of the Res2Net50-VHA network architecture of the present invention;
FIG. 4 is a schematic diagram of a VHA module according to the present invention;
FIG. 5 is a schematic diagram illustrating a detection principle of a characteristic point of a lane line according to the present invention;
FIG. 6 is a schematic view of the lane departure center line of the present invention;
FIG. 7 is a schematic illustration of the yaw angle of the vehicle of the present invention;
FIG. 8 is a schematic diagram of the lane departure warning strategy of the present invention;
fig. 9 is a schematic interface diagram of the lane departure warning system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Examples
The vehicle-mounted camera used by the invention is mounted at the front of the vehicle, flush with the vehicle's central axis. The overall flow is shown in fig. 1. The invention relates to an intelligent connected automobile lane departure early warning method for night scenes, comprising the following steps:
s1, acquiring a front road image by a vehicle-mounted camera, and detecting characteristic points of lane lines on two sides of a current driving lane by using a Res2Net50-VHA network after preprocessing;
s11, constructing Res2Net50 as a backbone feature extraction network, wherein Res2N is used for extracting the backbone featureThe et module is connected in series, and the et module is used for extracting the characteristics of the input image. As shown in fig. 2, in a single residual block, the Res2Net module constructs a hierarchical residual connection, so that the network obtains details and global features at a finer granularity level, which is beneficial to extracting and estimating the features of the lane lines in the environments of lane line occlusion, loss and night, and improves the feature expression capability of the network. Specifically, after the first 1 × 1 convolution, the input feature map is divided into s subsets, defined as x, by channels i I ∈ {1,2,..., s }. Each feature has the same dimension, but the channel is 1/s of the input feature, except for x 1 Other sub-features have corresponding 3 × 3 convolution kernels, defined as K i (x) With an output of y i . Sub-feature x i And K i-1 (x) Added and input to K i (x)。
Calculating output y according to equation (1) i
Figure BDA0004002416880000061
Res2Net50 is used to extract features of the input image. The network structure is shown in fig. 3, assuming that the resolution of the input feature map is 288 × 800, the input feature map is firstly down-sampled to 1/2 times to obtain a 144 × 400 feature map, and then four stages are performed, wherein the feature maps extracted at four different stages correspond to the resolutions of 1/4, 1/8, 1/16 and 1/32 of the input feature map.
The feature scale of each subset is the same, but the channel is 1/s of the input feature. Except for x 1 Besides, other sub-features have corresponding 3 × 3 convolution kernels, defined as K i (x) Sub-feature x i And K i-1 (x) Added and input to K i (x) In that respect Namely: the convolution kernel for n channels (n = s × w) is replaced by the convolution kernel for s w channels.
And embedding the Res2Net module into the ResNet50 to obtain the Res2Net50, and taking the Res2Net50 as a main feature extraction network for extracting the lane line features in the night scene.
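By way of illustration only, the hierarchical residual connection can be sketched as follows, assuming PyTorch; the scale s = 4, the channel widths, and the block layout are illustrative choices, not values fixed by the invention.

```python
import torch
import torch.nn as nn

class Res2NetBlock(nn.Module):
    """Minimal sketch of a Res2Net-style block with hierarchical residuals."""
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        w = channels // scale                       # each subset keeps 1/s of the channels
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        # one 3x3 convolution K_i for every subset except x_1
        self.convs = nn.ModuleList(
            nn.Conv2d(w, w, kernel_size=3, padding=1) for _ in range(scale - 1)
        )
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        out = self.conv1(x)
        xs = torch.chunk(out, self.scale, dim=1)    # split into s subsets x_i
        ys = [xs[0]]                                # y_1 = x_1 (identity branch)
        for i in range(1, self.scale):
            inp = xs[i] if i == 1 else xs[i] + ys[-1]   # x_i + y_{i-1}, eq. (1)
            ys.append(self.convs[i - 1](inp))           # y_i = K_i(...)
        return self.conv3(torch.cat(ys, dim=1)) + x     # residual connection
```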
S12, embed the Vertical and Horizontal Attention (VHA) module into the last three output levels of Res2Net50 to obtain Res2Net50-VHA, as shown in fig. 3; the network downsampling extracts the lane line image features, which are spliced into a probability matrix of size h × (w + 1) × n, from which the expected position of each lane line feature point in each row is solved.

The VHA integrates the texture and position information of the lane lines from the vertical and horizontal pixels without introducing heavy computation; embedded at suitable positions in Res2Net50, it improves the accuracy of lane line detection in night environments.
Further, the details of the VHA vertical-horizontal attention module are as follows: the output feature map of the n-th network layer is Y_n ∈ R^{h×w×c}, and the VHA generates an attention feature map A_n ∈ R^{h×w×c}, where A_n and Y_n have the same number of channels. Specifically, maximum pooling and average pooling are performed on Y_n in the vertical and horizontal directions respectively, yielding H_n ∈ R^{h×1×c} and W_n ∈ R^{1×w×c}, as shown in equations (2) and (3), where K_h = [1, w] and K_w = [h, 1] denote the pooling window sizes. Then, H_n and W_n are sent to a shared module consisting of two convolution layers, as shown in equations (4) and (5), where f is the sigmoid activation function and Θ is the shared module; the horizontal and vertical features pass through the shared module and the sigmoid activation to produce H′_n and W′_n, with a 1 × 1 convolution layer and dimensionality reduction factor r added to reduce the number of channels. Finally, the feature maps H′_n and W′_n are multiplied with Y_n to obtain the attention feature map A_n, as shown in equation (6).

H_n = MaxPool_{K_h}(Y_n) + AvgPool_{K_h}(Y_n)    (2)
W_n = MaxPool_{K_w}(Y_n) + AvgPool_{K_w}(Y_n)    (3)
H′_n = f(Θ(H_n))    (4)
W′_n = f(Θ(W_n))    (5)
A_n = Y_n × H′_n × W′_n    (6)
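By way of illustration only, a minimal PyTorch sketch of the VHA module follows; summing the max- and average-pooled features before the shared module Θ, and the reduction factor r = 16, are assumptions of the sketch rather than details fixed above.

```python
import torch
import torch.nn as nn

class VHA(nn.Module):
    """Sketch of vertical-horizontal attention per equations (2)-(6)."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        # shared module Theta: 1x1 conv reduces channels by r, then restores them
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),
        )

    def forward(self, y):                            # y: (B, c, h, w)
        # pool across the width -> H_n (h x 1); across the height -> W_n (1 x w)
        h_n = torch.amax(y, dim=3, keepdim=True) + torch.mean(y, dim=3, keepdim=True)
        w_n = torch.amax(y, dim=2, keepdim=True) + torch.mean(y, dim=2, keepdim=True)
        h_att = torch.sigmoid(self.shared(h_n))      # H'_n = f(Theta(H_n))
        w_att = torch.sigmoid(self.shared(w_n))      # W'_n = f(Theta(W_n))
        return y * h_att * w_att                     # A_n = Y_n x H'_n x W'_n
```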
The lane line detection principle is shown in fig. 5. A fixed area of the image is divided into h × w grids, each grid in each row is classified, and the lane line feature points are searched. Taking the row in the dashed box on the left of fig. 5 as an example, the positions of the n lane line feature points in that row are searched n times; the dark cell in the middle of the right side marks the position of the lane line feature point in that row (black block).

Further, the grid classification method adopted in this embodiment treats lane line detection as a process of row-by-row classification and grid-by-grid selection using the global features of the image. The search process is: divide a fixed area of the image into h × w grids, classify each grid in each row, and search for the lane line feature point positions. The invention focuses only on the lane lines on the two sides of the current lane, so each grid row is traversed twice to find the positions of the two lane line feature points in that row. In practice, the lane line area of the image is divided into a grid of 48 rows × 300 columns, and the grid center coordinates are taken as the feature point position coordinates.

The grid classification method divides the lane line area of the picture into several grids and judges the lane line feature point positions by classifying the grids; compared with semantic segmentation, this greatly reduces the amount of computation and increases the detection speed.
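By way of illustration, a minimal sketch of recovering feature point coordinates from the h × (w + 1) × n probability matrix follows, assuming NumPy; treating the extra (w + 1)-th column as a "no lane line in this row" class is an assumption of the sketch, while the 288 × 800 image and the 48 × 300 grid follow this embodiment.

```python
import numpy as np

def feature_points(probs, img_w=800, img_h=288, rows=48, cols=300):
    """probs: (rows, cols + 1, n_lanes) row-wise class probabilities."""
    points = [[] for _ in range(probs.shape[2])]
    cell_w, cell_h = img_w / cols, img_h / rows
    for lane in range(probs.shape[2]):
        for r in range(rows):
            col = int(np.argmax(probs[r, :, lane]))
            if col == cols:                 # background class: no point in this row
                continue
            # the grid-cell centre is taken as the feature-point coordinate
            points[lane].append((col * cell_w + cell_w / 2,
                                 r * cell_h + cell_h / 2))
    return points
```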
S13, screen the night scene images from the source dataset CULane to build a night scene lane line dataset as the network's training data, so that the network learns lane line characteristics in night environments. In the training phase, an auxiliary training branch is activated: as shown by the dashed box in fig. 3, Res2Net outputs feature maps of four levels and different sizes; the feature maps of the last three levels are extracted, upsampled, and fused to compute the lane line semantic loss. This branch is not activated during prediction.
S2, supplement the feature points, then apply Kalman filtering to track the feature points of the two lane lines, specifically:

sequentially judge, for the obtained lane line feature points, whether a feature point is detected in each of the 48 grid rows; if not, supplement it by linear interpolation over the detected lane line feature points.
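A minimal sketch of the linear interpolation supplement, assuming NumPy and one x-coordinate per grid row, with NaN marking rows where no point was detected:

```python
import numpy as np

def supplement(xs):
    """xs: length-48 array of per-row x-coordinates, NaN where undetected."""
    xs = np.asarray(xs, dtype=float)
    rows = np.arange(len(xs))
    valid = ~np.isnan(xs)
    if valid.sum() < 2:                     # too few detections to interpolate
        return xs
    xs[~valid] = np.interp(rows[~valid], rows[valid], xs[valid])
    return xs
```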
After the feature points are supplemented, Kalman filtering tracks the feature points of the two lane lines in two parts, prediction and update: in the prediction stage, the system predicts the optimal state x̂_t of frame t from the optimal state x̂_{t−1} of the previous frame t−1; in the update stage, the filter is updated with the feature point coordinates detected in each frame, and the optimal feature point positions at time t are computed iteratively for the next frame, giving the optimal feature point input values.
The calculation process is shown in formulas (7) to (9).
x̂_t^− = F x̂_{t−1} + B u_t + w_t    (7)
P_t^− = F P_{t−1} F^T + Q    (8)
z_t = H x_t + v    (9)

where F is the state transition matrix, which infers the current state from the previous state; B is the control matrix, which describes how the control quantity u acts on the current state; and w_t is the system noise, following a Gaussian distribution. Equation (8) represents the transfer of the noise covariance matrix, where P_t^− is the predicted covariance at time t, P_{t−1} is the covariance at time t−1, and Q is the covariance of the system process noise. Equation (9) is the observation equation, where z_t is the observed value, H is the observation matrix, x_t is the current system state, and v is the measurement noise, following a Gaussian distribution.

In the update stage, the Kalman coefficient K_t is first calculated by equation (10) to determine the weights of the prediction model and the observation model; the optimal updated state x̂_t and covariance P_t are then calculated by equations (11) and (12):

K_t = P_t^− H^T (H P_t^− H^T + R)^{−1}    (10)
x̂_t = x̂_t^− + K_t (z_t − H x̂_t^−)    (11)
P_t = (I − K_t H) P_t^−    (12)

where R is the covariance matrix of the measurement noise. In equation (11), the residual between the actual observation z_t and the expected observation H x̂_t^− is multiplied by the Kalman coefficient to correct the predicted value x̂_t^−.
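By way of illustration, a compact per-point tracker built from equations (7) to (12) follows, assuming NumPy; the constant-velocity state (x, y, vx, vy) and the noise covariances Q and R are illustrative choices, not values fixed by the invention.

```python
import numpy as np

class PointTracker:
    """Kalman tracker for one lane line feature point, eqs. (7)-(12)."""
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state estimate at t-1
        self.P = np.eye(4)
        self.F = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.],
                           [0., 0., 1., 0.], [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
        self.Q, self.R = q * np.eye(4), r * np.eye(2)

    def step(self, z):
        """z: observed (x, y) of the feature point in the current frame."""
        x_pred = self.F @ self.x                                  # eq. (7), u = 0
        P_pred = self.F @ self.P @ self.F.T + self.Q              # eq. (8)
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)                  # eq. (10)
        self.x = x_pred + K @ (np.asarray(z) - self.H @ x_pred)   # eq. (11)
        self.P = (np.eye(4) - K @ self.H) @ P_pred                # eq. (12)
        return self.x[:2]            # optimal feature point position at time t
```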
S3, fit the lane line feature points. Because the lane line near the bottom of the image shows little distortion and its curvature is small, a straight-line fitting model is selected for the 18 rows of feature points at the bottom of the grid. The left and right lane lines are calculated as shown in equations (13) and (14):

Left lane line: ρ_1 = x cos θ_1 + y sin θ_1    (13)
Right lane line: ρ_2 = x cos θ_2 + y sin θ_2    (14)
The lane lines in the 30 rows above show an obvious perspective effect, so a quadratic curve is used to fit the curved lane. The approximating equation of the lane line is found by the least squares method, as shown in equation (15). With the coordinates of each feature point on the lane line denoted (x_i, y_i), the deviation of every feature point from the target curve is computed and the sum of squared deviations S is formed, as shown in equation (16). For the target curve to approach the actual road curve, S should be as small as possible; when S reaches its minimum, the target curve is the one closest to the actual lane line.

y = a_0 + a_1 x + a_2 x^2    (15)
S = Σ_i [y_i − (a_0 + a_1 x_i + a_2 x_i^2)]^2    (16)
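A minimal sketch of the combined fit, assuming NumPy: a least squares quadratic per equations (15) and (16) for the upper 30 rows and a straight line for the bottom 18 rows, converted to the polar form of equations (13) and (14); the top-to-bottom point ordering is an assumption of the sketch.

```python
import numpy as np

def fit_lane(points):
    """points: 48 (x, y) feature points, ordered top of image -> bottom."""
    pts = np.asarray(points, dtype=float)
    upper, lower = pts[:30], pts[30:]
    a2, a1, a0 = np.polyfit(upper[:, 0], upper[:, 1], deg=2)  # y = a0 + a1*x + a2*x^2
    k, b = np.polyfit(lower[:, 0], lower[:, 1], deg=1)        # y = k*x + b
    theta = np.arctan2(1.0, -k)          # normal direction of the bottom line
    rho = b * np.sin(theta)              # rho = x*cos(theta) + y*sin(theta)
    return (a0, a1, a2), (rho, theta)
```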
S4, calculate the relevant parameters, including the lane curvature, road direction, and the lateral distance and yaw angle of the vehicle relative to the lane center line.
Road curvature radius: the lane center line equation is obtained by combining the left and right lane line equations, taken here as the mean of the two fitted curves, as shown in equation (17). The reciprocal of the curvature at a point on a curve is the radius of curvature at that point, calculated as shown in equation (18); substituting the lane center line equation into the radius-of-curvature formula gives equation (19).

y_c = (f_L(x) + f_R(x)) / 2 = a_0 + a_1 x + a_2 x^2    (17)
R = (1 + (dy/dx)^2)^{3/2} / |d^2y/dx^2|    (18)
R = (1 + (a_1 + 2 a_2 x)^2)^{3/2} / |2 a_2|    (19)
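A small sketch of equations (18) and (19), evaluating the curvature radius of the fitted center line y = a_0 + a_1 x + a_2 x^2 at a given x (assuming a_2 ≠ 0, i.e. a curved lane):

```python
def curvature_radius(a1: float, a2: float, x: float) -> float:
    dy = a1 + 2.0 * a2 * x                        # first derivative y'
    d2y = 2.0 * a2                                # second derivative y''
    return (1.0 + dy * dy) ** 1.5 / abs(d2y)      # eq. (19)
```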
Yaw angle: the yaw angle is the angle between the vehicle's longitudinal central axis and the bisector of the angle formed by the two lane lines, and reflects the directional deviation of the vehicle while driving. From the lane line feature points detected by the algorithm, a linear equation is fitted to the feature points of the bottom 18 grid rows, and the polar radius and polar angle of each lane line equation together with the coordinates of each lane line's lower endpoint are output. From these, the yaw angle β, the lateral distance D_L between the left lane line and the vehicle's central axis, and the lateral distance D_R between the right lane line and the vehicle's central axis are obtained. The equation of the bisector of the angle between the left and right lane lines is shown in equation (20), and the yaw angle in equation (21):

x cos θ_1 + y sin θ_1 − ρ_1 = −(x cos θ_2 + y sin θ_2 − ρ_2)    (20)
β = (θ_1 + θ_2)/2 − π/2    (21)
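By way of illustration, a sketch of the yaw angle and lateral distance computation from the two polar-form bottom fits follows; the image size, the bottom-row evaluation, and the sign conventions are assumptions of the sketch, not parameters fixed above.

```python
import numpy as np

def yaw_and_offsets(rho1, theta1, rho2, theta2, img_w=800, img_h=288):
    # x where each line rho = x*cos(theta) + y*sin(theta) crosses the bottom row
    x_left = (rho1 - img_h * np.sin(theta1)) / np.cos(theta1)
    x_right = (rho2 - img_h * np.sin(theta2)) / np.cos(theta2)
    x_mid = img_w / 2.0                    # camera flush with the central axis
    d_left = x_mid - x_left                # D_L: left line to central axis
    d_right = x_right - x_mid              # D_R: right line to central axis
    beta = (theta1 + theta2) / 2.0 - np.pi / 2.0   # eq. (21), sign convention assumed
    return beta, d_left, d_right
```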
S5, judge the driving state of the vehicle by integrating the lateral distance between the vehicle and the lane center line with the driving yaw angle, specifically:

The invention combines a primary warning with an auxiliary warning, comprehensively using the two parameters of driving yaw angle and lateral distance to formulate the warning measures. When a turn signal of the vehicle is on, lane departure warning is suppressed by default. When no turn signal is on, the system warns against lane crossing and lane departure behaviors during driving, safeguarding the driver.

First, the lateral distance is used as the primary warning parameter, with the actual distance represented by the pixel distance. The lateral distance of the vehicle's central axis from the lane center line is compared with a safety threshold T, and warnings are given as follows:

● when D_L ≥ T, the system gives a left departure warning;
● when D_R ≥ T, the system gives a right departure warning;
● when D_L ≤ T and D_R ≤ T, the auxiliary warning process is entered.

The vehicle yaw angle β is taken as the auxiliary warning parameter and compared with a safety threshold β_i, with warnings given as follows:

● when β ≥ β_i is detected in two or more consecutive frames, the system gives a left departure warning;
● when β ≤ −β_i is detected in two or more consecutive frames, the system gives a right departure warning;
● when −β_i ≤ β ≤ β_i, the vehicle is considered to be in a safe driving state and no lane departure occurs.

When the vehicle changes lanes without turning on the turn signal and has not yet adjusted to a safe state in the new lane, the system still judges the vehicle to be in a departure state and keeps the warning triggered until the vehicle adjusts to a safe state, at which point the system turns off the warning information.
The vehicle lateral distance and yaw angle calculations are shown in fig. 6 and 7, and the departure decision flow is shown in fig. 8.
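By way of illustration, a straightforward sketch of this decision strategy follows; the thresholds T and β_i and the two-frame persistence counters are illustrative values, not thresholds fixed by the invention.

```python
class DepartureMonitor:
    """Primary (lateral distance) plus auxiliary (yaw angle) warning logic."""
    def __init__(self, T=60.0, beta_i=0.05):
        self.T, self.beta_i = T, beta_i
        self.left_frames = self.right_frames = 0

    def update(self, d_left, d_right, beta, turn_signal_on):
        if turn_signal_on:
            return "no warning"                    # intended lane changes not flagged
        if d_left >= self.T:
            return "left departure warning"        # primary: lateral distance
        if d_right >= self.T:
            return "right departure warning"
        # auxiliary: yaw angle persisting over two or more consecutive frames
        self.left_frames = self.left_frames + 1 if beta >= self.beta_i else 0
        self.right_frames = self.right_frames + 1 if beta <= -self.beta_i else 0
        if self.left_frames >= 2:
            return "left departure warning"
        if self.right_frames >= 2:
            return "right departure warning"
        return "safe"
```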
S6, display the vehicle, the detected lane lines, and the road conditions in real time in the program, and give a voice prompt when the vehicle is about to deviate, specifically:

The method is integrated into a PyQt-based lane departure early warning system comprising five functional areas: a video display area, a state prompt area, a parameter selection area, a parameter display area, and a system control area. The video display area shows, in real time, the camera images acquired during driving and the processed lane line images; the state prompt area presents the driving state of the vehicle in an intuitive schematic; the parameter selection area is used to select and set the current parameters; the parameter display area shows the system's analysis results; and the system control area controls the system, mainly through interactive buttons for system start and stop, camera calibration, video testing, simple mode, data saving, and the like. The system interface is shown in fig. 9.
The present embodiment also provides a system for implementing the method, including:
a lane line feature extraction module: used for extracting the feature points of the lane lines on both sides in the front road image; it uses at least one vehicle-mounted camera, mounted at the front of the motor vehicle.
a feature point position judging module: used for determining the positions of the lane line feature points on both sides of the lane;
a lane line feature point tracking module: used for supplementing the lane line feature points on both sides by linear interpolation and then tracking the feature points of the two lane lines with Kalman filtering;
a lane line feature point fitting module: used for fitting the lane line feature points with a quadratic polynomial and a linear equation to obtain the left and right lane lines;
a parameter calculation module: used for calculating parameters including the lane curvature, road direction, lateral distance of the vehicle relative to the lane center line, and yaw angle from the obtained left and right lane lines;
a lane departure judging and early warning module: used for judging the driving state of the vehicle and executing the departure warning strategy.
The invention addresses the safety problem of intelligent connected vehicles driving on roads at night. Considering the characteristics of lane lines in night scenes, Res2Net50-VHA effectively improves lane line detection accuracy; grid classification replaces semantic segmentation, speeding up lane line feature point detection; a Kalman filter tracks the lane line feature points to keep detection stable; a comprehensive warning strategy based on the vehicle yaw angle and lateral distance provides lane departure warning information and reduces the risk of lane departure; and finally, the program outputs an intuitive visualization in real time.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention is regarded as an equivalent replacement and falls within the scope of the present invention.

Claims (10)

1. A method for automobile lane departure early warning in a night scene, characterized by comprising the following steps:
collecting a front road image, preprocessing the front road image, and detecting characteristic points of lane lines on two sides of a current driving lane by using a Res2Net50-VHA network;
supplementing the lane line feature points on both sides by linear interpolation, and tracking the lane line feature points across consecutive frame images with Kalman filtering;
fitting the upper part and the lower part of the lane line respectively by using a combined fitting mode of a linear equation and a quadratic polynomial to obtain a left lane line equation and a right lane line equation;
calculating relevant parameters from the obtained left and right lane lines, including the road curvature, the lane center line equation, the lateral distance of the vehicle relative to the lane center line, and the driving yaw angle;
and judging the running state of the vehicle according to the obtained parameters, and executing a deviation early warning strategy.
2. The method for warning lane departure of a vehicle in a night scene according to claim 1, wherein the detecting the feature points of the lane lines on both sides of the current driving lane using Res2Net50-VHA network comprises:
constructing a Res2Net50 network as a main feature extraction network for extracting features of an input image; the Res2Net50 network is formed by connecting a plurality of Res2Net modules in series, and each Res2Net module constructs layered residual connection;
embedding the VHA vertical-horizontal attention module into the last three output levels of the Res2Net50 network to obtain the Res2Net50-VHA network; the network downsampling extracts the lane line image features, which are spliced into a probability matrix of size h × (w + 1) × n, from which the expected position of each lane line feature point in each row is solved;
and (3) detecting the characteristic points by adopting a grid classification method, dividing the lane line area in the picture into a plurality of grids, and if the pixels of the lane lines existing in the grids are greater than a given threshold value, taking the center points of the grids as the coordinates of the characteristic points of the lane lines.
3. The early warning method for the lane departure of the automobile in the night scene according to claim 2, wherein each Res2Net module constructs a layered residual connection, specifically:
after the first 1 × 1 convolution, the feature map is divided by channel into s subsets, defined as x_i, i ∈ {1, 2, ..., s}, and the output y_i is calculated as follows:

y_i = x_i, i = 1;  y_i = K_i(x_i), i = 2;  y_i = K_i(x_i + y_{i−1}), 2 < i ≤ s

the feature scale of each subset is the same, but each holds 1/s of the input channels; except for x_1, every sub-feature passes through a 3 × 3 convolution, defined as K_i(·), and the sub-feature x_i is added to the output of K_{i−1} and fed into K_i; that is, the convolution kernel over n channels (n = s × w) is replaced by s convolution kernels over w channels each; the Res2Net module is embedded into ResNet50 to obtain Res2Net50, which serves as the backbone feature extraction network for extracting lane line features in the night scene.
4. The method for automobile lane departure early warning in a night scene according to claim 1, wherein a VHA vertical-horizontal attention module is adopted to enhance the feature extraction capability: the output feature map of the n-th network layer is Y_n ∈ R^{h×w×c}, and the VHA generates an attention feature map A_n ∈ R^{h×w×c};

specifically, maximum pooling and average pooling are performed on Y_n in the vertical and horizontal directions respectively, yielding H_n ∈ R^{h×1×c} and W_n ∈ R^{1×w×c}:

H_n = MaxPool_{K_h}(Y_n) + AvgPool_{K_h}(Y_n)
W_n = MaxPool_{K_w}(Y_n) + AvgPool_{K_w}(Y_n)

where K_h = [1, w] and K_w = [h, 1] denote the pooling window sizes;

then, H_n and W_n are sent to a shared module consisting of two convolution layers, and the horizontal and vertical features pass through the shared module and a sigmoid activation function to produce H′_n and W′_n, with a 1 × 1 convolution layer and dimensionality reduction factor r added to reduce the number of channels:

H′_n = f(Θ(H_n))
W′_n = f(Θ(W_n))

where f is the sigmoid activation function and Θ is the shared module;

finally, the feature maps H′_n and W′_n are multiplied with Y_n to obtain the attention feature map A_n:

A_n = Y_n × H′_n × W′_n.
5. The method as claimed in claim 2, wherein the image is divided into 48 rows by 300 columns.
6. The early warning method for lane departure of an automobile under a night scene according to claim 5, wherein the feature point supplementing and tracking specifically comprises:
sequentially judging, for the obtained lane line feature points, whether a feature point is detected in each of the 48 grid rows, and if not, supplementing it by linear interpolation over the detected lane line feature points;

after the feature points are supplemented, applying Kalman filtering to track the feature points of the two lane lines, in two parts, prediction and update: in the prediction stage, the system predicts the optimal state x̂_t of frame t from the optimal state x̂_{t−1} of the previous frame t−1; in the update stage, the filter is updated with the feature point coordinates detected in each frame, and the optimal feature point positions at time t are computed iteratively for the next frame, giving the optimal feature point input values.
7. The early warning method for the lane departure of the automobile under the night scene according to claim 1, wherein a combined fitting mode of a linear equation and a quadratic polynomial is applied to respectively fit the upper part and the lower part of the lane line to obtain a left lane line equation and a right lane line equation, and specifically comprises the following steps:
fitting the 30 rows of lane line feature points in the upper part of the grid with a quadratic curve, fitting the 18 rows at the bottom of the grid with a linear equation, and outputting the polar radius and polar angle of each lane line equation and the coordinates of each lane line's lower endpoint; from these, the yaw angle β, the lateral distance D_L between the left lane line and the vehicle's central axis, and the lateral distance D_R between the right lane line and the vehicle's central axis are obtained, and the lane center line equation is obtained by combining the left and right lane line equations.
8. The method for early warning of the lane departure of an automobile in a night scene according to claim 1, wherein the departure early warning strategy specifically comprises:
when a turn signal of the vehicle is on, lane departure warning is suppressed by default; when no turn signal is on, the system warns against lane crossing and lane departure behaviors during driving, specifically:

the lateral distance is used as the primary warning parameter: the lateral distance of the vehicle's central axis from the lane center line is compared with a safety threshold T, and warnings are given as follows:

when D_L ≥ T, the system gives a left departure warning;
when D_R ≥ T, the system gives a right departure warning;
when D_L ≤ T and D_R ≤ T, the auxiliary warning process is entered;

the vehicle yaw angle β is taken as the auxiliary warning parameter and compared with a safety threshold β_i, with warnings given as follows:

when β ≥ β_i is detected in two or more consecutive frames, the system gives a left departure warning;
when β ≤ −β_i is detected in two or more consecutive frames, the system gives a right departure warning;
when −β_i ≤ β ≤ β_i, the vehicle is considered to be in a safe driving state and no lane departure occurs.
9. The night scene lane departure early warning method according to claim 8, wherein, when the vehicle changes lanes without turning on the turn signal and has not yet adjusted to a safe state in the new lane, the system still judges the vehicle to be in a departure state and keeps the warning triggered until the vehicle adjusts to a safe state, at which point the system turns off the warning information.
10. A system for implementing the night scene lane departure early warning method according to any one of claims 1 to 9, comprising:
a lane line feature extraction module: used for extracting the feature points of the lane lines on both sides in the front road image;
a feature point position judging module: used for determining the positions of the lane line feature points on both sides of the lane;
a lane line feature point tracking module: used for supplementing the lane line feature points on both sides by linear interpolation and then tracking the feature points of the two lane lines with Kalman filtering;
a lane line feature point fitting module: used for fitting the lane line feature points with a quadratic polynomial and a linear equation to obtain the left and right lane lines;
a parameter calculation module: used for calculating parameters including the lane curvature, road direction, lateral distance of the vehicle relative to the lane center line, and yaw angle from the obtained left and right lane lines;
a lane departure judging and early warning module: used for judging the driving state of the vehicle and executing the departure warning strategy.
CN202211621454.1A 2022-12-16 2022-12-16 Automobile lane departure early warning method and system under night scene Pending CN115880658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211621454.1A CN115880658A (en) 2022-12-16 2022-12-16 Automobile lane departure early warning method and system under night scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211621454.1A CN115880658A (en) 2022-12-16 2022-12-16 Automobile lane departure early warning method and system under night scene

Publications (1)

Publication Number Publication Date
CN115880658A true CN115880658A (en) 2023-03-31

Family

ID=85755047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211621454.1A Pending CN115880658A (en) 2022-12-16 2022-12-16 Automobile lane departure early warning method and system under night scene

Country Status (1)

Country Link
CN (1) CN115880658A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311140A (en) * 2023-05-11 2023-06-23 吉咖智能机器人有限公司 Method, apparatus and storage medium for detecting lane lines
CN116311140B (en) * 2023-05-11 2023-08-15 吉咖智能机器人有限公司 Method, apparatus and storage medium for detecting lane lines
CN117670888A (en) * 2024-02-01 2024-03-08 天津滨海雷克斯激光科技发展有限公司 Pipeline inner wall defect detection method, device, equipment and medium
CN117670888B (en) * 2024-02-01 2024-05-17 天津滨海雷克斯激光科技发展有限公司 Pipeline inner wall defect detection method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN115880658A (en) Automobile lane departure early warning method and system under night scene
CN112349144B (en) Monocular vision-based vehicle collision early warning method and system
CN109919074B (en) Vehicle sensing method and device based on visual sensing technology
CN111738037B (en) Automatic driving method, system and vehicle thereof
CN110415544B (en) Disaster weather early warning method and automobile AR-HUD system
CN111208818B (en) Intelligent vehicle prediction control method based on visual space-time characteristics
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN112990065B (en) Vehicle classification detection method based on optimized YOLOv5 model
CN107031661A (en) A kind of lane change method for early warning and system based on blind area camera input
CN111256693B (en) Pose change calculation method and vehicle-mounted terminal
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
US20200394838A1 (en) Generating Map Features Based on Aerial Data and Telemetry Data
CN113095152A (en) Lane line detection method and system based on regression
CN111738071A (en) Inverse perspective transformation method based on movement change of monocular camera
CN114763136A (en) Guide vehicle driving auxiliary system based on deep learning
CN114048536A (en) Road structure prediction and target detection method based on multitask neural network
CN117115690A (en) Unmanned aerial vehicle traffic target detection method and system based on deep learning and shallow feature enhancement
CN112597996A (en) Task-driven natural scene-based traffic sign significance detection method
CN115187959B (en) Method and system for landing flying vehicle in mountainous region based on binocular vision
CN117372991A (en) Automatic driving method and system based on multi-view multi-mode fusion
CN116630920A (en) Improved lane line type identification method of YOLOv5s network model
CN116311136A (en) Lane line parameter calculation method for driving assistance
CN116740657A (en) Target detection and ranging method based on similar triangles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination