CN118015567B - Lane dividing method and related device suitable for highway roadside monitoring - Google Patents

Lane dividing method and related device suitable for highway roadside monitoring

Info

Publication number
CN118015567B
Authority
CN
China
Prior art keywords
lane
area
space
abscissa
ordinate
Prior art date
Legal status
Active
Application number
CN202410404743.9A
Other languages
Chinese (zh)
Other versions
CN118015567A (en)
Inventor
郭延永
江典峰
吕浩
周继彪
岳全胜
罗元炜
陈晓薇
吴秀梅
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202410404743.9A
Publication of CN118015567A
Application granted
Publication of CN118015567B
Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a lane dividing method and a related device suitable for highway roadside monitoring.

Description

Lane dividing method and related device suitable for highway roadside monitoring
Technical Field
The invention relates to a lane dividing method and a related device suitable for highway roadside monitoring, and belongs to the technical field of traffic management control and lane recognition.
Background
Lane division is the basis for lane-level traffic information extraction and event judgment based on video information, and is generally used in expressway management and control. At present, lane division is usually performed by manual marking. However, because a large number of expressway cameras are deployed, manual marking is time-consuming and labor-intensive, and the marked information becomes unusable once a camera angle is changed, so re-marking is needed.
Disclosure of Invention
The invention provides a lane dividing method and a related device suitable for highway roadside monitoring, which solve the problems disclosed in the background art.
According to one aspect of the present disclosure, there is provided a lane dividing method suitable for highway roadside monitoring, including: acquiring the track of each vehicle from the traffic operation video; clustering the tracks, and performing space division according to the clustering result to obtain lane space regions corresponding to various tracks; obtaining a road space region in the traffic operation video according to the lane space regions; shielding the lane space regions in the road space region, and identifying edge points in the remaining region; and obtaining a lane dividing result according to the edge point recognition result.
In some embodiments of the present disclosure, clustering tracks, and performing spatial division according to a clustering result to obtain lane space regions corresponding to various tracks, including: dividing the track of each vehicle into a plurality of sub-tracks; and clustering all the sub-tracks, and obtaining lane space areas corresponding to all the tracks according to the clustering result and the neural network.
In some embodiments of the present disclosure, the method further includes, before the sub-track division, converting each vehicle track into a sequence of vehicle flow vectors, where a vehicle flow vector reflects the position of the vehicle at time t and its displacement from time t-1 to time t.
In some embodiments of the present disclosure, obtaining a road space region in a traffic operation video from the lane space regions includes: calculating the corner coordinates of the road space region in the traffic operation video according to the corner coordinates of the lane space regions.
In some embodiments of the present disclosure, the lane space regions and the road space region are both quadrilateral regions; the corner coordinates of the road space region are:
x1_area = max(min(x1_1, x1_2, …, x1_n) - w/m, 0);
y1_area = max(min(y1_1, y1_2, …, y1_n) - h/m, 0);
x2_area = min(max(x2_1, x2_2, …, x2_n) + w/m, w);
y2_area = max(min(y2_1, y2_2, …, y2_n) - h/m, 0);
x3_area = max(min(x3_1, x3_2, …, x3_n) - w/m, 0);
y3_area = min(max(y3_1, y3_2, …, y3_n) + h/m, h);
x4_area = min(max(x4_1, x4_2, …, x4_n) + w/m, w);
y4_area = min(max(y4_1, y4_2, …, y4_n) + h/m, h);
where x1_area, y1_area, x2_area, y2_area, x3_area, y3_area, x4_area, y4_area are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the road space region; x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the i-th lane space region, with 1 ≤ i ≤ n; w is the width of a traffic operation video frame in pixels; h is the height of a traffic operation video frame in pixels; m is an expansion threshold; and n is the number of lane space regions.
In some embodiments of the present disclosure, obtaining a lane division result according to an edge point recognition result includes: converting the identified edge points into line segments, and screening the line segments according to lane line characteristics; determining lane lines of the road space area according to the screening result; and dividing the road space region according to the lane lines to obtain lane dividing results.
In some embodiments of the present disclosure, determining lane lines of a road space region according to a screening result includes: if line segments conforming to the characteristics of the lane lines are not screened out between the adjacent lane space areas, taking the central line between the adjacent lane space areas as the lane line; if the line segments meeting the characteristics of the lane lines are screened out between the adjacent lane space areas, connecting the line segments on the same lane direction straight line into the lane lines, and reserving the longest lane line.
According to another aspect of the present disclosure, there is provided a lane dividing apparatus adapted for highway roadside monitoring, comprising:
the track acquisition module acquires the track of each vehicle from the traffic operation video;
The lane space region acquisition module clusters the tracks, and performs space division according to the clustering result to obtain lane space regions corresponding to various tracks;
The road space region acquisition module is used for acquiring a road space region in the traffic operation video according to the lane space region;
The edge point identification module is used for shielding the lane space regions in the road space region and carrying out edge point identification on the remaining region;
and the lane dividing module is used for obtaining a lane dividing result according to the edge point identification result.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a lane-dividing method suitable for highway roadside monitoring.
According to another aspect of the present disclosure, there is provided a computer device comprising one or more processors, and one or more memories, one or more programs stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing a lane-dividing method suitable for highway roadside monitoring.
The invention has the beneficial effects that: according to the method, the lane space region is obtained by clustering and space division of the vehicle tracks, the road space region is obtained according to the lane space region, the lane space region is shielded in the road space region, the lane division result is obtained through edge point identification, and time and labor-consuming manual labeling is not needed.
Drawings
FIG. 1 is a flow chart of a lane dividing method suitable for highway roadside monitoring;
FIG. 2 is a diagram of edge point recognition results;
FIG. 3 is a lane division result diagram;
fig. 4 is a block diagram of a lane dividing apparatus suitable for highway roadside monitoring.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the disclosure, its application, or its uses. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is otherwise stated.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
To overcome the drawbacks of lane division by manual labeling, the present disclosure provides a lane dividing method and a related device suitable for highway roadside monitoring. Based on image recognition and deep learning, and exploiting the conventional interaction between vehicles and lane lines on a road, the method realizes lane line recognition and lane division on the basis of vehicle track extraction, which greatly improves lane detection accuracy and overcomes the problems of the existing methods.
Fig. 1 is a schematic diagram of an embodiment of the lane dividing method suitable for highway roadside monitoring of the present disclosure; the embodiment of Fig. 1 may be performed by a server of a highway management and control system.
As shown in fig. 1, in step 1 of the embodiment, the track of each vehicle is acquired from the traffic video.
The highway roadside monitoring cameras collect traffic operation video in real time and upload the collected video to the system server for storage. When lane division is performed, about one hour of video under normal traffic conditions can be used as the video for lane division; a suitable duration can be determined according to the actual situation.
The video is read frame by frame. A YOLO-series convolutional neural network object detector can be used to perform target detection on the initial frame image and obtain the initial positions of the vehicles (i.e. the targets) in the image; vehicle positions are then detected in every frame, the positions of the vehicles in adjacent frames are compared, and a target tracking algorithm (such as the DeepSort algorithm) can be used to associate the detected targets (i.e. vehicles) of consecutive frames. Integrating the obtained results yields the track of each vehicle in the video.
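As a concrete illustration of this step, the following is a minimal sketch of the detect-then-associate loop. The functions detect_vehicles and tracker are hypothetical wrappers standing in for a YOLO-series detector and a DeepSort-style tracker; the patent does not prescribe a specific implementation or API, so these names and signatures are assumptions.

# Hedged sketch of step 1: per-frame detection plus cross-frame association.
# detect_vehicles and tracker are hypothetical placeholders, not APIs named in the patent.
import cv2
from collections import defaultdict

def extract_trajectories(video_path, detect_vehicles, tracker):
    """Return {track_id: [(frame_idx, cx, cy), ...]} for every vehicle."""
    trajectories = defaultdict(list)
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detect_vehicles(frame)           # assumed: [(x1, y1, x2, y2, score), ...]
        tracks = tracker.update(boxes, frame)    # assumed: [(track_id, x1, y1, x2, y2), ...]
        for tid, x1, y1, x2, y2 in tracks:
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # box centre used as track point
            trajectories[tid].append((frame_idx, cx, cy))
        frame_idx += 1
    cap.release()
    return dict(trajectories)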
Returning to fig. 1, in step 2 of the embodiment, the tracks are clustered, and space division is performed according to the clustering result, so as to obtain lane space regions corresponding to various tracks.
It should be noted that the tracks of vehicles running in the same lane are usually close to each other or overlap, so the tracks of all vehicles can be clustered to obtain different classes of tracks, i.e. tracks in the same lane are assigned to the same class; space division can then be performed by class to obtain the lane space region corresponding to each class of tracks.
It should be noted that, to ensure the accuracy of the subsequent clustering, the tracks need to be screened before clustering, removing tracks that are too short or whose relative displacement shows obvious errors.
Because vehicles may change lanes, and because different vehicles take different lengths of time to pass through the video area so that different tracks contain different numbers of track points, clustering whole tracks directly is often inaccurate. In some embodiments, the track of each vehicle can therefore be divided into several sub-tracks, all sub-tracks are clustered, and the lane space regions corresponding to the various classes of tracks are obtained from the clustering result and a neural network.
It should be noted that the clustering may be performed with a neural network, and a fully-connected neural network may also be used to obtain the lane space regions: 200 tracks randomly sampled from each class are input (with repeated sampling if a class contains fewer than 200 tracks), and the output is the corner points of the lane space region. In this way the situation that a small fraction of the tracks in some classes extend beyond the edge of a single road can be eliminated, and a more reliable space region is obtained. The neural network is not only fast, but its accuracy can also be improved by periodic retraining, which ensures the accuracy of the clustering and of the lane space region extraction.
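A minimal sketch of such a fully-connected regressor is given below, assuming each of the 200 sampled tracks is summarized by a fixed-length feature vector (for example, resampled coordinates). The layer widths and the per-track feature size are assumptions, since the patent does not specify the network architecture.

# Sketch of a fully-connected regressor mapping 200 sampled tracks of one class
# to the four corner points of that class's lane space region.
# tracks_per_class, feat_per_track and the hidden sizes are assumed values.
from torch import nn

class LaneRegionNet(nn.Module):
    def __init__(self, tracks_per_class=200, feat_per_track=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(tracks_per_class * feat_per_track, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 8),   # 4 corner points * (x, y) of one lane space region
        )

    def forward(self, x):        # x: (batch, tracks_per_class * feat_per_track)
        return self.net(x)

In use, the 200 sampled tracks of one class would be concatenated into a single input vector and the eight outputs interpreted as the corner coordinates of that class's lane space region.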
Generally, on an expressway segment a lane change rarely occurs within 2 seconds, so each sub-track can cover 2 seconds of vehicle motion. In addition, the clustering may produce some sparse classes, which need to be removed before the lane space regions are acquired in order to ensure the accuracy of the subsequent steps.
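A simple way to perform the 2-second sub-track split is sketched below, assuming a known video frame rate; the 25 fps value is purely an illustrative assumption.

# Cut each trajectory into consecutive windows covering roughly 2 seconds of motion,
# the interval over which lane changes are assumed to be rare.
def split_into_subtracks(trajectory, fps=25, seconds=2.0):
    """trajectory: [(frame_idx, x, y), ...] -> list of sub-tracks."""
    window = max(2, int(round(fps * seconds)))
    return [trajectory[i:i + window]
            for i in range(0, len(trajectory) - window + 1, window)]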
In some embodiments, before the sub-track division, the vehicle track is further converted into a sequence of vehicle flow vectors, as follows:
The vehicle flow vector reflects the position of the vehicle at time t and its displacement from time t-1 to time t, and can be expressed as:
f_t = (t, x_t, y_t, Δx_t, Δy_t);
where f_t is the vehicle flow vector at time t, t corresponds to the frame number of the flow vector, x_t and y_t are the abscissa and ordinate of the vehicle at time t, and Δx_t and Δy_t are the displacements of the abscissa and ordinate of the vehicle from time t-1 to time t, respectively.
Because the video frame rate is high, a sampling point is taken every 3 frames to ensure that Δx_t and Δy_t are sufficiently robust, and the track is smoothed with a Kalman filtering algorithm; the vehicle flow vector sequence obtained after this processing can be expressed as F = {f_1, f_2, …}. Since the vehicle flow vectors characterize the position and motion of the vehicle at different moments, representing the track by vehicle flow vectors facilitates the clustering.
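The conversion can be sketched as follows; the 3-frame sampling step follows the description above, while the Kalman smoothing of the raw track is assumed to have been applied beforehand and is not shown.

# Convert a trajectory into a flow-vector sequence f_t = (t, x_t, y_t, dx_t, dy_t),
# taking one sampling point every 3 frames as described in the text.
def to_flow_vectors(trajectory, step=3):
    """trajectory: [(frame_idx, x, y), ...] -> [(t, x, y, dx, dy), ...]"""
    sampled = trajectory[::step]
    flow = []
    for prev, cur in zip(sampled, sampled[1:]):
        t, x, y = cur
        _, px, py = prev
        flow.append((t, x, y, x - px, y - py))   # displacement since the previous sample
    return flow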
Returning to fig. 1, in step 3 of the embodiment, a road space region in the traffic operation video is obtained according to the lane space region.
In some embodiments, the vertex coordinates of the road space region in the traffic run video may be calculated from the vertex coordinates of the lane space region.
For convenience in determining the regions, both the lane space regions and the road space region are quadrilaterals. Suppose the road space region contains n lanes, i.e. n lane space regions; denote the i-th lane space region as area_i, with corner pixel coordinates {x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i}, 1 ≤ i ≤ n, where x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the i-th lane space region. The corner coordinates of the road space region are then:
x1_area = max(min(x1_1, x1_2, …, x1_n) - w/m, 0);
y1_area = max(min(y1_1, y1_2, …, y1_n) - h/m, 0);
x2_area = min(max(x2_1, x2_2, …, x2_n) + w/m, w);
y2_area = max(min(y2_1, y2_2, …, y2_n) - h/m, 0);
x3_area = max(min(x3_1, x3_2, …, x3_n) - w/m, 0);
y3_area = min(max(y3_1, y3_2, …, y3_n) + h/m, h);
x4_area = min(max(x4_1, x4_2, …, x4_n) + w/m, w);
y4_area = min(max(y4_1, y4_2, …, y4_n) + h/m, h);
where x1_area, y1_area, x2_area, y2_area, x3_area, y3_area, x4_area, y4_area are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the road space region; w is the width of a traffic operation video frame in pixels; h is the height of a traffic operation video frame in pixels; and m is an expansion threshold, which can be adjusted according to the conditions of the highway roadside monitoring video and may, for example, be set to 10.
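The corner formulas above translate directly into code. The sketch below assumes each lane space region is given as a tuple of its eight corner coordinates in the upper-left, upper-right, lower-left, lower-right order used in the text, and uses m = 10 as the example expansion threshold.

# Compute the road space region corners from the n lane space regions.
# lane_regions: list of (x1, y1, x2, y2, x3, y3, x4, y4) tuples;
# w, h: frame width and height in pixels; m: expansion threshold.
def road_region_corners(lane_regions, w, h, m=10):
    x1s, y1s, x2s, y2s, x3s, y3s, x4s, y4s = zip(*lane_regions)
    return (
        max(min(x1s) - w / m, 0), max(min(y1s) - h / m, 0),   # upper-left corner
        min(max(x2s) + w / m, w), max(min(y2s) - h / m, 0),   # upper-right corner
        max(min(x3s) - w / m, 0), min(max(y3s) + h / m, h),   # lower-left corner
        min(max(x4s) + w / m, w), min(max(y4s) + h / m, h),   # lower-right corner
    )

For example, road_region_corners(regions, w=1920, h=1080) reproduces x1_area through y4_area for a 1920x1080 frame.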
Returning to fig. 1, in step 4 of the embodiment, the lane space area is shielded in the road space area, and the edge point identification is performed on the remaining area.
In order to reduce the influence of the background, the region outside the road space region and the lane space regions inside it are shielded by a background subtraction method, and the remaining region is taken as the potential lane line detection region. This potential region is further processed by color conversion, Gaussian filtering and image smoothing, and its edge points are detected with the Canny edge detection algorithm and taken as the preliminarily recognized lane line result.
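A minimal OpenCV sketch of this shielding and edge-detection step is given below; the Gaussian kernel size and the Canny thresholds are illustrative assumptions, not values taken from the patent.

# Black out everything outside the road region and everything inside the lane regions,
# then extract Canny edges from the remaining band.
import cv2
import numpy as np

def detect_edge_points(frame, road_poly, lane_polys):
    """road_poly, lane_polys: (4, 2) arrays of integer pixel corners."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(road_poly, dtype=np.int32)], 255)   # keep road region
    for poly in lane_polys:                                            # shield lane regions
        cv2.fillPoly(mask, [np.asarray(poly, dtype=np.int32)], 0)
    masked = cv2.bitwise_and(frame, frame, mask=mask)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)      # color conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian filtering / smoothing
    return cv2.Canny(blurred, 50, 150)                    # preliminary lane line edges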
Returning to fig. 1, in step 5 of the embodiment, a lane dividing result is obtained according to the edge point recognition result.
In reality, lane lines are made conspicuous so that they can prompt road participants (including drivers and pedestrians); they therefore differ clearly in color from the road surface, which matches the basic principle of edge point detection. Within the specified road area, lane lines are straight lines (only straight lines are considered in the invention), so constraints such as a minimum length can be used to distinguish them from other interfering lines (such as vehicle edges and tree edges). Therefore, in some embodiments, the identified edge points can be converted into line segments, the line segments are screened according to the lane line features, the lane lines of the road space region are determined from the screening result, and the road space region is divided by the lane lines to obtain the lane division result. The resulting lane division is accurate at the lane line level and conforms to real-world lane division standards.
It should be noted that the Hough transform can be used to project the edge points into Hough space and convert them into line segments; the line segments are then screened by the lane line features, see Fig. 2, for example by using length and direction constraints to remove segments whose direction differs greatly from that of the majority of segments and segments that are significantly shorter than the others.
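The Hough projection and the length/direction screening can be sketched as follows; all thresholds are illustrative assumptions.

# Probabilistic Hough transform on the Canny edge map, followed by screening:
# keep segments close to the dominant direction and not much shorter than the rest.
import cv2
import numpy as np

def candidate_lane_segments(edges, angle_tol_deg=10.0, min_len_ratio=0.5):
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return []
    segs = lines.reshape(-1, 4)
    angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0]))
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    dominant = np.median(angles)                         # assumed dominant lane direction
    keep = (np.abs(angles - dominant) < angle_tol_deg) & \
           (lengths > min_len_ratio * np.median(lengths))
    return segs[keep].tolist()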
In some embodiments, if no line segment conforming to the lane line features is screened out between adjacent lane space regions, the center line between the adjacent lane space regions is taken as the lane line. For example, on a road section without painted lane lines the center line is used by default, and when a lane line is faded or occluded so that no segment is recognized, the center line between the adjacent lane space regions is likewise used as the lane line. If line segments conforming to the lane line features are screened out between adjacent lane space regions, the segments lying on the same straight line along the lane direction are connected into a lane line, and the longest lane line is retained, as shown in Fig. 3. For example, if all segments lie on one straight line, that line is the lane line; if the segments lie on several straight lines, the longest one is taken as the final lane line.
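The selection rule can be sketched as follows. For simplicity the sketch keeps the longest screened segment per gap between adjacent lane space regions rather than first connecting collinear segments, and it assumes that segments have already been assigned to gaps by proximity (segments_by_gap); both simplifications are assumptions for illustration.

# For each gap between adjacent lane space regions, keep the longest surviving segment,
# or fall back to the centre line of the gap when no segment passed the screening.
import numpy as np

def pick_lane_lines(gap_centrelines, segments_by_gap):
    """gap_centrelines: [(x1, y1, x2, y2), ...] per adjacent-region gap;
    segments_by_gap: {gap_index: [(x1, y1, x2, y2), ...]}."""
    lane_lines = []
    for i, centre in enumerate(gap_centrelines):
        segs = segments_by_gap.get(i, [])
        if not segs:
            lane_lines.append(centre)                      # no marking found: centre line
        else:
            lengths = [np.hypot(x2 - x1, y2 - y1) for x1, y1, x2, y2 in segs]
            lane_lines.append(segs[int(np.argmax(lengths))])   # keep the longest line
    return lane_lines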
With the above method, the lane space regions are obtained by clustering and spatially dividing the vehicle tracks, the road space region is obtained from the lane space regions, the lane space regions are shielded within the road space region, and the lane division result is obtained through edge point recognition, so that time-consuming and labor-intensive manual labeling is not needed. The method extracts vehicle tracks with a neural network, obtains rough lane regions by track clustering, reduces the influence of road traffic and other environmental factors on lane line detection with a background subtraction method, reduces the influence of irrelevant factors on lane line edge detection with region shielding, obtains the positions of the lane lines with edge detection and the Hough transform, and finally adjusts the detected lane lines with the rough lane regions, thereby improving the accuracy of the lane segmentation result and giving the method practical engineering value in the technical field of traffic management control and lane recognition.
Fig. 4 is a schematic diagram of an embodiment of a lane dividing apparatus suitable for highway roadside monitoring according to the present disclosure, and the embodiment of fig. 4 is a virtual apparatus that may be loaded and executed by a server of a highway management control system, and includes a track acquisition module, a lane space region acquisition module, a road space region acquisition module, an edge point recognition module, and a lane dividing module.
The track acquisition module of the embodiment is configured to acquire the track of each vehicle from the traffic running video.
It should be noted that a YOLO-series convolutional neural network object detector is used to perform target detection, i.e. vehicle detection, on each frame of image; the positions of the vehicles in adjacent frames are compared, a target tracking algorithm (such as the DeepSort algorithm) is used to associate the detected targets (i.e. vehicles) across consecutive frames, and the results are integrated to obtain the track of each vehicle in the video.
The lane space region acquisition module of the embodiment is configured to cluster the tracks, and perform space division according to the clustering result to obtain lane space regions corresponding to various tracks.
It should be noted that a neural network may be used both for the clustering and for obtaining the lane space regions. The neural network is not only fast, but its accuracy can also be improved by periodic retraining, which ensures the accuracy of the clustering and of the lane space region extraction.
The road space region acquisition module of the embodiment is configured to acquire a road space region in a traffic running video according to the lane space region.
The road space region is calculated according to the vertex coordinates of the lane space region, and the road space region mainly refers to a region covering all lane space regions.
The edge point identification module of an embodiment is configured to mask a lane space region in a road space region and to perform edge point identification on the remaining region.
It should be noted that, by shielding the lane space region in the road space region, the remaining region is the lane line detection potential region, and the lane line profile can be further obtained by the edge points.
The lane dividing module of the embodiment is configured to obtain a lane dividing result according to the edge point identification result.
The identified edge points are converted into line segments, the line segments are screened according to the characteristics of the lane lines, the line segments after the screening are connected to obtain the lane lines, and then lane division can be further carried out.
According to the system, the lane space region is obtained by clustering and space division of the vehicle tracks, the road space region is obtained according to the lane space region, the lane space region is shielded in the road space region, the lane division result is obtained through edge point identification, and time and labor-consuming manual labeling is not needed.
Based on the same technical solution, the present disclosure also relates to a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a lane splitting method suitable for highway roadside monitoring.
Based on the same technical solution, the present disclosure also relates to a computer device comprising one or more processors, and one or more memories, one or more programs stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing a lane-dividing method suitable for highway roadside monitoring.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely illustrative of the present invention and is not to be construed as limiting it; all modifications, equivalents and improvements made to the present invention are intended to be included within the scope of the present invention as defined by the appended claims.

Claims (8)

1. The lane dividing method suitable for the highway roadside monitoring is characterized by comprising the following steps of:
Acquiring the track of each vehicle from the traffic operation video;
clustering the tracks, and performing space division according to the clustering result to obtain lane space regions corresponding to various tracks;
Obtaining a road space region in the traffic operation video according to the lane space regions; wherein the lane space regions and the road space region are both quadrilateral regions, and obtaining the road space region comprises calculating corner coordinates of the road space region according to corner coordinates of the lane space regions by the following formulas:
x1_area = max(min(x1_1, x1_2, …, x1_n) - w/m, 0);
y1_area = max(min(y1_1, y1_2, …, y1_n) - h/m, 0);
x2_area = min(max(x2_1, x2_2, …, x2_n) + w/m, w);
y2_area = max(min(y2_1, y2_2, …, y2_n) - h/m, 0);
x3_area = max(min(x3_1, x3_2, …, x3_n) - w/m, 0);
y3_area = min(max(y3_1, y3_2, …, y3_n) + h/m, h);
x4_area = min(max(x4_1, x4_2, …, x4_n) + w/m, w);
y4_area = min(max(y4_1, y4_2, …, y4_n) + h/m, h);
wherein x1_area, y1_area, x2_area, y2_area, x3_area, y3_area, x4_area, y4_area are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the road space region; x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the i-th lane space region, with 1 ≤ i ≤ n; w is the width of a traffic operation video frame in pixels; h is the height of a traffic operation video frame in pixels; m is an expansion threshold; and n is the number of lane space regions;
shielding the lane space regions in the road space region, and identifying edge points in the remaining region;
and obtaining a lane dividing result according to the edge point recognition result.
2. The lane dividing method for monitoring a highway roadside according to claim 1, wherein clustering the tracks and performing space division according to a clustering result to obtain lane space regions corresponding to each type of track comprises:
dividing the track of each vehicle into a plurality of sub-tracks;
And clustering all the sub-tracks, and obtaining lane space areas corresponding to all the tracks according to the clustering result and the neural network.
3. The lane-dividing method for highway roadside monitoring according to claim 2, further comprising converting the vehicle track into a vehicle flow vector sequence before performing the sub-track division; wherein the vehicle flow vector reflects the position of the vehicle at time t and the displacement from time t-1 to time t.
4. The lane-dividing method for highway roadside monitoring according to claim 1, wherein the obtaining of the lane-dividing result based on the edge point recognition result comprises:
Converting the identified edge points into line segments, and screening the line segments according to lane line characteristics;
Determining lane lines of the road space area according to the screening result;
and dividing the road space region according to the lane lines to obtain lane dividing results.
5. The lane-dividing method for highway roadside monitoring according to claim 4, wherein determining lane lines of the road space region according to the screening result comprises:
if line segments conforming to the characteristics of the lane lines are not screened out between the adjacent lane space areas, taking the central line between the adjacent lane space areas as the lane line;
If the line segments meeting the characteristics of the lane lines are screened out between the adjacent lane space areas, connecting the line segments on the same lane direction straight line into the lane lines, and reserving the longest lane line.
6. Lane dividing device suitable for highway roadside control, its characterized in that includes:
the track acquisition module acquires the track of each vehicle from the traffic operation video;
The lane space region acquisition module clusters the tracks, and performs space division according to the clustering result to obtain lane space regions corresponding to various tracks;
The road space region acquisition module is used for acquiring a road space region in the traffic operation video according to the lane space regions; wherein the lane space regions and the road space region are both quadrilateral regions, and acquiring the road space region comprises calculating corner coordinates of the road space region according to corner coordinates of the lane space regions by the following formulas:
x1_area = max(min(x1_1, x1_2, …, x1_n) - w/m, 0);
y1_area = max(min(y1_1, y1_2, …, y1_n) - h/m, 0);
x2_area = min(max(x2_1, x2_2, …, x2_n) + w/m, w);
y2_area = max(min(y2_1, y2_2, …, y2_n) - h/m, 0);
x3_area = max(min(x3_1, x3_2, …, x3_n) - w/m, 0);
y3_area = min(max(y3_1, y3_2, …, y3_n) + h/m, h);
x4_area = min(max(x4_1, x4_2, …, x4_n) + w/m, w);
y4_area = min(max(y4_1, y4_2, …, y4_n) + h/m, h);
wherein x1_area, y1_area, x2_area, y2_area, x3_area, y3_area, x4_area, y4_area are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the road space region; x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i are respectively the upper-left abscissa, upper-left ordinate, upper-right abscissa, upper-right ordinate, lower-left abscissa, lower-left ordinate, lower-right abscissa and lower-right ordinate of the i-th lane space region, with 1 ≤ i ≤ n; w is the width of a traffic operation video frame in pixels; h is the height of a traffic operation video frame in pixels; m is an expansion threshold; and n is the number of lane space regions;
The edge point identification module is used for shielding the lane space regions in the road space region and carrying out edge point identification on the remaining region;
and the lane dividing module is used for obtaining a lane dividing result according to the edge point identification result.
7. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the method of any of claims 1-5.
8. A computer device, comprising:
One or more processors, and one or more memories, one or more programs stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-5.
CN202410404743.9A 2024-04-07 2024-04-07 Lane dividing method and related device suitable for highway roadside monitoring Active CN118015567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410404743.9A CN118015567B (en) 2024-04-07 2024-04-07 Lane dividing method and related device suitable for highway roadside monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410404743.9A CN118015567B (en) 2024-04-07 2024-04-07 Lane dividing method and related device suitable for highway roadside monitoring

Publications (2)

Publication Number Publication Date
CN118015567A CN118015567A (en) 2024-05-10
CN118015567B (en) 2024-06-11

Family

ID=90954839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410404743.9A Active CN118015567B (en) 2024-04-07 2024-04-07 Lane dividing method and related device suitable for highway roadside monitoring

Country Status (1)

Country Link
CN (1) CN118015567B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114379555A (en) * 2020-10-22 2022-04-22 奥迪股份公司 Vehicle lane change control method, device, equipment and storage medium
CN116503818A (en) * 2023-04-27 2023-07-28 内蒙古工业大学 Multi-lane vehicle speed detection method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4020428A4 (en) * 2019-08-28 2022-10-12 Huawei Technologies Co., Ltd. Method and apparatus for recognizing lane, and computing device


Also Published As

Publication number Publication date
CN118015567A (en) 2024-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant