CN109949578B - Vehicle line pressing violation automatic auditing method based on deep learning - Google Patents

Vehicle line pressing violation automatic auditing method based on deep learning

Info

Publication number
CN109949578B
CN109949578B (application number CN201811654496.9A)
Authority
CN
China
Prior art keywords
line
target vehicle
vehicle
contour
line segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811654496.9A
Other languages
Chinese (zh)
Other versions
CN109949578A (en)
Inventor
Zhou Kangming (周康明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN201811654496.9A priority Critical patent/CN109949578B/en
Publication of CN109949578A publication Critical patent/CN109949578A/en
Application granted granted Critical
Publication of CN109949578B publication Critical patent/CN109949578B/en

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic vehicle line-pressing violation auditing method based on deep learning, which comprises the following steps: acquiring snapshot pictures from a camera, and cutting and ordering the pictures; acquiring the license plate number of the target vehicle; detecting the target vehicle in each sequencing graph with a deep-learning-based target vehicle detection module to obtain a detection frame of the target vehicle; performing scene segmentation on the sequencing graphs with a deep-learning-based scene segmentation module to obtain the segmented solid-line pixels; on each sequencing graph, using a line-pressing violation judgment module to calculate whether the straight line fitted to a solid line intersects the straight line on which the lower edge of the target vehicle detection frame lies; and judging, according to the position of the intersection point, whether the target vehicle in the sequencing graph has committed a line-pressing violation. The method is suitable for auditing violations in pictures captured by traffic cameras in real scenes.

Description

Vehicle line pressing violation automatic auditing method based on deep learning
Technical Field
The invention relates to the technical field of artificial-intelligence-based judgment of traffic violations, and in particular to a method for auditing vehicle line-pressing violations.
Background
With the continuous development of the social economy and the continuous improvement of living standards, traffic administration faces an ever-increasing demand for automatic auditing of traffic violations. Traditional violation auditing methods rely mainly on manual identification, which has high labor cost and low efficiency; long periods of repetitive verification easily lead to fatigue and negligence, which degrade auditing accuracy.
How to audit traffic violations accurately and quickly, while avoiding the high cost, fatigue and negligence of manual identification, is a technical problem that urgently needs to be solved.
Disclosure of Invention
The purpose of the invention is to provide an automatic vehicle line-pressing violation auditing method based on deep learning that audits line-pressing violations automatically, so as to meet the efficiency and accuracy requirements of current traffic violation auditing work.
The technical scheme adopted by the invention is as follows:
An automatic vehicle line-pressing violation auditing method based on deep learning, comprising the following steps:
s1, reading the violation data set information table by row; each row of data information comprises picture address information, equipment number information and license plate number information; reading in all-in-one pictures according to the picture address information, cutting the pictures and obtaining a group of sequencing graphs according to the sequence of the cutting positions;
s2, respectively detecting the target vehicle of each image of the group of sequencing graphs by adopting a target vehicle detection module based on deep learning to obtain a detection frame of the target vehicle;
s3, performing scene segmentation on each image of the group of sequencing graphs by adopting a deep-learning-based scene segmentation module algorithm to obtain the segmented lane line contours, stop line contours, guide line contours and the contour of the target vehicle;
s4, respectively taking the geometric union of the lane line contours, the stop line contours and the guide line contours across all images of the group of sequencing graphs;
s5, for the lane line contours, stop line contours and guide line contours respectively, computing their circumscribed rectangles and filtering out contours whose circumscribed rectangles are too small;
s6, respectively calculating a fitting line segment for each lane line contour and each stop line contour; if the fitting line segments of two lane line contours satisfy the coincidence judgment condition, the two contours are considered to belong to the same lane line, the two contours are merged, and a line segment is then fitted to the merged contour; if the fitted straight lines of two stop line contours coincide, the two contours are considered to belong to the same stop line, the two contours are merged, and a line segment is then fitted to the merged contour;
s7, judging whether each lane line is in contact with any guide line through the lane line and guide line position relation algorithm; if they are in contact, the lane line is judged to be a mis-segmented lane line and is deleted;
s8, if the number of lane line fitting line segments is greater than or equal to 2, calculating the intersection points between every two lane lines and solving the vanishing point position from these intersection points through the vanishing point calculation algorithm; then calculating the perpendicular distance from the vanishing point to each lane line fitting line segment, and if the distance is greater than 100 pixels, judging that lane line to be mis-segmented and deleting it;
s9, keeping only the longest stop line fitting line segment, and extending the lane lines to the stop line;
s10, on each sequencing graph, calculating a fitting line segment of the chassis bottom contour from the target vehicle contour through the vehicle chassis bottom contour fitting algorithm, and calculating the gap between each of the left and right end points of this fitting line segment and the target vehicle detection frame; the side with the larger gap is considered to contain the visible chassis side contour, and the contour points of the target vehicle whose horizontal coordinates lie within that gap and whose vertical coordinates lie within the lower half of the target vehicle detection frame are selected and fitted into a line segment, which is the visible chassis side contour fitting line segment of the target vehicle; the end point on the side with the smaller gap is taken as the lower end point of the invisible chassis side contour fitting line segment;
s11, calculating the intersection point of the extension line of the visible chassis side contour fitting line segment with the horizontal line passing through the lane line vanishing point; this intersection point is the vanishing point of the target vehicle side contour; connecting the target vehicle side contour vanishing point with the lower end point of the invisible chassis side contour fitting line segment gives the straight line on which the invisible chassis side contour fitting line segment lies; drawing a line parallel to the chassis bottom contour fitting line segment through the upper end point of the visible chassis side contour fitting line segment, the intersection of this parallel line with the straight line of the invisible chassis side contour is the upper end point of the invisible chassis side contour fitting line segment; the segment connecting the upper and lower end points of the invisible chassis side contour is the invisible chassis side contour fitting line segment, and the segment connecting the upper end point of the visible chassis side contour fitting line segment with the upper end point of the invisible chassis side contour fitting line segment is the chassis upper contour fitting line segment of the target vehicle; this yields the target vehicle chassis contour quadrilateral formed by the chassis bottom contour fitting line segment, the visible chassis side contour fitting line segment, the chassis upper contour fitting line segment and the invisible chassis side contour fitting line segment (a geometric sketch of this construction follows the list);
s12, on each sequencing graph, using the line-pressing violation judgment module algorithm to calculate whether the target vehicle chassis contour quadrilateral intersects any lane line fitting line segment; if an intersection exists, the target vehicle on that sequencing graph is judged to have committed a line-pressing violation; otherwise no violation is judged;
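To make the geometric construction of steps s10-s12 concrete, the following Python sketch builds the chassis contour quadrilateral from the chassis bottom fit, the visible side fit and the lane-line vanishing point using homogeneous line intersections. It is a minimal sketch under assumed inputs; the function names, the point parameterization and the NumPy-based intersection routine are illustrative and not taken from the patent.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous coefficients (a, b, c) of the line ax + by + c = 0 through points p and q."""
    (x1, y1), (x2, y2) = p, q
    return np.array([y1 - y2, x2 - x1, x1 * y2 - x2 * y1], dtype=float)

def intersect(l1, l2):
    """Intersection of two homogeneous lines; None if they are (nearly) parallel."""
    p = np.cross(l1, l2)
    return None if abs(p[2]) < 1e-9 else p[:2] / p[2]

def chassis_quadrilateral(bottom_near, bottom_far, side_top, lane_vp_y):
    """
    Build the chassis contour quadrilateral of steps s10-s11.
    bottom_near: lower end point of the chassis bottom fit on the visible side.
    bottom_far:  end point of the bottom fit on the invisible (small-gap) side.
    side_top:    upper end point of the visible chassis side contour fit.
    lane_vp_y:   y coordinate of the horizontal line through the lane-line vanishing point.
    """
    bottom_near, bottom_far, side_top = map(np.asarray, (bottom_near, bottom_far, side_top))

    # Vanishing point of the vehicle side: extend the visible side fit to the horizon line.
    visible_side = line_through(bottom_near, side_top)
    horizon = np.array([0.0, 1.0, -float(lane_vp_y)])          # the line y = lane_vp_y
    side_vp = intersect(visible_side, horizon)
    if side_vp is None:                                        # degenerate: side parallel to horizon
        return None

    # Invisible chassis side lies on the line joining the side vanishing point and bottom_far.
    invisible_side = line_through(side_vp, bottom_far)

    # Chassis upper edge: parallel to the bottom fit, through the visible side's upper end point.
    bottom = line_through(bottom_near, bottom_far)
    upper = np.array([bottom[0], bottom[1],
                      -(bottom[0] * side_top[0] + bottom[1] * side_top[1])])

    far_top = intersect(upper, invisible_side)                 # upper end point of the invisible side
    if far_top is None:
        return None
    # Vertices in order: bottom edge, invisible side, upper edge, visible side.
    return [tuple(bottom_near), tuple(bottom_far), tuple(far_top), tuple(side_top)]
```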
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the data of the violation data set information table come from the client, and the table may be a text file in txt or csv format; the all-in-one picture contains 1 to 4 sub-images, which capture the target vehicle at different driving positions, together with the lane lines, stop lines and guide lines, in the same scene;
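For orientation only, the snippet below sketches how such an information table and its all-in-one pictures might be read and cut into a group of ordered sub-images. The comma-separated format, the 2 × 2 cutting layout, the name load_ordered_images and the use of OpenCV are assumptions for illustration; the patent does not prescribe a cutting layout or a particular library.

```python
import csv
import cv2

def load_ordered_images(info_table_path, rows=2, cols=2):
    """
    Read the violation data set information table (one record per row: picture
    address, device number, license plate number) and cut each all-in-one
    picture into sub-images ordered by cutting position.
    """
    records = []
    with open(info_table_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue                       # skip blank or malformed rows
            picture_path, device_no, plate_no = row[:3]
            img = cv2.imread(picture_path)
            if img is None:
                continue                       # skip unreadable pictures
            h, w = img.shape[:2]
            sub_h, sub_w = h // rows, w // cols
            # Cut row by row, left to right, so the list is ordered by cutting position.
            ordered = [img[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
                       for r in range(rows) for c in range(cols)]
            records.append({"device": device_no, "plate": plate_no, "images": ordered})
    return records
```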
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the deep-learning-based target vehicle detection module comprises a vehicle detection unit, a license plate recognition unit and a vehicle ReID unit, and the detection steps are as follows (a sketch of this cascade follows the steps):
S31, detecting all vehicles in the group of sequencing graphs with the vehicle detection unit of the deep-learning-based target vehicle detection module; performing license plate detection on the vehicles detected in the first and second sequencing graphs with the license plate detection unit of the deep-learning-based target vehicle detection module;
S32, recognizing the license plate numbers of the first sequencing graph with the license plate recognition unit of the deep-learning-based target vehicle detection module, and, if a recognized number matches the target vehicle's license plate number, determining the target vehicle detection frame of the first sequencing graph;
S33, if the target vehicle detection frame of the first sequencing graph exists, performing license plate recognition on the second sequencing graph; if no recognized number matches the target vehicle's license plate number, performing vehicle re-identification on the second sequencing graph with the deep-learning-based vehicle ReID unit to determine the target vehicle detection frame;
S34, if a third sequencing graph exists, performing vehicle re-identification on it with the deep-learning-based vehicle ReID unit to determine the target vehicle detection frame; the detection frame of the target vehicle in each sequencing graph is determined according to the above steps;
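The plate-then-ReID cascade of steps S31-S34 can be sketched as follows. The callables detect_vehicles, recognize_plate and reid_match stand in for the deep-learning units described above; they are placeholders, not real APIs.

```python
def locate_target_vehicle(ordered_images, target_plate,
                          detect_vehicles, recognize_plate, reid_match):
    """Plate matching on the first (and second) image, ReID fallback afterwards."""
    boxes = [detect_vehicles(img) for img in ordered_images]   # all vehicle boxes per image
    target = [None] * len(ordered_images)

    # First image: match by recognized license plate number.
    for box in boxes[0]:
        if recognize_plate(ordered_images[0], box) == target_plate:
            target[0] = box
            break

    # Second image: try plate matching first, fall back to ReID against image 1.
    if target[0] is not None and len(ordered_images) > 1:
        for box in boxes[1]:
            if recognize_plate(ordered_images[1], box) == target_plate:
                target[1] = box
                break
        if target[1] is None:
            target[1] = reid_match(ordered_images[0], target[0], ordered_images[1], boxes[1])

    # Third and later images: ReID only.
    for i in range(2, len(ordered_images)):
        if target[0] is not None:
            target[i] = reid_match(ordered_images[0], target[0], ordered_images[i], boxes[i])
    return target
```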
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the deep-learning-based scene segmentation module in S3 comprises the following steps (a contour-extraction sketch follows the steps):
S41, performing scene segmentation on each sequencing graph to obtain pixel segmentation maps of the lane lines, the stop lines, the guide lines and the target vehicle; the lane lines include white solid lines and yellow solid lines;
S42, obtaining the contours of the lane lines, the stop lines, the guide lines and the target vehicle respectively with a conventional contour detection method; each obtained contour is a set of point positions;
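A minimal sketch of steps S41-S42 (and the size filtering of step s5), using OpenCV's findContours as one conventional contour detector; the class-id convention and the min_box_area threshold are illustrative assumptions, not values from the patent.

```python
import cv2

def class_contours(seg_mask, class_id, min_box_area=400):
    """
    Extract the contours of one semantic class from a scene-segmentation label
    map and filter small ones by circumscribed-rectangle size.
    Requires OpenCV >= 4 (findContours returns two values).
    """
    binary = (seg_mask == class_id).astype("uint8") * 255
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)          # circumscribed rectangle
        if w * h >= min_box_area:
            kept.append(cnt)                        # each contour is a set of point positions
    return kept
```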
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the coincidence judgment condition is (a sketch of this check follows the list):
S51, the included angle between the two line segments is less than 15 degrees;
S52, the average perpendicular distance from the two end points of one line segment to the other line segment is less than 50 pixels;
S53, the average perpendicular distance from the two end points of the other line segment to the first line segment is less than 50 pixels;
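A minimal sketch of the coincidence judgment condition S51-S53, assuming each fitting line segment is given by its two end points; the helper names and the point-to-line distance formulation are illustrative.

```python
import math
import numpy as np

def segments_coincide(seg_a, seg_b, max_angle_deg=15.0, max_mean_dist=50.0):
    """Near-parallel segments whose end points lie close to each other's line coincide."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)

    def mean_point_to_line(points, seg):
        (x1, y1), (x2, y2) = seg
        a, b, c = y1 - y2, x2 - x1, x1 * y2 - x2 * y1     # supporting line ax + by + c = 0
        norm = math.hypot(a, b) or 1e-12
        return np.mean([abs(a * x + b * y + c) / norm for x, y in points])

    diff = abs(angle(seg_a) - angle(seg_b)) % math.pi
    diff = min(diff, math.pi - diff)                       # undirected angle between segments
    return (math.degrees(diff) < max_angle_deg                      # S51
            and mean_point_to_line(seg_a, seg_b) < max_mean_dist    # S52
            and mean_point_to_line(seg_b, seg_a) < max_mean_dist)   # S53
```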
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the contour merging takes the union of the two contour point sets and merges them into a single contour point set;
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the vanishing point is the vanishing point in the perspective-projection sense, and the calculation algorithm steps are as follows (a sketch follows the list):
S71, calculating the intersection point of the extended fitting line segments of every pair of lane lines, and taking the mean of the horizontal coordinates and the mean of the vertical coordinates of these intersection points as the coordinates of the center point;
S72, calculating the distance from each intersection point to the center point and recording the coordinates of the point with the largest distance; if the largest distance is greater than 150 pixels, deleting that point and repeating steps S71 and S72; if the largest distance is less than or equal to 150 pixels, the center point is the vanishing point and the calculation ends;
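A sketch of the vanishing-point estimation of S71-S72, under the assumption that each lane line is represented by the two end points of its fitting line segment; the homogeneous-coordinate intersection and the NumPy types are illustrative choices.

```python
import itertools
import numpy as np

def lane_vanishing_point(lane_segments, max_dist=150.0):
    """Average pairwise lane intersections, iteratively dropping the farthest outlier."""
    def to_line(seg):
        (x1, y1), (x2, y2) = seg
        return np.array([y1 - y2, x2 - x1, x1 * y2 - x2 * y1], dtype=float)

    points = []
    for sa, sb in itertools.combinations(lane_segments, 2):
        p = np.cross(to_line(sa), to_line(sb))
        if abs(p[2]) > 1e-9:                       # skip parallel pairs
            points.append(p[:2] / p[2])
    points = np.array(points)

    while len(points) > 0:
        center = points.mean(axis=0)               # S71: mean of the intersection points
        dists = np.linalg.norm(points - center, axis=1)
        worst = int(dists.argmax())
        if dists[worst] <= max_dist:               # S72: all intersections close enough -> done
            return center
        points = np.delete(points, worst, axis=0)  # drop the farthest outlier and repeat
    return None
```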
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the line-pressing violation judgment module algorithm steps are as follows (a sketch follows the list):
S81, on each sequencing graph, determining whether each lane line fitting line segment intersects any of the four sides of the target vehicle chassis contour quadrilateral; if an intersection exists, a line-pressing violation is judged;
S82, on each sequencing graph, determining by the conventional area method whether the upper and lower end points of each lane line fitting line segment lie inside the target vehicle chassis contour quadrilateral; if an end point lies inside the quadrilateral, a line-pressing violation is judged; otherwise no violation is judged;
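The intersection and area tests of S81-S82 can be sketched as below, assuming the chassis contour quadrilateral is convex with its vertices given in order; the strict orientation test (proper crossings only) and the triangle-area tolerance are illustrative simplifications.

```python
def segment_intersects(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (orientation test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def point_in_quad(pt, quad):
    """Area method of S82: the four triangles formed with the sides sum to the quad's area."""
    def tri_area(a, b, c):
        return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    quad_area = tri_area(*quad[:3]) + tri_area(quad[0], quad[2], quad[3])
    covered = sum(tri_area(pt, quad[i], quad[(i + 1) % 4]) for i in range(4))
    return abs(covered - quad_area) < 1e-6 * max(quad_area, 1.0)

def presses_line(chassis_quad, lane_segment):
    """S81/S82: violation if the lane fit crosses a quad edge or has an end point inside it."""
    p1, p2 = lane_segment
    edges = [(chassis_quad[i], chassis_quad[(i + 1) % 4]) for i in range(4)]
    if any(segment_intersects(p1, p2, a, b) for a, b in edges):
        return True
    return point_in_quad(p1, chassis_quad) or point_in_quad(p2, chassis_quad)
```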
Further, in the automatic vehicle line-pressing violation auditing method based on deep learning, the lane line and guide line position relation algorithm steps are as follows (a sketch follows the list):
S91, extending each lane line fitting line segment by 50 pixels at both end points, shifting the extended line segment 50 pixels to the left and 50 pixels to the right, and connecting the upper and lower end points of the left-shifted segment with the upper and lower end points of the right-shifted segment to form a parallelogram;
S92, checking whether any contour point of each guide line lies inside the parallelogram; if a guide line has contour points inside the parallelogram, the lane line is judged to be in contact with that guide line; otherwise they are judged not to be in contact.
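A sketch of the parallelogram test of S91-S92, assuming the lane line fit is given by two end points and the guide-line contour by an OpenCV-style point array; names and tolerances are illustrative.

```python
import numpy as np

def lane_touches_guide(lane_segment, guide_contour, extend=50.0, shift=50.0):
    """Extend the lane fit, shift it left and right into a parallelogram, test guide points."""
    p1 = np.asarray(lane_segment[0], dtype=float)
    p2 = np.asarray(lane_segment[1], dtype=float)
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)
    a, b = p1 - extend * direction, p2 + extend * direction    # extended end points
    normal = np.array([-direction[1], direction[0]])           # unit normal: left/right shift
    quad = [a - shift * normal, b - shift * normal, b + shift * normal, a + shift * normal]

    def inside(pt):
        # Convex-polygon test: the point lies on the same side of all four edges.
        pt = np.asarray(pt, dtype=float)
        signs = []
        for i in range(4):
            e0, e1 = quad[i], quad[(i + 1) % 4]
            signs.append((e1[0] - e0[0]) * (pt[1] - e0[1]) - (e1[1] - e0[1]) * (pt[0] - e0[0]))
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

    points = np.asarray(guide_contour, dtype=float).reshape(-1, 2)  # OpenCV contours are (N, 1, 2)
    return any(inside(pt) for pt in points)
```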
Further, the vehicle ReID unit adopts a GoogLeNet Inception-V2 network structure to extract vehicle features and track the vehicle position, with the following steps (a matching sketch follows the steps):
S101, when training the feature extraction module, a classification layer is attached after the last 256-dimensional fully connected layer of the network; the classification layer classifies different vehicles, each class containing the same vehicle captured at different frame times, and data augmentation is applied to all collected vehicle images. When the training loss drops to its minimum, the classification layer is cut off and the last 256-dimensional fully connected layer is taken as output; the 256-dimensional feature obtained in this way represents the vehicle well.
S102, the target vehicle located in the first sequencing graph is input into the GoogLeNet Inception-V2 network; at the input layer, the vehicle crop is padded into an image of equal length and width, with the extra area filled with zero pixels; the preprocessed image is then up-sampled or down-sampled and resized to a uniform 200 × 200 resolution, and finally a 256-dimensional feature is obtained;
S103, all candidate vehicles in the second sequencing graph are input into the GoogLeNet Inception-V2 network, and a 256-dimensional feature is obtained for each in the same way as S102;
S104, all candidate vehicles in the third sequencing graph are input into the GoogLeNet Inception-V2 network, and a 256-dimensional feature is obtained for each in the same way as S102;
S105, the cosine similarity between the 256-dimensional feature from S102 and each of the 256-dimensional features from S103 is computed; because the 256-dimensional features extracted by the feature extraction module represent the vehicles well, cosine similarity reflects the difference between two vehicles well; the 256-dimensional feature corresponding to the highest score is taken out;
S106, the cosine similarity between the highest-scoring 256-dimensional feature from S105 and each of the 256-dimensional features from S104 is computed, and the 256-dimensional feature corresponding to the highest score is taken out;
S107, since multiple vehicles have already been detected in the second and third sequencing graphs by the detection algorithm, the vehicle with the highest similarity score is found by the above procedure, and the index of the vehicle corresponding to the highest score is taken as the tracked target vehicle.
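A matching sketch for S102-S107: preprocess pads and resizes a vehicle crop as in S102, and match_by_reid ranks candidates by cosine similarity. The extract_feature callable stands in for the trained GoogLeNet Inception-V2 backbone and is an assumed placeholder; the padding position and other details are illustrative.

```python
import numpy as np
import cv2

def preprocess(vehicle_crop, size=200):
    """Pad the crop into a square with zero pixels and resize to size x size."""
    h, w = vehicle_crop.shape[:2]
    side = max(h, w)
    canvas = np.zeros((side, side, 3), dtype=vehicle_crop.dtype)
    canvas[:h, :w] = vehicle_crop
    return cv2.resize(canvas, (size, size))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_by_reid(extract_feature, target_crop, candidate_crops):
    """Return the index and score of the candidate most similar to the target vehicle."""
    if not candidate_crops:
        return None, 0.0
    target_feat = extract_feature(preprocess(target_crop))
    scores = [cosine_similarity(target_feat, extract_feature(preprocess(c)))
              for c in candidate_crops]
    best = int(np.argmax(scores))            # index of the tracked vehicle
    return best, scores[best]
```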
The invention has the following beneficial effects: it is mainly applied to line-pressing violation auditing in traffic violation work and realizes fully automatic detection, recognition and judgment in the auditing process. A high-precision vehicle detection frame is obtained directly through deep learning, and the lower half and lower edge of the target vehicle detection frame are used as the main judgment basis, which better matches the three-dimensional perspective characteristics of the camera-captured scene.
Drawings
FIG. 1 is a flow chart of the vehicle line-pressing violation detection of the present invention.
FIG. 2 is a schematic diagram of the overall structure of the present invention.
FIG. 3 is a schematic structural diagram of the target vehicle detection module of the present invention.
FIG. 4 is a schematic structural diagram of the scene segmentation module of the present invention.
FIG. 5 is a schematic structural diagram of the line-pressing violation judgment module of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The invention is mainly based on a target vehicle detection module, a scene segmentation module and a line pressing violation judgment module.
As shown in FIG. 3, the target vehicle detection module includes a vehicle detection unit, a license plate recognition unit, a vehicle ReID unit and a determination unit.
First, the vehicle detection unit is applied to each sequencing graph to obtain all vehicle detection frames. The vehicles detected in the first sequencing graph are passed to the license plate detection unit to obtain license plate detection frames; the detection results are fed into the license plate recognition unit to recognize the plate numbers, and each recognized number is compared with the target vehicle's license plate number. Once the target vehicle is matched in the first sequencing graph, the vehicles detected in the second sequencing graph are likewise passed through license plate detection and recognition and compared with the target license plate number; if no plate in the second sequencing graph matches, its detected vehicles are passed into the vehicle ReID unit to be matched against the target vehicle detected in the first sequencing graph. If a third sequencing graph exists, its detected vehicles are passed into the vehicle ReID unit and matched against the target vehicle of the first sequencing graph. Finally, the detection frame of the target vehicle is determined in each sequencing graph, locking the target vehicle at each position; this effectively avoids false detections caused by changes in vehicle position and improves the accuracy of target vehicle localization.
As shown in FIG. 4, the scene segmentation module includes a scene segmentation unit and a solid line fusion unit. First, the scene segmentation unit is applied to each sequencing graph to obtain segmentation feature maps of the lane lines, guide lines, stop lines and target vehicle, and the contours of the lane lines, guide lines, stop lines and target vehicle are computed for each image in the group of sequencing graphs. The lane line, guide line and stop line contours of all sequencing graphs are then passed into the solid line fusion unit, which takes the union of these contours across the sequencing graphs, so that solid lines broken by occluding vehicles, pedestrians and the like are fused into relatively complete and continuous solid lines.
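A minimal sketch of the solid line fusion unit, assuming the per-graph segmentation masks of one class are aligned (same camera and scene): the masks are OR-ed into a union and the fused contours are re-extracted with OpenCV. The function name is illustrative.

```python
import numpy as np
import cv2

def fuse_solid_line_contours(masks):
    """
    Take the pixel-wise union of one class's segmentation masks across all
    sequencing graphs, so that solid lines broken by occlusion in a single
    graph become continuous, then re-extract the fused contours.
    """
    union = np.zeros_like(masks[0], dtype=np.uint8)
    for m in masks:
        union = cv2.bitwise_or(union, (m > 0).astype(np.uint8))
    contours, _ = cv2.findContours(union * 255, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```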
As shown in FIG. 5, the line-pressing violation judgment module includes an intersection calculation unit and a judgment unit. First, the target vehicle chassis contour quadrilateral and the lane line fitting line segments obtained for each sequencing graph are passed into the intersection calculation unit, which determines whether each lane line fitting line segment intersects any of the four sides of the target vehicle chassis contour quadrilateral; if an intersection exists, a line-pressing violation is judged. If no intersection exists, the judgment unit determines by the conventional area method whether the upper and lower end points of each lane line fitting line segment lie inside the target vehicle chassis contour quadrilateral; if an end point lies inside the quadrilateral, a line-pressing violation is judged, and otherwise no violation is judged.
The specific implementation process of the invention is shown in FIG. 1. The automatic vehicle line-pressing violation auditing method based on deep learning comprises the following steps:
s1, acquiring the snapshot picture of the camera, and cutting and sequencing the picture; acquiring a license plate number of a target vehicle;
s2, respectively detecting the target vehicles of the sequencing graphs by adopting a target vehicle detection module based on deep learning to obtain a detection frame of the target vehicles;
s3, performing scene segmentation on the sequencing graph by adopting a scene segmentation module based on deep learning to obtain segmented solid line pixels;
s4, on each sequencing graph, using the line-pressing violation judgment module to calculate whether the straight line fitted to a solid line intersects the straight line on which the lower edge of the target vehicle detection frame lies, and judging whether the target vehicle on the sequencing graph has committed a line-pressing violation according to the position of the intersection point and the position of the lower end point of the solid line;
the basic principles and the main features of the solution and the advantages of the solution have been shown and described above. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, and that various changes and modifications may be made without departing from the spirit and scope of the present invention and these changes and modifications are within the scope of the present invention.

Claims (10)

1. A vehicle line-pressing violation automatic auditing method based on deep learning comprises the following steps:
s1, reading a violation data set information table; each row of data information comprises picture address information, equipment number information and license plate number information; reading in all-in-one pictures according to the picture address information, cutting the pictures and obtaining a group of sequencing graphs according to the sequence of the cutting positions;
s2, respectively detecting the target vehicle of each image of the group of sequencing graphs by adopting a target vehicle detection module based on deep learning to obtain a detection frame of the target vehicle;
s3, performing scene segmentation on each image of the sequencing graph group by adopting a scene segmentation module algorithm based on deep learning to obtain a segmented lane line outline, a stop line outline, a guide line outline and an outline of a target vehicle;
s4, respectively taking the geometric union of the lane line contours, the stop line contours and the guide line contours across all images of the group of sequencing graphs;
s5, for the lane line contours, stop line contours and guide line contours respectively, computing their circumscribed rectangles and filtering out contours whose circumscribed rectangles are too small;
s6, respectively calculating a fitting line segment for each lane line contour and each stop line contour; if the fitting line segments of two lane line contours satisfy the coincidence judgment condition, the two contours are considered to belong to the same lane line, the two contours are merged, and a line segment is then fitted to the merged contour; if the fitted straight lines of two stop line contours coincide, the two contours are considered to belong to the same stop line, the two contours are merged, and a line segment is then fitted to the merged contour;
s7, judging whether each lane line is in contact with any guide line through the lane line and guide line position relation algorithm; if they are in contact, the lane line is judged to be a mis-segmented lane line and is deleted;
s8, if the number of lane line fitting line segments is greater than or equal to 2, calculating the intersection points between every two lane lines and solving the vanishing point position from these intersection points through the vanishing point calculation algorithm; calculating the perpendicular distance from the vanishing point to each lane line fitting line segment, and if the distance is greater than 100 pixels, judging that lane line to be mis-segmented and deleting it;
s9, keeping the stop line with the maximum length to fit a line segment, and extending the lane line to the stop line;
s10, on each sequencing graph, calculating a fitting line segment of the chassis bottom contour from the target vehicle contour through the vehicle chassis bottom contour fitting algorithm, and calculating the gap between each of the left and right end points of this fitting line segment and the target vehicle detection frame;
the side with the larger gap is considered to contain the visible chassis side contour, and the contour points of the target vehicle whose horizontal coordinates lie within that gap and whose vertical coordinates lie within the lower half of the target vehicle detection frame are selected and fitted into a line segment, which is the visible chassis side contour fitting line segment of the target vehicle;
the end point on the side with the smaller gap is taken as the lower end point of the invisible chassis side contour fitting line segment;
s11, calculating the intersection point of the extension line of the visible chassis side contour fitting line segment with the horizontal line passing through the lane line vanishing point, this intersection point being the vanishing point of the target vehicle side contour; connecting the target vehicle side contour vanishing point with the lower end point of the invisible chassis side contour fitting line segment gives the straight line on which the invisible chassis side contour fitting line segment lies; drawing a line parallel to the chassis bottom contour fitting line segment through the upper end point of the visible chassis side contour fitting line segment, the intersection of this parallel line with the straight line of the invisible chassis side contour being the upper end point of the invisible chassis side contour fitting line segment; the segment connecting the upper and lower end points of the invisible chassis side contour is the invisible chassis side contour fitting line segment, and the segment connecting the upper end point of the visible chassis side contour fitting line segment with the upper end point of the invisible chassis side contour fitting line segment is the chassis upper contour fitting line segment of the target vehicle, thereby obtaining the target vehicle chassis contour quadrilateral formed by the chassis bottom contour fitting line segment, the visible chassis side contour fitting line segment, the chassis upper contour fitting line segment and the invisible chassis side contour fitting line segment;
s12, on each sequencing graph, using the line-pressing violation judgment module algorithm to calculate whether the target vehicle chassis contour quadrilateral intersects any lane line fitting line segment; if an intersection exists, the target vehicle on the group of sequencing graphs is judged to have committed a line-pressing violation, and otherwise no line-pressing violation is judged.
2. The automatic vehicle line-pressing violation auditing method based on deep learning of claim 1, characterized in that the violation data set information table may be a text file in txt or csv format; the all-in-one picture contains 1 to 4 sub-images, which capture the target vehicle at different driving positions, together with the lane lines, stop lines and guide lines, in the same scene.
3. The method for automatically auditing vehicle line pressing violation based on deep learning of claim 1, wherein the target vehicle detection module based on deep learning comprises a vehicle detection unit, a license plate recognition unit and a vehicle ReID unit, and the detection steps are as follows:
s21, detecting all vehicles in the group of sequencing graphs with the vehicle detection unit of the deep-learning-based target vehicle detection module; performing license plate detection on the vehicles detected in the first and second sequencing graphs of the group with the license plate detection unit of the deep-learning-based target vehicle detection module;
s22, recognizing the license plate numbers of the first sequencing graph with the license plate recognition unit of the deep-learning-based target vehicle detection module, and, if a recognized number matches the target vehicle's license plate number, determining the target vehicle detection frame of the first sequencing graph;
s23, if the target vehicle detection frame of the first sequencing graph exists, performing license plate recognition on the second sequencing graph; if no recognized number matches the target vehicle's license plate number, performing vehicle re-identification on the second sequencing graph with the deep-learning-based vehicle ReID unit to determine the target vehicle detection frame;
s24, if a third sequencing graph exists, performing vehicle re-identification on it with the deep-learning-based vehicle ReID unit to determine the target vehicle detection frame; and determining the detection frame of the target vehicle in each sequencing graph according to the above steps.
4. The automatic vehicle line violation auditing method based on deep learning of claim 1, wherein the scene segmentation module based on deep learning in S3 comprises the following steps:
s31, performing scene segmentation on each sequencing graph to respectively obtain pixel segmentation maps of the lane lines, the stop lines, the guide lines and the target vehicle; the lane lines include white solid lines and yellow solid lines;
and S32, respectively obtaining the outlines of the lane line, the stop line, the guide line and the target vehicle by using a conventional outline detection method, wherein the obtained outlines are point position sets.
5. The automatic vehicle line violation auditing method based on deep learning of claim 1 characterized in that the coincidence judgment condition is:
s51, the included angle between the two line segments is less than 15 degrees;
s52, the average perpendicular distance from the two end points of one line segment to the other line segment is less than 50 pixels;
and S53, the average perpendicular distance from the two end points of the other line segment to the first line segment is less than 50 pixels.
6. The automatic vehicle line violation auditing method based on deep learning of claim 1 where the contour merging is a union of two contour point sets merged into one contour point set.
7. The automatic vehicle line violation auditing method based on deep learning of claim 1 characterized in that the vanishing point is the vanishing point in perspective projection meaning, and the calculating algorithm steps are as follows:
s71, calculating the intersection point of the extended fitting line segments of every pair of lane lines, and taking the mean of the horizontal coordinates and the mean of the vertical coordinates of these intersection points as the coordinates of the center point;
s72, calculating the distance from each intersection point to the center point and recording the coordinates of the point with the largest distance; if the largest distance is greater than 150 pixels, deleting that point and repeating steps S71 and S72; if the largest distance is less than or equal to 150 pixels, the center point is the vanishing point and the calculation ends.
8. The automatic vehicle line-pressing violation auditing method based on deep learning of claim 1, characterized in that the line-pressing violation judgment module algorithm steps are as follows:
s81, on each sequencing graph, determining whether each lane line fitting line segment intersects any of the four sides of the target vehicle chassis contour quadrilateral; if an intersection exists, a line-pressing violation is judged;
s82, on each sequencing graph, determining by the conventional area method whether the upper and lower end points of each lane line fitting line segment lie inside the target vehicle chassis contour quadrilateral; if an end point lies inside the quadrilateral, a line-pressing violation is judged, and otherwise no violation is judged.
9. The automatic vehicle line violation auditing method based on deep learning of claim 1 characterized in that the lane line and guide line position relation algorithm steps are as follows:
s91, extending each lane line fitting line segment by 50 pixels at both end points, shifting the extended line segment 50 pixels to the left and 50 pixels to the right, and connecting the upper and lower end points of the left-shifted segment with the upper and lower end points of the right-shifted segment to form a parallelogram;
and S92, checking whether any contour point of each guide line lies inside the parallelogram; if a guide line has contour points inside the parallelogram, judging that the lane line is in contact with that guide line, and otherwise judging that they are not in contact.
10. The automatic vehicle line-pressing violation auditing method based on deep learning of claim 3, characterized in that the vehicle ReID unit adopts a GoogLeNet Inception-V2 network structure to extract vehicle features and track the vehicle position, with the following steps:
s101, during training of the feature extraction module, a classification layer is attached after the last 256-dimensional fully connected layer of the network; the classification layer classifies different vehicles, each class containing the same vehicle captured at different frame times, and data augmentation is applied to all collected vehicle images; when the training loss drops to its minimum, the classification layer is cut off and the last 256-dimensional fully connected layer is taken as output, the 256-dimensional feature obtained in this way representing the vehicle well;
s102, the target vehicle located in the first sequencing graph is input into the GoogLeNet Inception-V2 network; at the input layer, the vehicle crop is padded into an image of equal length and width, with the extra area filled with zero pixels; the preprocessed image is then up-sampled or down-sampled and resized to a uniform 200 × 200 resolution, and finally a 256-dimensional feature is obtained;
s103, all vehicles to be matched in the second sequencing graph are input into the GoogLeNet Inception-V2 network, and a plurality of 256-dimensional features are obtained in the same way as S102;
s104, all vehicles to be matched in the third sequencing graph are input into the GoogLeNet Inception-V2 network, and a plurality of 256-dimensional features are obtained in the same way as S102;
s105, the cosine similarity between the 256-dimensional feature from S102 and each of the 256-dimensional features from S103 is computed; because the 256-dimensional features extracted by the feature extraction module represent the vehicles well, cosine similarity reflects the difference between two vehicles well, and the 256-dimensional feature corresponding to the highest score is taken out;
s106, the cosine similarity between the highest-scoring 256-dimensional feature from S105 and each of the 256-dimensional features from S104 is computed, and the 256-dimensional feature corresponding to the highest score is taken out;
s107, since multiple vehicles have already been detected in the second and third sequencing graphs by the detection algorithm, the vehicle with the highest similarity score is found by the above procedure, and the index of the vehicle corresponding to the highest score is taken as the tracked target vehicle.
CN201811654496.9A 2018-12-31 2018-12-31 Vehicle line pressing violation automatic auditing method based on deep learning Expired - Fee Related CN109949578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811654496.9A CN109949578B (en) 2018-12-31 2018-12-31 Vehicle line pressing violation automatic auditing method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811654496.9A CN109949578B (en) 2018-12-31 2018-12-31 Vehicle line pressing violation automatic auditing method based on deep learning

Publications (2)

Publication Number Publication Date
CN109949578A (en) 2019-06-28
CN109949578B (en) 2020-11-24

Family

ID=67007207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811654496.9A Expired - Fee Related CN109949578B (en) 2018-12-31 2018-12-31 Vehicle line pressing violation automatic auditing method based on deep learning

Country Status (1)

Country Link
CN (1) CN109949578B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490150B (en) * 2019-08-22 2022-02-11 浙江工业大学 Automatic illegal picture auditing system and method based on vehicle retrieval
CN110444044B (en) * 2019-08-27 2022-07-12 纵目科技(上海)股份有限公司 Vehicle pose detection system based on ultrasonic sensor, terminal and storage medium
CN110706261A (en) * 2019-10-22 2020-01-17 上海眼控科技股份有限公司 Vehicle violation detection method and device, computer equipment and storage medium
CN110765963A (en) * 2019-10-29 2020-02-07 上海眼控科技股份有限公司 Vehicle brake detection method, device, equipment and computer readable storage medium
CN110929613A (en) * 2019-11-14 2020-03-27 上海眼控科技股份有限公司 Image screening algorithm for intelligent traffic violation audit
CN111160086B (en) * 2019-11-21 2023-10-13 芜湖迈驰智行科技有限公司 Lane line identification method, device, equipment and storage medium
CN111126323A (en) * 2019-12-26 2020-05-08 广东星舆科技有限公司 Bayonet element recognition and analysis method and system serving for traffic violation detection
CN111292530A (en) * 2020-02-04 2020-06-16 浙江大华技术股份有限公司 Method, device, server and storage medium for processing violation pictures
CN111382704B (en) * 2020-03-10 2023-12-15 以萨技术股份有限公司 Vehicle line pressing violation judging method and device based on deep learning and storage medium
CN111401186B (en) * 2020-03-10 2024-05-28 北京精英智通科技股份有限公司 Vehicle line pressing detection system and method
CN111768427B (en) * 2020-05-07 2023-12-26 普联国际有限公司 Multi-moving-object tracking method, device and storage medium
CN111563463A (en) * 2020-05-11 2020-08-21 上海眼控科技股份有限公司 Method and device for identifying road lane lines, electronic equipment and storage medium
CN111833598B (en) * 2020-05-14 2022-07-05 山东科技大学 Automatic traffic incident monitoring method and system for unmanned aerial vehicle on highway
CN111968378A (en) * 2020-07-07 2020-11-20 浙江大华技术股份有限公司 Motor vehicle red light running snapshot method and device, computer equipment and storage medium
CN111882882B (en) * 2020-07-31 2021-06-25 浙江东鼎电子股份有限公司 Method for detecting cross-lane driving behavior of automobile in dynamic flat-plate scale weighing area
CN111814765B (en) * 2020-08-31 2021-05-28 蔻斯科技(上海)有限公司 Method, device and equipment for determining vehicle line pressing and storage medium
CN112101268B (en) * 2020-09-23 2022-07-29 浙江浩腾电子科技股份有限公司 Vehicle line pressing detection method based on geometric projection
CN112580457A (en) * 2020-12-09 2021-03-30 上海眼控科技股份有限公司 Vehicle video processing method and device, computer equipment and storage medium
CN112580516A (en) * 2020-12-21 2021-03-30 上海眼控科技股份有限公司 Road scene recognition method, device, equipment and storage medium
CN112785850A (en) * 2020-12-29 2021-05-11 上海眼控科技股份有限公司 Method and device for identifying vehicle lane change without lighting
CN112990087B (en) * 2021-04-08 2022-08-19 济南博观智能科技有限公司 Lane line detection method, device, equipment and readable storage medium
CN113353071B (en) * 2021-05-28 2023-12-19 云度新能源汽车有限公司 Narrow area intersection vehicle safety auxiliary method and system based on deep learning
CN113743316B (en) * 2021-09-07 2023-09-19 北京建筑大学 Vehicle plugging behavior identification method, system and device based on target detection
CN114187758B (en) * 2021-10-18 2022-09-02 中标慧安信息技术股份有限公司 Vehicle detection method and system based on intelligent road edge computing gateway
CN114170798B (en) * 2021-12-03 2023-02-17 智道网联科技(北京)有限公司 Message reminding system and method
CN114998452B (en) * 2022-08-03 2022-12-02 深圳安智杰科技有限公司 Vehicle-mounted camera online calibration method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4501467B2 (en) * 2004-03-05 2010-07-14 アイシン・エィ・ダブリュ株式会社 Navigation device and navigation method
JP6090146B2 (en) * 2013-12-16 2017-03-08 株式会社デンソー Lane departure control system
CN106297281B (en) * 2016-08-09 2018-10-09 北京奇虎科技有限公司 The method and apparatus of vehicle peccancy detection
CN106297314A (en) * 2016-11-03 2017-01-04 北京文安智能技术股份有限公司 A kind of drive in the wrong direction or the detection method of line ball vehicle behavior, device and a kind of ball machine
CN106874863B (en) * 2017-01-24 2020-02-07 南京大学 Vehicle illegal parking and reverse running detection method based on deep convolutional neural network
CN107358170B (en) * 2017-06-21 2021-01-19 华南理工大学 Vehicle violation line pressing identification method based on mobile machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Peng. Research on a Machine-Vision-Based Method for Judging Traffic Line Pressing. China Master's Theses Full-text Database, Information Science and Technology, 2018, No. 01. *

Also Published As

Publication number Publication date
CN109949578A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109949578B (en) Vehicle line pressing violation automatic auditing method based on deep learning
CN110148196B (en) Image processing method and device and related equipment
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN111179152B (en) Road identification recognition method and device, medium and terminal
CN106203398B (en) A kind of method, apparatus and equipment detecting lane boundary
US9467645B2 (en) System and method for recognizing parking space line markings for vehicle
CN107633516A (en) A kind of method and apparatus for identifying surface deformation class disease
CN115717894B (en) Vehicle high-precision positioning method based on GPS and common navigation map
CN110689724B (en) Automatic motor vehicle zebra crossing present pedestrian auditing method based on deep learning
CN108052904B (en) Method and device for acquiring lane line
CN112967283A (en) Target identification method, system, equipment and storage medium based on binocular camera
CN110667474A (en) General obstacle detection method and device and automatic driving system
CN111213154A (en) Lane line detection method, lane line detection equipment, mobile platform and storage medium
CN107918775B (en) Zebra crossing detection method and system for assisting safe driving of vehicle
Schreiber et al. Detecting symbols on road surface for mapping and localization using OCR
Samadzadegan et al. Automatic lane detection in image sequences for vision-based navigation purposes
JP4762026B2 (en) Road sign database construction device
CN113639685B (en) Displacement detection method, device, equipment and storage medium
CN108399360A (en) A kind of continuous type obstacle detection method, device and terminal
CN110675442A (en) Local stereo matching method and system combined with target identification technology
CN109635701B (en) Lane passing attribute acquisition method, lane passing attribute acquisition device and computer readable storage medium
Philipsen et al. Day and night-time drive analysis using stereo vision for naturalistic driving studies
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
Saleem et al. Effects of ground manifold modeling on the accuracy of stixel calculations
CN110135382B (en) Human body detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An automatic verification method of vehicle line pressing violation based on deep learning

Effective date of registration: 20220211

Granted publication date: 20201124

Pledgee: Shanghai Bianwei Network Technology Co.,Ltd.

Pledgor: SHANGHAI EYE CONTROL TECHNOLOGY Co.,Ltd.

Registration number: Y2022310000023

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201124
