CN115082701B - Multi-water-line cross identification positioning method based on double cameras - Google Patents


Publication number
CN115082701B
Authority
CN
China
Prior art keywords
waterline
image
target
straight line
edge
Prior art date
Legal status
Active
Application number
CN202210978997.2A
Other languages
Chinese (zh)
Other versions
CN115082701A (en)
Inventor
龙关旭
辛公锋
石磊
潘为刚
靳华磊
尚志强
秦石铭
周骁腾
王书新
马鹏飞
胡朋
王目树
潘立平
康超
Current Assignee
Innovation Research Institute Of Shandong Expressway Group Co ltd
Shandong Jiaotong University
Original Assignee
Innovation Research Institute Of Shandong Expressway Group Co ltd
Shandong Jiaotong University
Priority date
Filing date
Publication date
Application filed by Innovation Research Institute Of Shandong Expressway Group Co ltd, Shandong Jiaotong University filed Critical Innovation Research Institute Of Shandong Expressway Group Co ltd
Priority to CN202210978997.2A
Publication of CN115082701A
Application granted
Publication of CN115082701B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V20/56 — Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/20 — Image enhancement or restoration using local operators
    • G06T5/30 — Erosion or dilatation, e.g. thinning
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T7/155 — Segmentation; edge detection involving morphological operators
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/20024 — Filtering details (indexing scheme for image analysis)
    • G06T2207/20032 — Median filtering (indexing scheme for image analysis)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dual-camera method for identifying and positioning crossing waterlines, belonging to the technical field of highway traffic. It addresses the difficulty of distinguishing waterlines where they cross, and the deviation that arises because the waterline-discrimination position is far from the marking position. The scheme comprises the following steps. Step 1: a front camera collects a wide-view-angle waterline image of the highway ahead of the marking vehicle. Step 2: the waterline identification module extracts the waterline edge image from the wide-view-angle image. Step 3: when waterline edge images cross, the standard-waterline identification module determines the standard waterline and feeds the result back to the target-waterline identification module. Step 4: the rear camera collects a narrow-view-angle waterline image of the highway. Step 5: the waterline identification module extracts the waterline edge image from the narrow-view-angle image. Step 6: when waterline edge images cross, the target-waterline identification module determines the target waterline. Step 7: the highway line-marking operation is carried out based on the target waterline.

Description

Multi-water-line cross identification positioning method based on double cameras
Technical Field
The invention relates to a dual-camera method for identifying and positioning crossing waterlines, and belongs to the technical field of road traffic.
Background
In traditional lane-line marking, a waterline is first drawn manually on the highway surface, and a worker then uses a hand-pushed device to spread the standard lane-line material (two-component paint, hot-melt material, and the like) evenly along the waterline, completing the sprayed construction of the lane line. In marking-vehicle technology that uses the waterline as the navigation target, the quality of the waterline strongly affects the precision of visual navigation. In particular, when the waterline inevitably tilts, bends, or breaks during drawing, the worker redraws a stroke over the original waterline, so that multiple waterlines cross within a local region and the target waterline for marking cannot be determined automatically.
Disclosure of Invention
The invention aims to provide a dual-camera method for identifying and positioning crossing waterlines, directed at the deficiencies of the prior art. First, a front camera with a wide visual range detects the waterlines using image-processing techniques and determines and marks the target waterline from the global waterline length (its position in the world coordinate system and the deviation of the vertical line through the cross point in the image coordinate system). The marking information is then passed as a parameter to the rear camera, which also detects the waterlines by image processing and selects the target waterline according to the marking information.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A dual-camera method for identifying and positioning crossing waterlines comprises the following steps:
Step 1: a front camera collects a wide-view-angle waterline image of the highway ahead of the marking vehicle;
Step 2: the waterline identification module extracts the waterline edge image from the wide-view-angle image;
Step 3: when waterline edge images cross, the standard-waterline identification module determines the standard waterline and feeds the result back to the target-waterline identification module;
Step 4: a rear camera collects a narrow-view-angle waterline image of the highway;
Step 5: the waterline identification module extracts the waterline edge image from the narrow-view-angle image;
Step 6: when waterline edge images cross, the target-waterline identification module determines the target waterline for marking, based on the standard-waterline result of step 3;
Step 7: the marking module marks the highway based on the target waterline.
The front and rear cameras are both mounted on the marking vehicle: the front camera at the front of one side of the vehicle, the marking module at the middle of the same side, and the rear camera directly in front of and adjacent to the marking module, with the front camera mounted higher above the ground than the rear camera.
Further, the method for identifying the edge image of the waterline by the waterline identification module comprises the following steps:
Step 2.1: read the gray-scale image of the waterline from the waterline image and apply median filtering to it; the median-filtering algorithm follows the formula
G(x, y) = med{ g(x − m, y − n) },  m, n ∈ [−W, +W]
where G(x, y) is the filtered gray-scale image, x and y index the rows and columns of the image, g(x − m, y − n) are the pixel values covered by the template, and med denotes sorting those pixel values by magnitude and taking the one in the middle position;
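The median filtering of step 2.1 can be sketched as follows — a minimal NumPy implementation; the replicate-edge padding is an assumption, since the patent does not specify border handling:

```python
import numpy as np

def median_filter(gray: np.ndarray, W: int = 1) -> np.ndarray:
    """G(x, y) = med{ g(x - m, y - n) }, m, n in [-W, +W]."""
    padded = np.pad(gray, W, mode="edge")            # replicate borders (assumption)
    h, w = gray.shape
    out = np.empty_like(gray)
    for x in range(h):
        for y in range(w):
            window = padded[x:x + 2 * W + 1, y:y + 2 * W + 1]
            out[x, y] = np.median(window)            # middle of the sorted pixel values
    return out
```

A single bright outlier pixel is removed by the filter, which is exactly the salt-noise suppression the step relies on before stretching.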
step 2.2: stretching the filtered gray-scale image G (x, y);
step 2.3: scan the whole stretched gray-scale image continuously with a shape template consisting of three consecutive rectangular areas — a white area, a transition area, and a black area — where the white and black areas are the same size and the transition area is narrower than both;
step 2.4: enhance the image pixels of the white and black areas that the shape template passes over using the integral-image method, which improves computational efficiency, and replace the pixels of the transition area that the template passes over with the corrected edge-feature image to eliminate burrs, yielding the integral-optimized image;
step 2.5: stretch the integral-optimized image to improve waterline-detection precision, and apply morphological processing to the extracted waterline image with a conventional dilation-erosion algorithm;
step 2.6: perform binarization on the morphologically processed image again using an adaptive-threshold method to obtain the binarized image;
step 2.7: based on the binarized image, obtain the edge image of the waterline through a waterline edge-detection algorithm (the formula image is not reproduced in the source), where E(x, y) is the pixel value at row x, column y of the waterline edge image and B(x, y) is the pixel value at row x, column y of the binarized image;
step 2.8: detect the straight lines in the waterline edge image by the Hough-transform method, obtaining for each line the slope ρ of its normal, the distance θ from the line to the origin, and the length l of the line, where the origin is the point x = 0, y = 0.
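The Hough line detection of step 2.8 can be sketched compactly in NumPy. This uses the textbook parameterization x·cos(phi) + y·sin(phi) = d; note that the patent's own symbols (ρ for the normal direction, θ for the distance to the origin) differ from this convention, and the resolution here is illustrative rather than the patent's:

```python
import numpy as np

def hough_lines(edge: np.ndarray, angle_step_deg: float = 1.0):
    """Return (angle in degrees, distance, votes) of the strongest line."""
    ys, xs = np.nonzero(edge)                        # coordinates of edge pixels
    angles = np.deg2rad(np.arange(0.0, 180.0, angle_step_deg))
    diag = int(np.ceil(np.hypot(*edge.shape)))       # largest possible |d|
    acc = np.zeros((len(angles), 2 * diag + 1), dtype=np.int32)
    for i, phi in enumerate(angles):
        # distance of each edge pixel's line for this angle, shifted to be >= 0
        d = np.round(xs * np.cos(phi) + ys * np.sin(phi)).astype(int) + diag
        np.add.at(acc[i], d, 1)                      # vote in the accumulator
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return np.rad2deg(angles[i]), j - diag, int(acc.max())
```

A vertical edge column at x = 5 accumulates all of its votes in the single cell (phi = 0°, d = 5), which is how the accumulator peak identifies the line.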
Preferably, W = 3 or 5 and n = 2; the height of the shape template is 50 pixels, and the white and black areas are each 20 pixels wide.
Further, the step 2.4 comprises the following specific steps:
First, the pixel values of the edge-feature image are calculated (the formula image is not reproduced in the source), where S_h(x, y) is the integral pixel value of the edge-feature image, S_w(x, y) that of the white rectangle, S_b(x, y) that of the black rectangle, lw is the width of the white area, and lh is the height of the shape template;
then the integral image of the edge-feature image is corrected (the formula image is not reproduced in the source), where f(x, y) is the corrected integral pixel value of the edge-feature image.
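The fast rectangle sums that step 2.4 relies on can be sketched with a summed-area table. How S_w and S_b are combined into the corrected value f(x, y) is not recoverable from the source, so only the integral-image machinery is shown:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Sum of img[top:top+h, left:left+w] in O(1) using four table lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)
```

This is why the template scan is cheap: each white-rectangle or black-rectangle sum costs four lookups regardless of the rectangle's size.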
Preferably, the standard waterline in step 3 is the longest of the crossing waterlines.
Further, the standard waterline discrimination comprises the following steps:
Step 3.1: fit and join the branch segments sharing the same normal slope ρ and origin distance θ into single straight lines, eliminating discontinuous segments and reducing the number of candidates; this addresses the problem that a waterline in the image consists of discrete, discontinuous pixels, which hinders determination of the target waterline;
step 3.2: then, calculating the length of the fitted straight line according to a distance formula between two points on the plane;
step 3.3: then select the 2 longest straight lines, denoting the line with the smaller ρ by left and the line with the larger ρ by right, and obtain their cross point (a, b) — located at row a, column b of the image — by substituting the ρ and θ of the two lines into the line equations and solving (the formula image is not reproduced in the source);
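The cross-point computation of step 3.3 amounts to intersecting two lines, which can be sketched as a 2×2 linear system. The normal-form parameterization and symbol names here are assumptions, since the patent's formula image is not reproduced:

```python
import numpy as np

def line_intersection(phi1, d1, phi2, d2):
    """Intersect two lines given in normal form x*cos(phi) + y*sin(phi) = d.
    Returns None for (near-)parallel lines."""
    A = np.array([[np.cos(phi1), np.sin(phi1)],
                  [np.cos(phi2), np.sin(phi2)]])
    if abs(np.linalg.det(A)) < 1e-9:                 # parallel: no unique cross point
        return None
    return tuple(np.linalg.solve(A, np.array([d1, d2])))
```

For example, the line x = 3 (phi = 0, d = 3) and the line y = 4 (phi = π/2, d = 4) intersect at (3, 4).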
Step 3.4: from the calibration information of the front and rear cameras, calculate the distance w_y from the cross point to the centre point of the rear camera in the world coordinate system; denote the upper end points of the two lines by S_left and S_right, and their respective distances to the cross point by d_left and d_right;
Step 3.5: determine the standard waterline and its mark label, and pass label and w_y as parameters to the target-waterline identification module for analysis and use by the rear camera:
when the cross point exists and a > h/2, feed label and w_y back to the target-waterline identification module according to the labelling formula (the formula image is not reproduced in the source);
otherwise, no feedback is performed.
Further, step 6 comprises the following steps:
Step 6.1: iteratively update w_y, the distance from the cross point to the centre point of the rear camera in the world coordinate system, by subtracting the vehicle's per-frame advance, w_y ← w_y − S_i, where S_i is the distance the marking vehicle advances during each frame interval; this allows the system to locate the multi-waterline cross point in the world coordinate system;
Step 6.2: when w_y ≠ 0, the detected straight line is considered a valid waterline when |ρ_k − ρ_{k−1}| ≤ ρ_0, |θ_k − θ_{k−1}| ≤ θ_0, and l ≥ l_0, where ρ_k and ρ_{k−1} are the normal slopes of the line in the current and previous frames, and θ_k and θ_{k−1} are the corresponding distances from the line to the origin;
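The frame-to-frame validity test of step 6.2 can be sketched as below. The exact inequality is lost with the formula image, so this assumes — as the thresholds ρ_0, θ_0, l_0 stated later in the description suggest — that a line is valid when its parameters change little between frames and it is long enough:

```python
def is_valid_waterline(rho_k, theta_k, length,
                       rho_prev, theta_prev,
                       rho_0=25.0, theta_0=2.0, l_0=0.2):
    """Temporal consistency check between consecutive frames (assumed form).
    Threshold defaults follow the values given in the description."""
    return (abs(rho_k - rho_prev) <= rho_0        # normal slope changed little
            and abs(theta_k - theta_prev) <= theta_0  # origin distance changed little
            and length >= l_0)                    # line long enough to trust
```

A line whose ρ jumps by more than ρ_0 between frames is rejected as a spurious detection rather than tracked as the waterline.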
Step 6.3: when w_y = 0, select the target waterline according to the value of label: when label = left, select the line with the smaller ρ as the target waterline; otherwise select the line with the larger ρ;
Step 6.4: calculate the distance from the image centre point to the target waterline: drop the normal from the image centre (w/2, h/2) to the target waterline, denote the intersection of this normal with the waterline by A, and record the distance from the centre point to A as d_A.
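The distance d_A of step 6.4 is the foot-of-normal distance from the image centre to the target waterline; a sketch follows, again using the normal-form line parameterization x·cos(phi) + y·sin(phi) = d as an assumption, since the source formula images are not reproduced:

```python
import numpy as np

def center_to_line_distance(w, h, phi, d):
    """Distance (in pixels) from the image centre (w/2, h/2) to the line
    x*cos(phi) + y*sin(phi) = d."""
    cx, cy = w / 2.0, h / 2.0
    # |signed offset of the centre from the line|; (cos, sin) is a unit normal
    return abs(cx * np.cos(phi) + cy * np.sin(phi) - d)
```

For a 40×30 image and the vertical line x = 10, the centre (20, 15) lies 10 pixels from the line.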
Compared with the prior art, the invention has the following beneficial effects:
the invention aims to provide a multi-water-line cross identification positioning method based on double cameras aiming at the defects of the prior art. And then the marking information is used as a parameter to be transmitted to the rear camera. And the rear camera also detects the waterline by using an image processing technology and selects a target waterline according to the marking information.
Drawings
FIG. 1 is a flow chart of a multi-water line cross-recognition positioning method based on two cameras according to the present invention;
FIG. 2 is a front camera live shot of the present invention;
FIG. 3 is a grayscale image of the present invention;
FIG. 4 is a stretched gray scale image of the present invention;
FIG. 5 is a template diagram of an edge feature of the present invention;
FIG. 6 is a graph of an integration characteristic of the present invention;
FIG. 7 is a plot of integral optimization of the present invention;
FIG. 8 is a binarized image according to the present invention;
FIG. 9 is an edge extraction image of the present invention;
FIG. 10 is a cross-point image of the present invention;
FIG. 11 shows the standard waterline recognition result of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention provides a dual-camera method for identifying and positioning crossing waterlines. First, a front camera is added; exploiting the wide visual range of this front (far-field) camera, 2 waterlines are detected by image processing, and the target waterline is determined and marked from the global waterline length (its position in the world coordinate system, and the deviation of the vertical line through the cross point in the image coordinate system). The marking information is then passed as a parameter to the rear camera, which also detects the waterlines by image processing and selects the target waterline according to the marking information. Finally, the distance along the waterline normal is calculated. The rear camera is located at the middle of the marking vehicle, between the marking construction modules, 34 cm above the ground, with the lens facing downward at an angle of less than 2° from the vertical. The front camera is located at the foremost end of the vehicle, 134 cm above the ground, inclined 25.5° from the horizontal with an error of no more than 2°, and 92 cm from the rear camera. The rear camera is a Huatengwei HT-SUA630C-T with a Huatengwei HTF31518-5MP lens; the front camera is a Huatengwei HT-SUA630C-T with a Huatengwei MV-JT2812 lens.
A single camera cannot accomplish both waterline determination and automatic marking. The front camera has a wide visual range and can identify the target waterline, but the marking device must normally sit at the middle of the vehicle, far from the front camera, so the front camera's result cannot be used directly for marking; conversely, a camera placed near the marking module has too narrow a visual range to identify the waterline. Target identification and marking can therefore be achieved only by deploying two cameras at once.
Specifically, as shown in Fig. 1, the dual-camera method for identifying and positioning crossing waterlines comprises the following steps:
Step 1: the front camera collects a wide-view-angle waterline image of the highway ahead of the marking vehicle;
Step 2: the waterline identification module extracts the waterline edge image from the wide-view-angle image;
Step 3: when waterline edge images cross, the standard-waterline identification module determines the standard waterline and feeds the result back to the target-waterline identification module;
Step 4: the rear camera collects a narrow-view-angle waterline image of the highway;
Step 5: the waterline identification module extracts the waterline edge image from the narrow-view-angle image;
Step 6: when waterline edge images cross, the target-waterline identification module determines the target waterline for marking, based on the standard-waterline result of step 3;
Step 7: the marking module marks the highway based on the target waterline.
The front and rear cameras are both mounted on the marking vehicle: the front camera at the front of one side of the vehicle, the marking module (STHX-150) at the middle of the same side, and the rear camera directly in front of and adjacent to the marking module, with the front camera mounted higher above the ground than the rear camera. Fig. 2 shows a real image captured by the front camera.
Further, the method for identifying the edge image of the waterline by the waterline identification module comprises the following steps:
Step 2.1: read the gray-scale image of the waterline from the waterline image (the gray-scale image is shown in Fig. 3) and apply median filtering to it; the median-filtering algorithm follows the formula
G(x, y) = med{ g(x − m, y − n) },  m, n ∈ [−W, +W]
where G(x, y) is the filtered gray-scale image, x and y index the rows and columns of the image, g(x − m, y − n) are the pixel values covered by the template, and med denotes sorting those pixel values by magnitude and taking the one in the middle position;
step 2.2: stretch the filtered gray-scale image G(x, y); the stretched gray-scale image, denoted S(x, y), is shown in Fig. 4;
step 2.3: scan the whole stretched gray-scale image continuously with the shape template; as shown in the edge-feature template diagram of Fig. 5, the template consists of three consecutive rectangular areas — a white area, a transition area, and a black area — where the white and black areas are the same size and the transition area is narrower than both;
step 2.4: enhance the image pixels of the white and black areas that the shape template passes over using the integral-image method (the integral-feature image is shown in Fig. 6), which improves computational efficiency, and replace the pixels of the transition area that the template passes over with the corrected edge-feature image to eliminate burrs, yielding the integral-optimized image shown in Fig. 7;
step 2.5: stretch the integral-optimized image to improve waterline-detection precision, and apply morphological processing to the extracted waterline image with a conventional dilation-erosion algorithm;
step 2.6: perform binarization on the morphologically processed image again using an adaptive-threshold method to obtain the binarized image shown in Fig. 8;
step 2.7: based on the binarized image, obtain the edge image of the waterline through a waterline edge-detection algorithm (the formula image is not reproduced in the source; the edge-extraction image is shown in Fig. 9), where E(x, y) is the pixel value at row x, column y of the waterline edge image and B(x, y) is the pixel value at row x, column y of the binarized image;
step 2.8: detect the straight lines in the waterline edge image by the Hough-transform method, obtaining for each line the slope ρ of its normal, the distance θ from the line to the origin, and the length l of the line, where the origin is the point x = 0, y = 0. In line detection, the Hough-transform resolution is 40 and the angular precision is 0.8°; constrained by the vehicle's driving direction, the detected line angles lie between 60° and 120°.
Preferably, W = 3 or 5 and n = 2; the height of the shape template is 50 pixels, the white and black areas are each 20 pixels wide, and the transition area is 2 pixels wide.
Further, the step 2.4 comprises the following specific steps:
First, the pixel values of the edge-feature image are calculated (the formula image is not reproduced in the source), where S_h(x, y) is the integral pixel value of the edge-feature image, S_w(x, y) that of the white rectangle, S_b(x, y) that of the black rectangle, lw is the width of the white area, and lh is the height of the shape template;
then the integral image of the edge-feature image is corrected (the formula image is not reproduced in the source), where f(x, y) is the corrected integral pixel value of the edge-feature image.
Preferably, the standard waterline in step 3 is the longest of the crossing waterlines.
Further, the standard waterline discrimination comprises the following steps:
Step 3.1: fit and join the branch segments sharing the same normal slope ρ and origin distance θ into single straight lines, eliminating discontinuous segments and reducing the number of candidates; this addresses the problem that a waterline in the image consists of discrete, discontinuous pixels, which hinders determination of the target waterline;
step 3.2: then, calculating the length of the fitted straight line according to a distance formula between two points on the plane;
step 3.3: then select the 2 longest straight lines, denoting the line with the smaller ρ by left and the line with the larger ρ by right, and obtain their cross point (a, b) — located at row a, column b of the image, as shown in the cross-point image of Fig. 10 — by substituting the ρ and θ of the two lines into the line equations and solving (the formula image is not reproduced in the source);
Step 3.4: from the calibration information of the front and rear cameras, calculate the distance w_y from the cross point to the centre point of the rear camera in the world coordinate system; denote the upper end points of the two lines by S_left and S_right, and their respective distances to the cross point by d_left and d_right;
Step 3.5: determine the standard waterline and its mark label (the standard-waterline recognition image is shown in Fig. 11), and pass label and w_y as parameters to the target-waterline identification module for analysis and use by the rear camera:
when the cross point exists and a > h/2, feed label and w_y back to the target-waterline identification module according to the labelling formula (the formula image is not reproduced in the source);
otherwise, no feedback is performed.
Further, step 6 comprises the following steps:
Step 6.1: iteratively update w_y, the distance from the cross point to the centre point of the rear camera in the world coordinate system, by subtracting the vehicle's per-frame advance, w_y ← w_y − S_i, where S_i is the distance the marking vehicle advances during each frame interval; this allows the system to locate the multi-waterline cross point in the world coordinate system;
Step 6.2: when w_y ≠ 0, the detected straight line is considered a valid waterline when |ρ_k − ρ_{k−1}| ≤ ρ_0, |θ_k − θ_{k−1}| ≤ θ_0, and l ≥ l_0, where ρ_k and ρ_{k−1} are the normal slopes of the line in the current and previous frames, θ_k and θ_{k−1} are the corresponding distances from the line to the origin, and the thresholds are taken as l_0 = 0.2 m, θ_0 = 2, and ρ_0 = 25;
Step 6.3: when w_y = 0, select the target waterline according to the value of label: when label = left, select the line with the smaller ρ as the target waterline; otherwise select the line with the larger ρ;
Step 6.4: calculate the distance from the image centre point to the target waterline: drop the normal from the image centre (w/2, h/2) to the target waterline, denote the intersection of this normal with the waterline by A, and record the distance from the centre point to A as d_A.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (3)

1. A dual-camera method for identifying and positioning crossing waterlines, characterized by comprising the following steps:
Step 1: a front camera collects a wide-view-angle waterline image of the highway ahead of the marking vehicle;
Step 2: the waterline identification module extracts the waterline edge image from the wide-view-angle image;
Step 3: when waterline edge images cross, the standard-waterline identification module determines the standard waterline and feeds the result back to the target-waterline identification module;
Step 4: the rear camera collects a narrow-view-angle waterline image of the highway;
Step 5: the waterline identification module extracts the waterline edge image from the narrow-view-angle image;
Step 6: when waterline edge images cross, the target-waterline identification module determines the target waterline for marking, based on the standard-waterline result of step 3;
Step 7: the marking module marks the highway based on the target waterline;
the front and rear cameras are both mounted on the marking vehicle: the front camera at the front of one side of the vehicle, the marking module at the middle of the same side, and the rear camera directly in front of the marking module, with the front camera mounted higher above the ground than the rear camera;
the method for identifying the edge image of the waterline by the waterline identification module comprises the following steps:
step 2.1: based on the waterline image, reading the gray-scale image of the waterline and applying median filtering to it, the median filtering algorithm referring to the following formula:
G(x, y) = med{ f(x − m, y − n) }, m, n ∈ [−W, +W]
wherein G(x, y) represents the filtered gray-scale image, x and y represent the row and column of the image respectively, f(x − m, y − n) represents the pixel values of the original gray-scale image covered by the template, med denotes sorting those pixel values by size and selecting the one in the middle position, and m and n are integers between −W and +W;
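The median filtering of step 2.1 can be sketched as follows. This is an illustrative NumPy implementation, not the patented one; edge-replication padding is an assumption, since the claim does not specify border handling.

```python
import numpy as np

def median_filter(f, W=1):
    """Median filtering: G(x, y) = med{ f(x-m, y-n) }, with m, n
    integers in [-W, +W], i.e. a (2W+1) x (2W+1) window.
    Borders are handled by edge replication (an assumption)."""
    f = np.asarray(f, dtype=float)
    padded = np.pad(f, W, mode="edge")
    out = np.empty_like(f)
    h, w = f.shape
    for x in range(h):
        for y in range(w):
            window = padded[x:x + 2 * W + 1, y:y + 2 * W + 1]
            out[x, y] = np.median(window)  # middle value of the sorted window
    return out
```

A single bright outlier pixel is removed by the window median, which is the intended noise-suppression behaviour before edge extraction.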
step 2.2: stretching the filtered gray-scale image G(x, y);
step 2.3: continuously scanning the whole stretched gray-scale image with a shape template, the shape template comprising three consecutive rectangular areas, namely a white area, a transition area and a black area, wherein the transition area is narrower than the rectangular white area and black area;
step 2.4: performing image pixel enhancement on the white and black areas that the shape template passes through by an integral image method so as to improve computational efficiency, and performing pixel replacement on the transition areas that the shape template passes through using the corrected edge feature image so as to eliminate burrs;
step 2.5: stretching the integrally optimized image to improve waterline detection accuracy, and performing morphological processing on the extracted waterline image using a conventional dilation and erosion algorithm;
step 2.6: binarizing the morphologically processed image again using an adaptive threshold method to obtain a binarized image;
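Step 2.6's adaptive thresholding can be illustrated with a local-mean rule. This is an assumption: the claim does not name the specific adaptive method, and the block size and offset C below are illustrative parameters, not values from the patent.

```python
import numpy as np

def adaptive_threshold(img, block=15, C=5):
    """Local-mean adaptive binarization: a pixel becomes 255 when it
    exceeds the mean of its (block x block) neighbourhood minus C.
    block and C are illustrative, not taken from the claim."""
    img = np.asarray(img, dtype=float)
    r = block // 2
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for x in range(h):
        for y in range(w):
            local_mean = padded[x:x + block, y:y + block].mean()
            out[x, y] = 255 if img[x, y] > local_mean - C else 0
    return out
```

Because the threshold follows the local mean, a bright waterline stays white even under uneven road illumination, which a single global threshold would not guarantee.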
step 2.7: based on the binary image, acquiring an edge image of the waterline through a waterline edge detection algorithm:
[formula image not reproduced in the text]
wherein E(x, y) represents the pixel value at row x, column y of the waterline edge image, and B(x, y) represents the pixel value at row x, column y of the binarized image;
step 2.8: for the edge image of the waterline, detecting straight lines in the edge image by the Hough transform method, and obtaining the straight-line normal slope ρ, the distance θ from the straight line to the origin, and the length l of the straight line, wherein the origin is the point with x = 0 and y = 0;
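Step 2.8's Hough line detection can be sketched with a minimal voting accumulator. The claim's naming is kept (rho parametrises the line normal, theta is the distance to the origin, the reverse of the common convention), and the vote count is used as a stand-in for the line length l; this is a simplified sketch, not the patented implementation.

```python
import numpy as np

def hough_lines(edge, n_angles=180):
    """Minimal Hough transform: each edge pixel votes for all lines
    x*cos(rho) + y*sin(rho) = theta that pass through it. Returns the
    parameters of the strongest line and its vote count."""
    ys, xs = np.nonzero(edge)
    diag = int(np.ceil(np.hypot(*edge.shape)))        # max possible |theta|
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    acc = np.zeros((n_angles, 2 * diag + 1), dtype=int)
    for x, y in zip(xs, ys):
        d = np.round(x * np.cos(angles) + y * np.sin(angles)).astype(int)
        acc[np.arange(n_angles), d + diag] += 1       # one vote per angle
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    rho, theta, votes = angles[i], j - diag, acc[i, j]
    return rho, theta, votes
```

For a vertical waterline at x = 5, the strongest cell is the one with a horizontal normal (rho near 0) at distance theta = 5 from the origin.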
the standard waterline in step 3 is the waterline with the longest length among the crossing waterlines;
the standard waterline distinguishing method comprises the following steps:
step 3.1: fitting and connecting broken line segments that share the same straight-line normal slope ρ and distance θ from the straight line to the origin into a single straight line, eliminating discontinuous lines and reducing the number of candidate segments, so as to address the problem that a waterline in the image consists of discrete, discontinuous pixels, which hinders judgement of the target waterline;
step 3.2: then, calculating the length of the fitted straight line according to a distance formula between two points on the plane;
step 3.3: then selecting the 2 longest straight lines, denoting the straight line with the smaller ρ by left and the straight line with the larger ρ by right, and obtaining their intersection (a, b) according to the following formula, the position being row a, column b of the image:
[formula image not reproduced in the text]
solving by substituting rho and theta of the two straight lines;
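Assuming each detected line is written as x*cos(rho) + y*sin(rho) = theta, consistent with step 2.8 (the patented formula itself is only available as an image), the intersection of step 3.3 reduces to solving a 2x2 linear system:

```python
import numpy as np

def line_intersection(rho_l, theta_l, rho_r, theta_r):
    """Intersection (a, b) of two Hough lines, each assumed to satisfy
    x*cos(rho) + y*sin(rho) = theta in the claim's naming. Raises
    numpy.linalg.LinAlgError when the lines are parallel."""
    A = np.array([[np.cos(rho_l), np.sin(rho_l)],
                  [np.cos(rho_r), np.sin(rho_r)]])
    t = np.array([theta_l, theta_r])
    a, b = np.linalg.solve(A, t)
    return a, b
```

Substituting the ρ and θ of the two lines, as the claim describes, yields the cross point; e.g. the lines x = 5 and y = 3 intersect at (5, 3).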
step 3.4: calculating, according to the calibration information of the front camera and the rear camera, the distance w_y from the cross point to the central point of the rear camera in the world coordinate system; the upper end points of the two straight lines are recorded as S_left and S_right respectively, and the distances from the two end points to the cross point are calculated and recorded as d_left and d_right respectively;
step 3.5: judging the standard waterline, marking it with label, and transmitting the parameters label and w_y to the target waterline recognition module for the rear camera to analyze and use:
when a cross point exists and a > h/2, label and w_y are fed back to the target waterline recognition module,
[formula image not reproduced in the text]
otherwise, no feedback is performed;
the step 6 comprises the following steps:
step 6.1: iteratively calculating w_y according to the following formula, updating the distance from the cross point to the central point of the rear camera in the world coordinate system so that the system can conveniently locate the multi-waterline cross point in world coordinates:
[formula image not reproduced in the text]
wherein S_i represents the distance the line marking vehicle advances during each frame interval;
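The iterative update of step 6.1 can be sketched as follows. The update rule is an assumption, since the formula is only available as an image: w_y is decreased by the per-frame advance S_i and floored at zero once the cross point is reached.

```python
def update_wy(w_y, frame_distances):
    """Iteratively update w_y, the distance from the cross point to the
    rear-camera central point, as the vehicle advances S_i per frame.
    Assumption: w_y decreases by S_i each frame, floored at 0."""
    for S_i in frame_distances:
        w_y = max(w_y - S_i, 0.0)
    return w_y
```

Under this assumption, w_y reaching 0 is the signal used in step 6.3 to switch to the post-crossing line selection.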
step 6.2: when w_y ≠ 0, the detected straight line is considered a valid waterline when the following conditions are met:
[formula image not reproduced in the text]
wherein ρ_k and ρ_(k-1) represent the straight-line normal slopes of the current frame and the previous frame respectively, θ_k and θ_(k-1) represent the distances from the straight line to the origin in the current frame and the previous frame respectively, and l_0 is a constant;
step 6.3: when w_y = 0, if label = left, the straight line with the smaller ρ is selected as the target waterline; otherwise, the straight line with the larger ρ is selected as the target waterline;
step 6.4: calculating the distance from the central point of the image to the target waterline: drawing the normal to the target waterline through the central point (w/2, h/2) of the image, recording the intersection of the normal with the waterline as A, and recording the distance from the central point to A as d_A.
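With the same assumed line parametrisation x*cos(rho) + y*sin(rho) = theta used for step 2.8, the distance d_A of step 6.4 from the image central point (w/2, h/2) to the target waterline is a point-to-line distance:

```python
import numpy as np

def center_to_line_distance(w, h, rho, theta):
    """d_A: distance from the image central point (w/2, h/2) to the
    target waterline, assumed written as x*cos(rho) + y*sin(rho) = theta.
    The foot of the perpendicular is point A of step 6.4."""
    x0, y0 = w / 2.0, h / 2.0
    return abs(x0 * np.cos(rho) + y0 * np.sin(rho) - theta)
```

For a 100 x 100 image and the vertical line x = 30, the central point (50, 50) lies 20 pixels from the line.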
2. The double-camera-based multi-water-line cross identification and positioning method according to claim 1, characterized in that: W = 3 or 5, n = 2, the height of the shape template is 50 pixels, and the white area and the black area are each 20 pixels wide.
3. The method for multi-water-line cross identification and positioning based on the double cameras as claimed in claim 1, wherein the step 2.4 comprises the following steps:
first, the pixel values of the edge feature image are calculated according to the following formula:
[formula image not reproduced in the text]
wherein S_h(x, y) represents the integral pixel value corresponding to the edge feature image, S_w(x, y) represents the integral pixel value corresponding to the white rectangle, S_b(x, y) represents the integral pixel value corresponding to the black rectangle, lw represents the width of the white area, and lh represents the height of the shape template;
then, the integral graph corresponding to the edge feature image is corrected according to the following formula:
[formula image not reproduced in the text]
where f (x, y) represents an integrated pixel value corresponding to the edge feature image after the correction.
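The integral image ("integral graph") method underlying claim 3 can be sketched generically: a summed-area table lets the sum over any rectangle (such as the white and black areas of the shape template) be read off in at most four lookups. The functions below are an illustrative sketch, not the patented S_h/S_w/S_b formulas, which are only available as images.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: S[x, y] = sum of all pixels with row <= x
    and column <= y."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def rect_sum(S, x0, y0, x1, y1):
    """Sum of img[x0:x1+1, y0:y1+1] in O(1) from the integral image S."""
    total = S[x1, y1]
    if x0 > 0:
        total -= S[x0 - 1, y1]       # strip above the rectangle
    if y0 > 0:
        total -= S[x1, y0 - 1]       # strip left of the rectangle
    if x0 > 0 and y0 > 0:
        total += S[x0 - 1, y0 - 1]   # corner subtracted twice, add back
    return total
```

This is the reason step 2.4 gains efficiency: while the template slides over the image, each white or black rectangle sum costs constant time instead of a fresh pixel loop.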

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210978997.2A CN115082701B (en) 2022-08-16 2022-08-16 Multi-water-line cross identification positioning method based on double cameras

Publications (2)

Publication Number Publication Date
CN115082701A CN115082701A (en) 2022-09-20
CN115082701B (en) 2022-11-08

Family

ID=83244421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210978997.2A Active CN115082701B (en) 2022-08-16 2022-08-16 Multi-water-line cross identification positioning method based on double cameras

Country Status (1)

Country Link
CN (1) CN115082701B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115509122B (en) * 2022-11-21 2023-03-21 山东高速集团有限公司创新研究院 Online optimization control method and system for unmanned line marking vehicle based on machine vision navigation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010170488A (en) * 2009-01-26 2010-08-05 Nissan Motor Co Ltd Lane recognition device, and lane recognition method
CN106960192A (en) * 2017-03-23 2017-07-18 深圳智达机械技术有限公司 Based on the preceding roadmarking extraction system to camera in automatic Pilot
CN112706835A (en) * 2021-01-07 2021-04-27 济南北方交通工程咨询监理有限公司 Expressway unmanned marking method based on image navigation
CN213342426U (en) * 2020-12-29 2021-06-01 山东交通学院 Lofting water line image acquisition device for automatic line marking vehicle
CN114072840A (en) * 2020-05-26 2022-02-18 百度时代网络技术(北京)有限公司 Depth-guided video repair for autonomous driving
CN114299247A (en) * 2021-08-31 2022-04-08 武汉理工大学 Rapid detection and problem troubleshooting method for road traffic sign lines
CN114808649A (en) * 2022-06-06 2022-07-29 仲恺农业工程学院 Highway marking method based on vision system control

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6582055B2 (en) * 2015-10-22 2019-09-25 京セラ株式会社 Road surface state determination device, imaging device, imaging system, and road surface state determination method
IT201600094858A1 (en) * 2016-09-21 2018-03-21 St Microelectronics Srl PROCEDURE FOR A LOW COST ADVANCED CROSS TRAFFIC ALERT, CORRESPONDING PROCESSING SYSTEM, CROSS TRAFFIC ALERT AND VEHICLE SYSTEM
CN112115778B (en) * 2020-08-11 2023-07-21 华南理工大学 Intelligent lane line identification method under ring simulation condition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on water hazard detection based on line structured light sensor for long-distance all day; Haiyan Shao et al.; 2015 IEEE International Conference on Mechatronics and Automation (ICMA); 2015-09-03; pp. 1785-1789 *
Research on construction technology of intelligent road markings for expressways; Wang Zhenqing et al.; China Plant Engineering; 2022-07-10; pp. 200-202 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant