CN109583393B - Lane line end point identification method and device, equipment and medium - Google Patents

Lane line end point identification method and device, equipment and medium

Info

Publication number
CN109583393B
CN109583393B (application CN201811478746.8A)
Authority
CN
China
Prior art keywords
lane line
lane
end point
target detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811478746.8A
Other languages
Chinese (zh)
Other versions
CN109583393A (en)
Inventor
高三元
冯汉平
鞠伟平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kuandong Huzhou Technology Co ltd
Original Assignee
Kuandeng Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kuandeng Beijing Technology Co ltd filed Critical Kuandeng Beijing Technology Co ltd
Priority to CN201811478746.8A priority Critical patent/CN109583393B/en
Publication of CN109583393A publication Critical patent/CN109583393A/en
Application granted granted Critical
Publication of CN109583393B publication Critical patent/CN109583393B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a lane line endpoint identification method, device, equipment, and medium. The method comprises at least the following steps: defining, according to the lane line contained in a lane line sample image, a bounding box for framing the lane line endpoint contained in that image; performing target detection in an image to be identified using a convolutional neural network-based target detection algorithm, according to the definition of the endpoint bounding box, so as to identify the bounding box of the lane line endpoint; and determining the position of the lane line endpoint according to the target detection result. By defining an appropriate bounding box for the lane line endpoint according to the lane line and performing target detection based on bounding box regression, the method can accurately identify the lane line endpoint in the image to be identified and determine its position.

Description

Lane line end point identification method and device, equipment and medium
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a lane line end point recognition method, apparatus, device, and medium.
Background
With the rapid development of machine learning technology, deep learning models are being used in more and more areas, including key point detection. The effectiveness of key point detection can vary considerably across application scenarios.
For example, in face key point detection, the key points are usually located in or near the middle of the image, so relatively accurate detection results can be obtained. In high-precision mapping, an important task is to extract the end points of dashed lane lines, and in the prior art the face key point detection scheme has also been applied to detecting these end points.
However, since the end points of dashed lane lines are usually located near the edges of the image, it is often difficult to obtain accurate detection results with that scheme.
Disclosure of Invention
The embodiments of the application provide a lane line endpoint identification method, device, equipment, and medium, which are used to solve the following technical problem in the prior art: existing key point detection schemes struggle to obtain accurate detection results for the end points of dashed lane lines.
The embodiment of the application adopts the following technical scheme:
a lane line end point recognition method, comprising:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
and judging the position of the lane line end point according to the target detection result.
Optionally, the determining, according to the result of the target detection, the position of the lane line endpoint specifically includes:
image segmentation is carried out in the identified boundary box by utilizing an image segmentation algorithm so as to segment the foreground and the background;
and judging the position of the lane line end point according to the result of the target detection and the result of the image segmentation.
Optionally, the defining a bounding box for framing the lane line endpoint included in the lane line sample image according to the lane line included in the lane line sample image specifically includes:
defining a bounding box for framing lane lines contained in the lane line sample image;
and defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the width and/or the height of the boundary box of the lane line.
Optionally, the defining a bounding box for framing the lane line endpoint included in the lane line sample image according to the width and/or height of the lane line bounding box further includes:
and limiting the maximum size of the boundary box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and the side length of the square is not greater than the minimum of: the size threshold, the width of the lane line's bounding box, and its height.
Optionally, the target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the bounding box of the lane line end point specifically includes:
acquiring a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
Optionally, the determining, according to the result of the target detection, the position of the lane line endpoint specifically includes:
and determining the position of the lane line end point according to the center point of the boundary box of the identified lane line end point.
Optionally, the lane line is a dashed lane line.
A lane line end point recognition device, comprising:
the definition module is used for defining a bounding box for framing the lane line end points contained in the lane line sample image according to the lane lines contained in the lane line sample image;
the recognition module is used for carrying out target detection in the image to be recognized by utilizing a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to recognize the boundary box of the lane line end point;
and the judging module is used for judging the position of the lane line end point according to the target detection result.
Optionally, the determining module determines the position of the lane line endpoint according to the result of the target detection, and specifically includes:
the judging module performs image segmentation in the identified bounding box by utilizing an image segmentation algorithm so as to segment a foreground and a background;
and judging the position of the lane line end point according to the result of the target detection and the result of the image segmentation.
Optionally, the defining module defines a bounding box for framing a lane line endpoint included in the lane line sample image according to a lane line included in the lane line sample image, and specifically includes:
the definition module is used for defining a boundary box for framing lane lines contained in the lane line sample image;
and defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the width and/or the height of the boundary box of the lane line.
Optionally, the defining module defines a bounding box for framing a lane line endpoint included in the lane line sample image according to the width and/or the height of the bounding box of the lane line, and further includes:
and the definition module limits the maximum size of the boundary box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and the side length of the square is not greater than the minimum of: the size threshold, the width of the lane line's bounding box, and its height.
Optionally, the identifying module performs target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the bounding box of the lane line end point, and specifically includes:
the recognition module acquires a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
Optionally, the determining module determines the position of the lane line endpoint according to the result of the target detection, and specifically includes:
and the judging module judges the position of the lane line end point according to the central point of the boundary box of the identified lane line end point.
Optionally, the lane line is a dashed lane line.
A lane line end point recognition apparatus comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
and judging the position of the lane line end point according to the target detection result.
A non-transitory computer storage medium storing computer-executable instructions for lane line end point identification, the computer-executable instructions configured to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
and judging the position of the lane line end point according to the target detection result.
The above at least one technical scheme adopted by the embodiment of the application can achieve the following beneficial effects: by defining a proper boundary box for the lane line end point according to the lane line, and carrying out target detection based on boundary box regression, the lane line end point can be accurately identified in the image to be identified, and the position of the lane line end point can be determined.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a flow chart of a lane line end point recognition method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of a dashed lane line end point and its bounding box according to some embodiments of the present application;
FIG. 3 is a detailed flowchart of the lane line end point recognition method according to some embodiments of the present application;
FIG. 4 is a schematic structural diagram of a lane line end point recognition device according to some embodiments of the present application;
fig. 5 is a schematic structural diagram of a lane line end point recognition apparatus, corresponding to fig. 1, according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In some embodiments of the application, a manner of defining bounding boxes for lane line end points is proposed. The defined bounding boxes differ from conventional bounding boxes for objects that occupy a large area (such as cars or airplanes), which generally must enclose the object's edges with as tight a rectangle as possible. The bounding boxes defined in this application need not satisfy that constraint, on the one hand because the target is a single end point, and on the other hand because the bounding box can be defined with reference to the corresponding lane line.
Based on the defined bounding box of the lane line end point, a model can be trained through bounding box regression and used to identify the bounding boxes of lane line end points both in the lane line sample images and in images to be identified outside the samples, and thereby determine the positions of the end points. To improve recognition accuracy, further processing such as image segmentation and image enhancement may be performed, and the position of the lane line end point determined comprehensively. The scheme of the application is described in detail below.
Fig. 1 is a flow chart of a lane line end point recognition method according to some embodiments of the present application. From a device perspective, the execution subject of this flow may be one or more computing devices, such as a single machine learning server, a machine learning server cluster, or an image segmentation server; from a program perspective, the execution subject may be a program loaded on these computing devices, such as a neural network modeling platform or an image processing platform.
The flow in fig. 1 may include the steps of:
s102: and defining a boundary box for framing the end point of the lane line contained in the lane line sample image according to the lane line contained in the lane line sample image.
In some embodiments of the application, each lane line image typically contains one or two lane line end points. The lane lines may vary according to actual recognition needs: dashed or solid, single or double, white or yellow, and so on. In practice, a dashed lane line consists of multiple discontinuous line segments, which makes its end points difficult to identify accurately; the scheme of the application achieves a particularly good recognition effect on dashed lane line end points, so the following embodiments mainly take the lane line in fig. 1 to be a dashed lane line as an example.
In some embodiments of the application, there are a plurality of lane line sample images for training a corresponding machine learning model for detecting a bounding box of a lane line endpoint based at least on the definition of the bounding box. The bounding box of the lane line end point may be defined based on a variety of factors such as the lane line itself, the relative proportions of other objects in the image, a preset size threshold, the extent of distance of the lane line end point from the image edge, and the like.
S104: according to the definition of the boundary box of the lane line endpoint, a target detection algorithm based on a convolutional neural network is utilized to perform target detection in an image to be identified (mainly an image other than a sample image, such as a newly acquired road pavement image to be identified, etc.) so as to identify the boundary box of the lane line endpoint.
In some embodiments of the present application, the convolutional neural network performs local processing on the image to be identified and then derives an overall result from the multiple local processing results, so that bounding boxes can be extracted more accurately for small targets such as lane line end points.
S106: determine the position of the lane line end point according to the target detection result.
In some embodiments of the present application, after identifying the bounding box of the lane line endpoint through object detection, the position of the lane line endpoint may be determined directly according to the bounding box, for example, the position of the center point of the bounding box or the position of any point in the middle area of the bounding box is determined as the position of the lane line endpoint; alternatively, other algorithms may be employed to further identify within the bounding box to determine the location of the lane line end point.
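As a concrete illustration of the direct case, the following minimal Python sketch takes the center of an identified bounding box as the end point position; the function name and the (x1, y1, x2, y2) corner convention are assumptions made here for illustration, not formats fixed by this application.

```python
# Minimal sketch: take the center of a detected bounding box as the
# lane line end point position. The (x1, y1, x2, y2) corner convention
# is an assumption made for illustration.
def endpoint_from_box(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```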
By the method of fig. 1, by defining a proper boundary box for the lane line end point according to the lane line, object detection is performed based on the regression of the boundary box, so that the lane line end point can be accurately identified in the image to be identified, and the position of the lane line end point can be determined.
Some embodiments of the present application also provide some specific implementations of the method, as well as extensions thereof, based on the method of fig. 1, as described below.
In some embodiments of the present application, for step S106, determining the position of the lane line end point according to the target detection result may include: performing image segmentation within the identified bounding box using an image segmentation algorithm, so as to separate foreground from background; and determining the position of the lane line end point according to the image segmentation result, or according to a combination of the target detection result and the image segmentation result. In the latter case, for example, the coordinates of the center point of the identified end point bounding box may be averaged with the coordinates of at least one foreground pixel obtained by the segmentation, and the resulting coordinates taken as the position of the lane line end point.
The foreground pixels may be, for example, lane line pixels (more specifically, lane line edge pixels), and the background pixels may be road surface pixels other than the lane line. The image segmentation may be implemented with a correspondingly trained model; if the labeling of the samples used for training (such as bounding box images of lane lines) is sufficiently accurate (for example, accurate to the lane line end point pixels), the lane line end point may be segmented directly as foreground. An image semantic segmentation algorithm can be adopted, which helps obtain a more accurate segmentation result.
The algorithms adopted for the target detection and the image segmentation are not specifically limited here; an existing algorithm, or one adapted to the actual scene, may be used, as long as the described effect can be achieved. For example, the Mask R-CNN algorithm may be used.
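To make the combination step concrete, the following hedged Python sketch fuses the detected box center with the centroid of the foreground pixels segmented inside the box. The function name, the binary-mask input format, and the simple averaging rule are illustrative assumptions; the application only requires that the two results be combined (for example, by averaging coordinates).

```python
import numpy as np

def fuse_endpoint(box, mask):
    """Combine a detection estimate with a segmentation estimate.

    box:  (x1, y1, x2, y2) in image coordinates (assumed convention).
    mask: 2D bool array covering the box region, True = foreground.
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # detection estimate: box center
    ys, xs = np.nonzero(mask)                    # foreground pixel coordinates
    if len(xs) == 0:                             # no foreground found: fall back
        return cx, cy
    fx, fy = x1 + xs.mean(), y1 + ys.mean()      # segmentation estimate: centroid
    return (cx + fx) / 2.0, (cy + fy) / 2.0      # average the two estimates
```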
In some embodiments of the present application, assuming the end point's bounding box is defined according to the lane line itself, for step S102, defining a bounding box for framing the lane line end point contained in the lane line sample image according to the lane line contained in that image may include: defining a bounding box for framing the lane line contained in the lane line sample image; and defining a bounding box for framing the lane line end point according to the width and/or height of the lane line's bounding box. The lane line's bounding box may refer to the bounding box of each individual segment of a dashed lane line, or to the bounding box of the whole lane line; it may be determined in the same way as the familiar bounding boxes of objects such as cars or airplanes.
The lane line end point is directly related to the lane line on which it lies, so it is reasonable to size its bounding box with reference to that lane line. The bounding boxes so defined are therefore not necessarily the same size in every lane line sample image, but each is appropriately sized relative to its own lane line, which allows the features of the end point region to be extracted more effectively.
For example, the bounding box of the lane line may be determined first to obtain its width and height; the smaller of the two then defines the width and/or height of the end point's bounding box.
Further, when the lane line itself occupies a large proportion of the image, an over-large end point bounding box might be derived from the lane line's bounding box. To address this, a size threshold may be preset to limit the maximum size of the end point's bounding box. The size threshold may be a uniform value applied to every sample picture (for example, the width and height of the bounding box are each limited to at most 50 pixels), or an adaptive value set according to the size of each sample picture (for example, the width and height of the bounding box are each limited to at most one-twentieth of the smaller of the width and height of the corresponding image).
In some embodiments of the present application, the bounding box of the lane line end point may be defined as a rectangle; defining it as a square instead gives better symmetry and reduces the number of size parameters (width and height merge into a single side length), saving resources. However, in some images the end point may lie very close to the image edge; a square bounding box would then have too small a side length for useful feature extraction, and in that case the bounding box may be defined as a rectangle so that features can still be extracted along the direction perpendicular to the edge.
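The size rule described above can be stated compactly in code. The following sketch assumes the end point box is square and centered on a labeled end point coordinate, with the side length min(min(w, h), threshold) that also appears later in S306; the function name and argument layout are illustrative.

```python
def endpoint_bounding_box(endpoint, lane_w, lane_h, size_threshold=50):
    """Square bounding box for a lane line end point.

    lane_w, lane_h: width and height of the lane line's own bounding box.
    size_threshold: preset cap in pixels (50 matches the example in S306).
    """
    side = min(min(lane_w, lane_h), size_threshold)  # min(min(w, h), 50)
    half = side / 2.0
    ex, ey = endpoint
    return (ex - half, ey - half, ex + half, ey + half)
```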
More intuitively, some embodiments of the present application provide a schematic diagram of a dashed lane line end point and its bounding box, as shown in FIG. 2. FIG. 2 shows a dashed lane line; the bounding box of each of its two end points is indicated by a dashed square, and the center point of each bounding box (marked with a cross) may be regarded as the end point.
In some embodiments of the present application, for step S104, performing target detection in the image to be identified using a convolutional neural network-based target detection algorithm, according to the definition of the bounding box of the lane line end point, may include: acquiring a plurality of lane line sample images in at least one lane scene; labeling the lane lines and lane line end points contained in each sample image; training a bounding box regression model using the convolutional neural network-based target detection algorithm, according to the sample images, their labels, and the definition of the end point bounding box; and performing target detection in the image to be identified using the trained bounding box regression model.
Lane line images in different lane scenes may exhibit scene-specific features, so distinguishing lane scenes facilitates more accurate identification of lane line end points later. The lane scenes may be defined according to actual recognition requirements, such as a single-lane scene, an intersection scene, a U-turn intersection scene, a mixed dashed-solid double-line scene, a bidirectional multi-lane scene, or a curve scene; these are given only as examples for understanding and are not specifically limited here.
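As a hedged illustration of the training step, the sketch below fine-tunes an off-the-shelf CNN detector on end point boxes. The application only specifies "a target detection algorithm based on a convolutional neural network"; the choice of torchvision's Faster R-CNN, the two-class setup (background plus end point), and the synthetic one-image batch are assumptions standing in for a real labeled dataset.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a detector pretrained on COCO and replace its box predictor head
# with a 2-class head: background + "lane line end point".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One synthetic batch in torchvision's detection format, standing in for
# the labeled lane line sample images: boxes are end point bounding boxes
# derived via the size rule above, labels are the end point class.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 150.0, 150.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)   # train mode returns a dict of losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the synthetic batch would be replaced by a DataLoader over the labeled sample images, iterated for multiple epochs.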
According to the above description, some embodiments of the present application further provide a detailed flow of the lane line endpoint recognition method, as shown in fig. 3.
The flow in fig. 3 may include the following steps:
s302: a number of sample images including dashed lane lines under various lane scenarios are collected.
S304: and marking the dotted line lane line and the lane line end points contained in the dotted line lane line according to the lane scene.
S306: for each lane line endpoint, according to the corresponding size of the dotted line lane line, defining a bounding box for the lane line endpoint, specifically defined as: the bounding box is square, the side length of the bounding box is a preset size threshold, the minimum value of the width and the height of the bounding box of the corresponding dotted line lane line is expressed as min (min (w, h), 50), min () represents a function taking the minimum value, w and h represent the width and the height respectively, and 50 represents the size threshold.
S308: using the object detection algorithm, a bounding box of the lane line end point is identified, and a center point of the identified bounding box may be considered as the lane line end point.
S310: and further utilizing an image semantic segmentation algorithm to segment a foreground and a background in the identified boundary box, wherein the foreground is regarded as a lane line end point pixel, and the background is regarded as other pixels.
S312: and extracting lane line endpoints according to the image semantic segmentation result.
S314: and combining the target detection result and the image semantic segmentation result (for example, taking an average value of coordinates of the target detection result and the image semantic segmentation result), and finally judging the position of the lane line endpoint.
Based on the same thought, some embodiments of the present application further provide an apparatus, a device, and a non-volatile computer storage medium corresponding to the above method.
Fig. 4 is a schematic structural diagram of a lane line end point recognition device corresponding to fig. 1 according to some embodiments of the present application, where the device includes:
the definition module 401 is used for defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
the recognition module 402 is used for carrying out target detection in the image to be recognized by utilizing a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to recognize the boundary box of the lane line end point;
the determining module 403 determines the position of the lane line endpoint according to the result of the target detection.
Optionally, the determining module 403 determines the position of the lane line endpoint according to the result of the target detection, specifically includes:
the determining module 403 performs image segmentation in the identified bounding box using an image segmentation algorithm to perform segmentation of foreground and background;
and judging the position of the lane line end point according to the result of the target detection and the result of the image segmentation.
Optionally, the defining module 401 defines a bounding box for framing a lane line endpoint included in the lane line sample image according to a lane line included in the lane line sample image, and specifically includes:
the definition module 401 defines a bounding box for framing the lane lines contained in the lane line sample image;
and defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the width and/or the height of the boundary box of the lane line.
Optionally, the defining module 401 defines a bounding box for framing a lane line endpoint included in the lane line sample image according to the width and/or height of the lane line bounding box, and further includes:
the definition module 401 limits the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and the side length of the square is not greater than the minimum of: the size threshold, the width of the lane line's bounding box, and its height.
Optionally, the identifying module 402 performs target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the bounding box of the lane line endpoint, which specifically includes:
the recognition module 402 obtains a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
Optionally, the determining module 403 determines the position of the lane line endpoint according to the result of the target detection, specifically includes:
the determination module 403 determines a location of the lane line endpoint based on a center point of the bounding box of the identified lane line endpoint.
Optionally, the lane line is a dashed lane line.
Fig. 5 is a schematic structural diagram of a lane line end point recognition apparatus corresponding to fig. 1 according to some embodiments of the present application, the apparatus including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
and judging the position of the lane line end point according to the target detection result.
Some embodiments of the application provide a non-volatile computer storage medium for lane line end point identification, corresponding to fig. 1, storing computer-executable instructions configured to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
and judging the position of the lane line end point according to the target detection result.
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, and medium embodiments are described relatively simply because they are substantially similar to the method embodiments; for the relevant parts, refer to the description of the method embodiments.
The apparatuses, devices, and media provided by the embodiments of the present application correspond one-to-one with the methods, so they also have beneficial technical effects similar to those of the corresponding methods; since the beneficial technical effects of the methods have been described in detail above, they are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (16)

1. A lane line end point recognition method, characterized by comprising:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
according to the result of the target detection, determining the position of the lane line endpoint specifically includes: in the identified boundary box, further utilizing an image semantic segmentation algorithm to segment a foreground and a background, wherein the foreground is regarded as a lane line end point pixel, and the background is regarded as other pixels; extracting lane line endpoints according to the image semantic segmentation result; combining the target detection result and the image semantic segmentation result, and finally judging the position of the lane line end point;
the boundary box for identifying the lane line end point specifically comprises:
acquiring a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
2. The method of claim 1, wherein determining the location of the lane line end point based on the result of the target detection comprises:
image segmentation is carried out in the identified boundary box by utilizing an image segmentation algorithm so as to segment the foreground and the background;
and judging the position of the lane line end point according to the result of the target detection and the result of the image segmentation.
3. The method of claim 1, wherein defining a bounding box for framing lane line endpoints included in the lane line sample image from lane lines included in the lane line sample image specifically includes:
defining a bounding box for framing lane lines contained in the lane line sample image;
and defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the width and/or the height of the boundary box of the lane line.
4. The method of claim 3, wherein the defining a bounding box for framing lane line endpoints contained in the lane line sample image according to a width and/or a height of the bounding box of the lane line further comprises:
and limiting the maximum size of the boundary box of the lane line endpoint according to a preset size threshold.
5. The method of claim 4, wherein the bounding box of the lane line endpoint is square, and the side length of the square is not greater than the minimum of: the size threshold, the width of the lane line's bounding box, and its height.
6. The method of claim 1, wherein determining the location of the lane line end point based on the result of the target detection comprises:
and determining the position of the lane line end point according to the center point of the boundary box of the identified lane line end point.
7. The method of any one of claims 1-6, wherein the lane line is a dashed lane line.
8. A lane line end point recognition device, characterized by comprising:
the definition module is used for defining a bounding box for framing the lane line end points contained in the lane line sample image according to the lane lines contained in the lane line sample image;
the recognition module is used for carrying out target detection in the image to be recognized by utilizing a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to recognize the boundary box of the lane line end point;
the judging module is used for judging the position of the lane line end point according to the target detection result, and specifically comprises the following steps: in the identified boundary box, further utilizing an image semantic segmentation algorithm to segment a foreground and a background, wherein the foreground is regarded as a lane line end point pixel, and the background is regarded as other pixels; extracting lane line endpoints according to the image semantic segmentation result; combining the target detection result and the image semantic segmentation result, and finally judging the position of the lane line end point;
the boundary box for identifying the lane line end point specifically comprises:
acquiring a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
9. The apparatus of claim 8, wherein the determining module determines the location of the lane-line endpoint based on the result of the target detection, specifically comprising:
the judging module performs image segmentation in the identified bounding box by utilizing an image segmentation algorithm so as to segment a foreground and a background;
and judging the position of the lane line end point according to the result of the target detection and the result of the image segmentation.
10. The apparatus of claim 8, wherein the definition module defines a bounding box for framing lane line endpoints included in the lane line sample image from lane lines included in the lane line sample image, specifically comprising:
the definition module is used for defining a boundary box for framing lane lines contained in the lane line sample image;
and defining a boundary box for framing the lane line endpoint contained in the lane line sample image according to the width and/or the height of the boundary box of the lane line.
11. The apparatus of claim 10, wherein the definition module defines a bounding box for framing lane-line endpoints contained in the lane-line sample image according to a width and/or a height of the bounding box of the lane-line, further comprising:
and the definition module limits the maximum size of the boundary box of the lane line endpoint according to a preset size threshold.
12. The apparatus of claim 11, wherein the bounding box of the lane line endpoint is square, and the side length of the square is not greater than the minimum of: the size threshold, the width of the lane line's bounding box, and its height.
13. The apparatus of claim 8, wherein the determining module determines the location of the lane-line endpoint based on the result of the target detection, specifically comprising:
and the judging module judges the position of the lane line end point according to the central point of the boundary box of the identified lane line end point.
14. The apparatus of any one of claims 8 to 13, wherein the lane line is a dashed lane line.
15. A lane line end point identifying apparatus, characterized by comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
according to the result of the target detection, determining the position of the lane line endpoint specifically includes: in the identified boundary box, further utilizing an image semantic segmentation algorithm to segment a foreground and a background, wherein the foreground is regarded as a lane line end point pixel, and the background is regarded as other pixels; extracting lane line endpoints according to the image semantic segmentation result; combining the target detection result and the image semantic segmentation result, and finally judging the position of the lane line end point;
the boundary box for identifying the lane line end point specifically comprises:
acquiring a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
16. A non-transitory computer storage medium storing computer-executable instructions for lane end point identification, the computer-executable instructions configured to:
defining a bounding box for framing a lane line endpoint contained in the lane line sample image according to the lane line contained in the lane line sample image;
performing target detection in the image to be identified by using a target detection algorithm based on a convolutional neural network according to the definition of the boundary box of the lane line end point so as to identify the boundary box of the lane line end point;
according to the result of the target detection, determining the position of the lane line endpoint specifically includes: in the identified boundary box, further utilizing an image semantic segmentation algorithm to segment a foreground and a background, wherein the foreground is regarded as a lane line end point pixel, and the background is regarded as other pixels; extracting lane line endpoints according to the image semantic segmentation result; combining the target detection result and the image semantic segmentation result, and finally judging the position of the lane line end point;
the boundary box for identifying the lane line end point specifically comprises:
acquiring a plurality of lane line sample images in at least one lane scene;
marking lane lines and lane line endpoints contained in the lane line sample images respectively;
training a bounding box regression model by utilizing a target detection algorithm based on a convolutional neural network according to the plurality of lane line sample images and labels thereof and the definition of a bounding box of a lane line endpoint;
and performing target detection in the image to be identified by using the trained bounding box regression model.
CN201811478746.8A 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium Active CN109583393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811478746.8A CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811478746.8A CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Publications (2)

Publication Number Publication Date
CN109583393A CN109583393A (en) 2019-04-05
CN109583393B true CN109583393B (en) 2023-08-11

Family

ID=65926316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811478746.8A Active CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Country Status (1)

Country Link
CN (1) CN109583393B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN112053407B (en) * 2020-08-03 2024-04-09 杭州电子科技大学 Automatic lane line detection method based on AI technology in traffic law enforcement image
CN114092903A (en) * 2020-08-06 2022-02-25 长沙智能驾驶研究院有限公司 Lane line marking method, lane line detection model determining method, lane line detection method and related equipment
CN115035488A (en) * 2021-02-23 2022-09-09 北京图森智途科技有限公司 Lane line corner detection method and device, electronic equipment and storage medium
CN113449648B (en) * 2021-06-30 2024-06-14 北京纵目安驰智能科技有限公司 Method, system, equipment and computer readable storage medium for detecting indication line

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036246A (en) * 2014-06-10 2014-09-10 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN104036253A (en) * 2014-06-20 2014-09-10 Smart City System Service (China) Co., Ltd. Lane line tracking method and lane line tracking system
CN105740782A (en) * 2016-01-25 2016-07-06 北京航空航天大学 Monocular vision based driver lane-changing process quantization method
CN106663207A (en) * 2014-10-29 2017-05-10 微软技术许可有限责任公司 Whiteboard and document image detection method and system
CN106682646A (en) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 Method and apparatus for recognizing lane line
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108545019A (en) * 2018-04-08 2018-09-18 多伦科技股份有限公司 A kind of safety driving assist system and method based on image recognition technology

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6190830B2 (en) * 2015-02-10 2017-08-30 本田技研工業株式会社 Driving support system and driving support method
JP6889005B2 (en) * 2017-04-05 2021-06-18 株式会社Soken Road parameter estimator

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036246A (en) * 2014-06-10 2014-09-10 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN104036253A (en) * 2014-06-20 2014-09-10 Smart City System Service (China) Co., Ltd. Lane line tracking method and lane line tracking system
CN106663207A (en) * 2014-10-29 2017-05-10 微软技术许可有限责任公司 Whiteboard and document image detection method and system
CN105740782A (en) * 2016-01-25 2016-07-06 北京航空航天大学 Monocular vision based driver lane-changing process quantization method
CN106682646A (en) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 Method and apparatus for recognizing lane line
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108545019A (en) * 2018-04-08 2018-09-18 多伦科技股份有限公司 A kind of safety driving assist system and method based on image recognition technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A machine learning-based ADAS lane type discrimination method; Guo Jianying et al.; 《汽车电器》 (Auto Electric Parts); 2017-12-20; pp. 22-24, 28 *

Also Published As

Publication number Publication date
CN109583393A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109583393B (en) Lane line end point identification method and device, equipment and medium
CN108038474B (en) Face detection method, convolutional neural network parameter training method, device and medium
US9990546B2 (en) Method and apparatus for determining target region in video frame for target acquisition
CN109426801B (en) Lane line instance detection method and device
EP3620981B1 (en) Object detection method, device, apparatus and computer-readable storage medium
WO2018103608A1 (en) Text detection method, device and storage medium
US9501703B2 (en) Apparatus and method for recognizing traffic sign board
US20170103258A1 (en) Object detection method and object detection apparatus
CN110913243B (en) Video auditing method, device and equipment
US20160307050A1 (en) Method and system for ground truth determination in lane departure warning
CN111191611A (en) Deep learning-based traffic sign label identification method
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
WO2021088504A1 (en) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN112699711B (en) Lane line detection method and device, storage medium and electronic equipment
CN111191482B (en) Brake lamp identification method and device and electronic equipment
CN109711341B (en) Virtual lane line identification method and device, equipment and medium
CN113345015A (en) Package position detection method, device and equipment and readable storage medium
CN110728229B (en) Image processing method, device, equipment and storage medium
CN112785595B (en) Target attribute detection, neural network training and intelligent driving method and device
CN111426299A (en) Method and device for ranging based on depth of field of target object
CN114494398B (en) Processing method and device of inclined target, storage medium and processor
CN113031010B (en) Method, apparatus, computer readable storage medium and processor for detecting weather
CN112183485B (en) Deep learning-based traffic cone detection positioning method, system and storage medium
CN110555344A (en) Lane line recognition method, lane line recognition device, electronic device, and storage medium
CN112597960A (en) Image processing method, image processing device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 108-27, Building 1, No. 611 Yunxiu South Road, Wuyang Street, Deqing County, Huzhou City, Zhejiang Province, 313200 (Moganshan National High tech Zone)

Patentee after: Kuandong (Huzhou) Technology Co.,Ltd.

Address before: 811, 8 / F, 101, 3-8 / F, building 17, rongchuang Road, Chaoyang District, Beijing 100012

Patentee before: KUANDENG (BEIJING) TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190405

Assignee: Zhejiang Kuandong Yuntu Technology Co.,Ltd.

Assignor: Kuandong (Huzhou) Technology Co.,Ltd.

Contract record no.: X2024980001061

Denomination of invention: A lane line endpoint recognition method, device, equipment, and medium

Granted publication date: 20230811

License type: Common License

Record date: 20240119