WO2020258894A1 - Lane Line Attribute Detection - Google Patents

Lane Line Attribute Detection

Info

Publication number
WO2020258894A1
WO2020258894A1 · PCT/CN2020/076036 · CN2020076036W
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
value
lane line
probability
point
Prior art date
Application number
PCT/CN2020/076036
Other languages
English (en)
French (fr)
Inventor
Zhang Yashu (张雅姝)
Lin Peiwen (林培文)
Cheng Guangliang (程光亮)
Shi Jianping (石建萍)
Original Assignee
Beijing SenseTime Technology Development Co., Ltd. (北京市商汤科技开发有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SenseTime Technology Development Co., Ltd. (北京市商汤科技开发有限公司)
Priority to JP2021500086A (patent JP7119197B2)
Priority to SG11202013052UA
Priority to KR1020217000803A
Priority to US17/137,030 (publication US20210117700A1)
Publication of WO2020258894A1

Classifications

    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06F 18/2155: Generating training patterns; bootstrap methods characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F 18/254: Fusion techniques of classification results, e.g. of results related to same input data
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
    • G06T 5/80: Geometric correction
    • G06T 7/13: Edge detection
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/7753: Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • G06V 10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06T 2207/10024: Color image
    • G06T 2207/20076: Probabilistic image processing
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30256: Lane; road marking

Definitions

  • The embodiments of the present disclosure relate to computer technology, and in particular to a lane line attribute detection method, device, electronic device, and smart device.
  • Assisted driving and automatic driving are two important technologies in the field of intelligent driving. By means of assisted or automatic driving, the spacing between vehicles can be reduced, the occurrence of traffic accidents can be reduced, and the driver's burden can be lightened; these technologies therefore play an important role in intelligent driving.
  • To support these technologies, lane line attribute detection is required. Lane line attribute detection identifies the type of each lane line on the road, such as a white solid line or a white dashed line. Based on the detection results, path planning, path deviation warning, and traffic flow analysis can be performed, and the results can also serve as a reference for precise navigation.
  • Lane line attribute detection is therefore of great significance to assisted and automatic driving; how to perform it accurately and efficiently is an important research topic.
  • the embodiments of the present disclosure provide a technical solution for lane line attribute detection.
  • The first aspect of the embodiments of the present disclosure provides a lane line attribute detection method, including: acquiring a road surface image collected by an image acquisition device installed on a smart device; and determining probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line-type attribute probability maps, and edge attribute probability maps;
  • each color attribute probability map represents the probability that each point in the road image belongs to the color corresponding to that probability map;
  • each line-type attribute probability map represents the probability that each point in the road image belongs to the line type corresponding to that probability map;
  • each edge attribute probability map represents the probability that each point in the road image belongs to the edge type corresponding to that probability map; according to the probability maps, the lane line attributes in the road image are determined.
  • a second aspect of the embodiments of the present disclosure provides a lane line attribute detection device, including:
  • The first acquisition module is used to acquire the road surface image collected by an image acquisition device installed on the smart device; the first determination module is used to determine probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line-type attribute probability maps, and edge attribute probability maps.
  • There are N1 color attribute probability maps, N2 line-type attribute probability maps, and N3 edge attribute probability maps, where N1, N2, and N3 are all integers greater than 0. Each color attribute probability map represents the probability that each point in the road image belongs to the color corresponding to that map; each line-type attribute probability map represents the probability that each point belongs to the line type corresponding to that map; and each edge attribute probability map represents the probability that each point belongs to the edge type corresponding to that map. The second determination module is used to determine the lane line attributes in the road image according to the probability maps.
  • a third aspect of the embodiments of the present disclosure provides an electronic device, including:
  • the memory is used to store program instructions; the processor is used to call and execute the program instructions in the memory to execute the method steps described in the first aspect above.
  • a fourth aspect of the embodiments of the present disclosure provides an intelligent driving method for use in a smart device, including:
  • a fifth aspect of the embodiments of the present disclosure provides a smart device, including:
  • The image acquisition device is used to acquire road images; the memory is used to store program instructions that, when executed, implement the lane line attribute detection method described in the first aspect; and the processor is used to execute the program instructions stored in the memory on the road surface image acquired by the image acquisition device, so as to detect the lane line attributes in the road image and, according to the detected attributes, output prompt information or perform driving control of the smart device.
  • a sixth aspect of the embodiments of the present disclosure provides a non-volatile readable storage medium in which a computer program is stored, and the computer program is configured to execute the method steps described in the first aspect.
  • The lane line attribute detection method, device, electronic device, and smart device provided in the embodiments of the present disclosure divide the lane line attributes into three dimensions: color, line type, and edge, and obtain attribute probability maps for the points of the road image in these dimensions. Based on at least two of the three kinds of attribute probability maps, the lane line attributes in the road image can be determined. Since each kind of probability map addresses the lane line attribute of a single dimension, determining each kind of probability map from the road image can be regarded as a single-task detection, which reduces the complexity of the detection task. The lane line attributes in the road image are then determined from the results of the individual tasks; that is, the detection results are combined to obtain the lane line attributes.
  • In other words, the lane line attribute detection method detects each attribute dimension of the lane lines separately and then merges the detection results. This improves the accuracy and robustness of lane line attribute prediction, so that more accurate detection results can be obtained even when the method is applied to scenes of higher complexity.
  • FIG. 1 is a schematic diagram of a scene of a lane line attribute detection method provided by an embodiment of the disclosure.
  • FIG. 2 is a schematic flowchart of a method for detecting lane line attributes according to an embodiment of the disclosure.
  • FIG. 3 is a schematic flowchart of a method for detecting lane line attributes according to another embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a method for detecting lane line attributes according to still another embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of a neural network training method for detecting lane line attributes according to an embodiment of the disclosure.
  • FIG. 6 is a schematic structural diagram of a convolutional neural network provided by an embodiment of the disclosure.
  • FIG. 7 is a schematic diagram of a flow of road image processing performed by a neural network for detecting lane line attributes according to an embodiment of the disclosure.
  • FIG. 8 is a module structure diagram of a lane line attribute detection device provided by an embodiment of the disclosure.
  • FIG. 9 is a block diagram of a module structure of a lane line attribute detection device provided by another embodiment of the disclosure.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • FIG. 11 is a schematic structural diagram of a smart device provided by an embodiment of the disclosure.
  • FIG. 12 is a schematic flowchart of an intelligent driving method provided by an embodiment of the disclosure.
  • FIG. 1 is a schematic diagram of a scene of a lane line attribute detection method provided by an embodiment of the disclosure.
  • this method can be applied to a vehicle 120 equipped with an image acquisition device 110.
  • the image acquisition device 110 may be a device with a shooting function installed on the vehicle 120, for example, a camera, a driving recorder and other devices.
  • The road image is collected by the image acquisition device on the vehicle, and the attributes of the lane lines on the road where the vehicle is located are detected using the method provided in the present disclosure, so that the detection results can be applied to assisted or autonomous driving, for example in route planning, route deviation warning, and traffic flow analysis.
  • the lane line attribute detection method provided in the present disclosure is also applicable to smart devices that require road recognition such as robots or blind guide devices.
  • FIG. 2 is a schematic flowchart of a method for detecting lane line attributes according to an embodiment of the disclosure. As shown in FIG. 2, the method includes steps S201-S203.
  • The image acquisition device installed on the vehicle can collect road surface images in real time as the vehicle travels. Through the subsequent steps, continuously updated lane line attribute detection results can then be obtained from the collected road images.
  • The aforementioned probability maps include at least two of: color attribute probability maps, line-type attribute probability maps, and edge attribute probability maps.
  • Each color attribute probability map corresponds to one color, so the N1 color attribute probability maps correspond to N1 colors. Likewise, each line-type attribute probability map corresponds to one line type, so the N2 line-type attribute probability maps correspond to N2 line types; and each edge attribute probability map corresponds to one kind of edge, so the N3 edge attribute probability maps correspond to N3 kinds of edges.
  • Each color attribute probability map represents the probability that each point in the road image belongs to that color; each line-type attribute probability map represents the probability that each point belongs to that line type; and each edge attribute probability map represents the probability that each point belongs to that kind of edge.
  • N1, N2, and N3 are all integers greater than zero.
  • The above probability maps can be determined by a neural network: the road image is input to the neural network, and the neural network outputs the probability maps. The neural network may include, but is not limited to, a convolutional neural network.
  • the attributes of the lane line are divided into three dimensions of color, line type, and edge, and the probability of each point of the road image in at least two of the three dimensions is predicted through the neural network.
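As an illustration of the idea above, the following sketch shows one way a network head might emit per-pixel probabilities for the three attribute dimensions. The architecture is hypothetical (the patent does not specify it): the final layer is assumed to produce N1 + N2 + N3 channels per pixel, with a softmax applied within each attribute group.

```python
import numpy as np

np.random.seed(0)
N1, N2, N3 = 5, 10, 7          # numbers of colors, line types, edge types
H, W = 4, 4                    # tiny image size for illustration

def softmax(x, axis=0):
    # numerically stable softmax along the channel axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

logits = np.random.randn(N1 + N2 + N3, H, W)   # stand-in for the conv output
color_maps = softmax(logits[:N1])              # N1 color attribute probability maps
line_maps = softmax(logits[N1:N1 + N2])        # N2 line-type probability maps
edge_maps = softmax(logits[N1 + N2:])          # N3 edge probability maps

# Within each group, the probabilities at every pixel sum to 1.
assert np.allclose(color_maps.sum(axis=0), 1.0)
```

Each of the resulting arrays is a stack of probability maps: slice k of `color_maps` gives, for every point of the image, the probability of the k-th color.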
  • the aforementioned N1 colors may include at least one of the following: white, yellow, and blue.
  • The color dimension can also include two further classes, no lane line and other colors; that is, "no lane line" and "other colors" are each treated as a color.
  • The no-lane-line class indicates that the point of the road image does not belong to a lane line, and the other-colors class indicates that the color of the point is a color other than white, yellow, and blue.
  • Table 1 is an example of the color types in the color dimension. As shown in Table 1, the color dimension can include 5 color types (white, yellow, blue, no lane line, and other colors), and the value of N1 is 5.
  • the above-mentioned N2 line types may include at least one of the following: dashed line, solid line, double dashed line, double solid line, dashed solid line, solid dashed line, triple dashed line, and dashed dashed line.
  • In addition to the line types listed above, the line-type dimension can also include two further classes, no lane line and other line types; that is, "no lane line" and "other line types" are each treated as a line type.
  • The no-lane-line class indicates that the point of the road image does not belong to a lane line, and the other-line-types class indicates that the line type of the point is a line type other than those listed above.
  • In the above dashed-solid line, reading from left to right, the first line is a dashed line and the second is a solid line; correspondingly, in the above solid-dashed line, reading from left to right, the first line is a solid line and the second is a dashed line.
  • Table 2 is an example of the line types in the line-type dimension. As shown in Table 2, the line-type dimension can include 10 line types (the eight line types listed above plus no lane line and other line types), and the value of N2 is 10.
  • the aforementioned N3 types of edges may include at least one of the following: curb-shaped edges, fence-shaped edges, wall or flowerbed-shaped edges, virtual edges, and non-edges.
  • Non-edge indicates that the point of the road image does not belong to an edge but does belong to a lane line.
  • The edge dimension can also include two further classes, no lane line and other edges; that is, "no lane line" and "other edges" are each treated as a kind of edge. The no-lane-line class indicates that the point belongs to neither a lane line nor an edge, and the other-edges class indicates that the point belongs to an edge type other than those listed above.
  • Table 3 is an example of the edges in the edge dimension. As shown in Table 3, the edge dimension can include 7 edge types (the five edge types listed above plus no lane line and other edges), and the value of N3 is 7.
  • In this case, the neural network can output 5 color attribute probability maps, 10 line-type attribute probability maps, and 7 edge attribute probability maps.
  • Each of the 5 color attribute probability maps represents the probability that each point in the road image belongs to one of the colors in Table 1; each of the 10 line-type attribute probability maps represents the probability that each point belongs to one of the line types in Table 2; and each of the 7 edge attribute probability maps represents the probability that each point belongs to one of the edge types in Table 3.
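The class counts above can be written down as simple lists. This is only an illustration: the counts match the text, but the exact names and ordering of the classes (in particular "dashed-dashed") are assumptions, since the tables themselves are not reproduced here.

```python
# Illustrative class lists matching the counts described for Tables 1-3.
COLORS = ["white", "yellow", "blue", "no lane line", "other colors"]
LINE_TYPES = ["dashed", "solid", "double dashed", "double solid",
              "dashed-solid", "solid-dashed", "triple dashed",
              "dashed-dashed", "no lane line", "other line types"]
EDGES = ["curb", "fence", "wall or flowerbed", "virtual edge",
         "non-edge", "no lane line", "other edges"]

N1, N2, N3 = len(COLORS), len(LINE_TYPES), len(EDGES)
print(N1, N2, N3)  # 5 10 7
```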
  • For example, suppose probability map 2 identifies the probability that each point in the road image is white, and the road image is represented by a 200*200 matrix. The neural network can then output a 200*200 matrix in which the value of each element is the probability that the point at the corresponding position in the road image is white. For instance, if the value in the first row and first column of the matrix output by the neural network is 0.4, the probability that the point in the first row and first column of the road image is white is 0.4.
  • The matrix output by the neural network can thus be expressed in the form of a color attribute probability map.
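The matrix-of-probabilities idea can be made concrete with a tiny example (a 2*2 stand-in for the 200*200 matrix in the text):

```python
# A color attribute probability map for a (tiny) road image: element [i][j]
# is the probability that the point at row i, column j is white.
white_map = [
    [0.4, 0.1],
    [0.9, 0.2],
]

# The value in the first row and first column is 0.4, so the point there
# is white with probability 0.4.
print(white_map[0][0])  # 0.4
```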
  • The color attribute probability maps, the line-type attribute probability maps, and the edge attribute probability maps are three kinds of probability maps.
  • Multiple probability maps of the same kind can be used at the same time; for example, the N1 color attribute probability maps can be used together to determine the color attributes of the road image.
  • The aforementioned probability maps may be two of the color attribute probability maps, the line-type attribute probability maps, and the edge attribute probability maps; that is, two of the three kinds of probability maps can be used together to determine the lane line attributes in the road image.
  • In this case, the number of possible lane line attributes is the number of combinations of the attribute values of the two kinds of probability maps used, and each lane line attribute is a collection of one attribute value from each of the two kinds.
  • For example, a lane line attribute may be a collection of a color attribute and a line-type attribute; that is, a lane line attribute includes a color attribute value and a line-type attribute value.
  • For instance, the lane line attribute "white dashed line" is the collection of white and dashed line.
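The combination rule described above can be sketched with `itertools.product` over two hypothetical subsets of the attribute values:

```python
from itertools import product

# Hypothetical subsets of the color and line-type attribute values.
colors = ["white", "yellow"]
line_types = ["dashed", "solid"]

# Each lane line attribute is a collection of one value from each of the two
# dimensions used, e.g. ("white", "dashed") is the "white dashed line" attribute.
lane_attrs = list(product(colors, line_types))
print(len(lane_attrs))  # 4 combinations of the two dimensions
```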
  • Alternatively, the probability maps may be the color attribute probability maps, the line-type attribute probability maps, and the edge attribute probability maps together; that is, all three kinds of probability maps can be used at the same time to determine the lane line attributes in the road image.
  • In this case, the number of possible lane line attributes is the number of combinations of the attribute values of the three kinds of probability maps used, and each lane line attribute is a combination of one attribute value from each of the three kinds.
  • a lane line attribute is a combination of a color attribute, a line type attribute and an edge attribute, that is, a lane line attribute includes a color attribute, a line type attribute and an edge attribute.
  • a lane line whose attribute is a white dashed line is a combination of white, dashed and non-edge.
  • N1*N2*N3 is the number of combinations that the embodiments of the present disclosure can support; in a specific implementation, some combinations may not appear in actual use.
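With the class counts used earlier (N1 = 5, N2 = 10, N3 = 7), the number of supported combinations can be checked directly:

```python
from itertools import product

# Each supported lane line attribute is a (color, line type, edge) triple;
# N1*N2*N3 counts every combination the scheme can represent.
N1, N2, N3 = 5, 10, 7
total = len(list(product(range(N1), range(N2), range(N3))))
print(total)  # 350
```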
  • In the embodiments of the present disclosure, the lane line attributes are divided into three dimensions: color, line type, and edge, and attribute probability maps for the points of the road image in these three dimensions can be obtained. Based on at least two of the three kinds of attribute probability maps, the lane line attributes in the road image can be determined. Since each kind of probability map addresses the lane line attribute of a single dimension, determining each kind of probability map from the road image can be regarded as a single-task detection, which reduces the complexity of the detection task. The lane line attributes in the road image are then determined from the results of the individual tasks; that is, the detection results are combined to obtain the lane line attributes.
  • That is, the lane line attribute detection method detects each attribute dimension of the lane lines separately and then combines the detection results, which improves the accuracy and robustness of lane line attribute prediction.
  • In addition, the edge is used as an attribute dimension in the present disclosure, so that the method can not only accurately detect the lane line types in structured road scenes with lane markings, but can also accurately detect the various edge types in scenes where lane markings are missing or absent, for example when driving on a rural road.
  • this embodiment specifically describes the process of using the probability map to determine the lane line attributes in the road image.
  • two of the color attribute probability map, the line attribute probability map, and the edge attribute probability map can be used to determine the lane line attributes in the road image.
  • The probability maps obtained above include a first attribute probability map and a second attribute probability map.
  • The first attribute probability map and the second attribute probability map are two different kinds among the color attribute probability maps, the line-type attribute probability maps, and the edge attribute probability maps; that is, the first attribute probability map is of a different kind from the second attribute probability map.
  • FIG. 3 is a schematic flowchart of a method for detecting attributes of lane lines according to another embodiment of the present disclosure. As shown in FIG. 3, when the probability maps include a first attribute probability map and a second attribute probability map, the above step S203 of determining the lane line attributes in the road image according to the probability maps includes the following steps.
  • The road image can be preprocessed to obtain the lane lines in the road image.
  • For example, the road image can be input to a trained neural network, which outputs the lane line detection result for the road image.
  • As another example, the road image can be input to a trained semantic segmentation network, which outputs the lane line segmentation result for the road image. The method shown in FIG. 3 is then used to compute the attributes of the segmented lane lines, thereby improving the accuracy of lane line recognition.
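A minimal sketch of the preprocessing step, assuming a hypothetical binary segmentation mask has already been produced by such a network: the coordinates of the lane line points are collected, and the attribute values are later computed over exactly these points.

```python
# Hypothetical binary lane line segmentation mask (1 = lane line point).
mask = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
]

# Collect the (row, column) coordinates of the points on the lane line.
lane_points = [(i, j) for i, row in enumerate(mask)
               for j, v in enumerate(row) if v == 1]
print(lane_points)  # [(0, 1), (1, 1), (2, 2)]
```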
  • the above steps S301-S303 can determine the value of the first attribute of a lane line in the road image.
  • the first attribute is an attribute corresponding to the first attribute probability map.
  • For example, if the first attribute probability map is a color attribute probability map, the first attribute is the color attribute, and the value of the first attribute can be white, yellow, blue, other colors, and so on.
  • Assuming there are L first attribute probability maps, the neural network outputs L such maps. A point on a lane line in the road image has a corresponding probability value in each of the L maps, and the larger the probability value, the more likely the point belongs to the attribute corresponding to that map. Therefore, for each point, the probability values at the corresponding position in the L first attribute probability maps can be compared, and the first attribute value corresponding to the map with the largest probability value is taken as the value of the first attribute of that point.
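The per-point rule above is an argmax across the L probability maps. A minimal sketch with illustrative values (L = 3 maps over a 1*1 image):

```python
# Illustrative first-attribute values, one per probability map.
ATTRS = ["white", "yellow", "blue"]
# prob_maps[k][i][j] is the probability at point (i, j) in map k.
prob_maps = [
    [[0.2]],
    [[0.7]],
    [[0.1]],
]

def point_attribute(i, j):
    # Compare the probability values at (i, j) across all maps and take
    # the attribute of the map with the largest probability.
    probs = [m[i][j] for m in prob_maps]
    return ATTRS[probs.index(max(probs))]

print(point_attribute(0, 0))  # yellow
```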
  • For example, suppose the first attribute probability map is a color attribute probability map, the first attribute is the color attribute, and L is 5; that is, there are 5 color attribute probability maps, namely probability maps 0 to 4 shown in Table 4 above, each corresponding to one color attribute. If a point on a lane line in the road image has its largest probability value in probability map 1, the color attribute value of that point is determined to be the color attribute corresponding to probability map 1.
  • In this way, the value of the first attribute of each point at the position of a lane line in the road image can be obtained, and on this basis the value of the first attribute of the lane line can be determined from the values of its points.
  • For example, the first attribute value that occurs most frequently among the points at the position of the lane line may be used as the value of the first attribute of the lane line.
  • for example, when the first attribute is a color attribute, if the number of points whose first attribute value is white accounts for 80% of the total number of points and the remaining points have the value yellow, white can be used as the value of the first attribute of the lane line, that is, the value of the color attribute.
  • in response to the values of the first attribute of all points at the position of the lane line being the same, that common value may be used as the value of the first attribute of the lane line.
  • for example, when the first attribute is a color attribute and the value of the first attribute of all points at the lane line position is yellow, yellow can be used as the value of the first attribute of the lane line, that is, the value of the color attribute.
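Both cases above (the per-point values differ, or all points agree) amount to taking the most frequent per-point value. A minimal sketch, assuming the per-point first-attribute values have already been determined:

```python
from collections import Counter

def lane_line_attribute(point_values):
    """Return the attribute value held by the largest number of points on the
    lane line; when all points agree, this is simply that common value."""
    return Counter(point_values).most_common(1)[0][0]

# Hypothetical example mirroring the text: 80% white, 20% yellow.
points = ["white"] * 8 + ["yellow"] * 2
print(lane_line_attribute(points))  # -> white
```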
  • S306 Determine the value of the second attribute of the lane line according to the value of the second attribute of each point at the position of the lane line in the road image.
  • the above steps S304-S306 can determine the value of the second attribute of a lane line in the road image.
  • the second attribute is the attribute corresponding to the second attribute probability map.
  • for example, when the second attribute probability map is a line type attribute probability map, the second attribute is a line type attribute, and the value of the second attribute can be a solid line, a dashed line, a double solid line, a double dashed line, etc.
  • the neural network can output S second attribute probability maps. A point in a lane line in the road image has a corresponding probability value in each second attribute probability map; the larger the probability value, the greater the probability that the point belongs to the attribute corresponding to that probability map. Therefore, for this point, the probability values at the corresponding position in the S second attribute probability maps can be compared, and the value of the second attribute corresponding to the second attribute probability map with the largest probability value is used as the value of the second attribute of the point.
  • for example, when the second attribute probability map is a line type attribute probability map and the second attribute is a line type attribute, S is 10, that is, 10 line type attribute probability maps are included, each corresponding to one line type attribute. Assuming that a point in a lane line in the road image has its largest probability value in the first line type attribute probability map, it can be determined that the line type attribute value of the point is the line type corresponding to the first line type attribute probability map.
  • the value of the second attribute of each point at the position of a lane line in the road image can be obtained.
  • the value of the second attribute of the lane line can be determined according to the value of the second attribute of each point.
  • in response to the values of the second attribute of the points at the position of the lane line being different, the value of the second attribute shared by the largest number of points at the position of the lane line may be used as the value of the second attribute of the lane line.
  • for example, when the second attribute is a line type attribute, if the number of points whose second attribute value is a solid line accounts for 81% of the total number of points, the points whose value is a dashed line account for 15%, and the points whose value is another line type account for 4%, then the solid line can be used as the value of the second attribute of the lane line, that is, the value of the line type attribute.
  • in response to the values of the second attribute of all points at the position of the lane line being the same, that common value may be used as the value of the second attribute of the lane line.
  • for example, when the value of the second attribute of all points at the lane line position is a solid line, the solid line can be used as the value of the second attribute of the lane line, that is, the value of the line type attribute.
  • when the first attribute probability map is a color attribute probability map, L is equal to N1 and the first attribute is a color attribute; when the first attribute probability map is a line type attribute probability map, L is equal to N2 and the first attribute is a line type attribute; when the first attribute probability map is an edge attribute probability map, L is equal to N3 and the first attribute is an edge attribute.
  • likewise, when the second attribute probability map is a color attribute probability map, S is equal to N1 and the second attribute is a color attribute; when the second attribute probability map is a line type attribute probability map, S is equal to N2 and the second attribute is a line type attribute; when the second attribute probability map is an edge attribute probability map, S is equal to N3 and the second attribute is an edge attribute.
  • that is, when the first attribute probability map is a color attribute probability map, the second attribute probability map can be a line type attribute probability map or an edge attribute probability map; when the first attribute probability map is a line type attribute probability map, the second attribute probability map can be a color attribute probability map or an edge attribute probability map; when the first attribute probability map is an edge attribute probability map, the second attribute probability map can be a color attribute probability map or a line type attribute probability map.
  • the value of the first attribute and the value of the second attribute of a lane line can be combined, so that the value of the combined attribute can be used as the The value of the attribute of the lane line.
  • the combination may be performed, for example, by appending the value of the second attribute after the value of the first attribute, or by appending the value of the first attribute after the value of the second attribute.
  • for example, when the first attribute is a color attribute and the second attribute is a line type attribute, if the value of the first attribute of a certain lane line in the road image is white and the value of the second attribute is a solid line, combining the two yields "white solid line", and "white solid line" is the attribute value of the lane line.
  • the color attribute probability map, the line attribute probability map, and the edge attribute probability map can be used simultaneously to determine the lane line attributes in the road image.
  • the probability map obtained in step S203 above includes the third attribute probability map in addition to the aforementioned first attribute probability map and the second attribute probability map.
  • the third attribute probability map is one of a color attribute probability map, a line type attribute probability map, and an edge attribute probability map, and the third attribute probability map, the second attribute probability map, and the first attribute probability map are probability maps of pairwise different attributes.
  • FIG. 4 is a schematic flow chart of a method for detecting lane line attributes according to another embodiment of the present disclosure.
  • the above-mentioned probability map includes both a first attribute probability map and a second attribute probability map, as well as a third attribute probability map.
  • the following steps may also be performed.
  • S403 Determine the value of the third attribute of the lane line according to the value of the third attribute of each point at the position of the lane line in the road image.
  • the above steps S401-S403 can determine the value of the third attribute of a lane line in the road image.
  • the third attribute is the attribute corresponding to the third attribute probability map.
  • for example, when the third attribute probability map is an edge attribute probability map, the third attribute is an edge attribute, and the value of the third attribute can be a curb edge, a fence edge, a virtual edge, etc.
  • the neural network can output U third attribute probability maps. A point in a lane line in the road image has a corresponding probability value in each third attribute probability map; the larger the probability value, the greater the probability that the point belongs to the attribute corresponding to that probability map. Therefore, for this point, the probability values at the corresponding position in the U third attribute probability maps can be compared, and the value of the third attribute corresponding to the third attribute probability map with the largest probability value is used as the value of the third attribute of the point.
  • for example, when the third attribute probability map is an edge attribute probability map and the third attribute is an edge attribute, U is 7, that is, 7 edge attribute probability maps are included, each corresponding to one edge attribute. Assuming that a point in a lane line in the road image has its largest probability value in the 7th edge attribute probability map, it can be determined that the edge attribute value of this point is the edge attribute corresponding to the 7th edge attribute probability map.
  • the value of the third attribute of each point at the position of a lane line in the road image can be obtained.
  • the value of the third attribute of the lane line can be determined according to the value of the third attribute of each point.
  • in response to the values of the third attribute of the points at the position of the lane line being different, the value of the third attribute shared by the largest number of points at the position of the lane line may be used as the value of the third attribute of the lane line.
  • for example, if the number of points whose third attribute value is a curb edge accounts for 82% of the total number of points, the points whose value is a virtual edge account for 14%, and the points whose value is non-edge account for 4%, then the curb edge can be used as the value of the third attribute of the lane line, that is, the value of the edge attribute.
  • in response to the values of the third attribute of all points at the position of the lane line being the same, that common value may be used as the value of the third attribute of the lane line. For example, when the value of the third attribute of all points at the lane line position is a curb edge, the curb edge can be used as the value of the third attribute of the lane line, that is, the value of the edge attribute.
  • when the third attribute probability map is a color attribute probability map, U is equal to N1 and the third attribute is a color attribute; when the third attribute probability map is a line type attribute probability map, U is equal to N2 and the third attribute is a line type attribute; when the third attribute probability map is an edge attribute probability map, U is equal to N3 and the third attribute is an edge attribute.
  • when the combination of the value of the first attribute and the value of the second attribute of a lane line is performed in step S307, the value of the first attribute of the lane line, the value of the second attribute of the lane line, and the value of the third attribute of the lane line can be combined together. The combination may be performed, for example, by appending the value of the third attribute after the values of the second attribute and the first attribute, or by prepending the value of the third attribute before the values of the second attribute and the first attribute.
  • for example, the first attribute is a color attribute, the second attribute is a line type attribute, and the third attribute is an edge attribute. Suppose that, through the aforementioned method, the value of the first attribute of a certain lane line in the road image is white, the value of the second attribute is a solid line, and the value of the third attribute is non-edge; combining them gives the attribute value of the lane line.
  • the probability map can be obtained through a neural network.
  • the road image can be input into the neural network, and the neural network can output the above-mentioned probability map.
  • the following embodiments illustrate the training and use process of the neural network involved in the above embodiments.
  • the above-mentioned neural network may be supervisedly trained in advance using a road surface training image set that includes color type, line type, and edge type label information.
  • the road training image set includes a large number of training images.
  • each training image is obtained by collecting an actual road image and labeling it. In an example, multiple actual road images can first be collected in various scenes such as day, night, rain, tunnels, straight roads, curves, and strong light, and then pixel-level labeling is performed for each actual road image: the category of each pixel in the actual road image is labeled with color type, line type, and edge type label information, thereby obtaining the training image set.
  • the trained neural network can obtain accurate lane line attribute detection results not only in simple scenes, such as daytime scenes with good weather and lighting conditions, but also in highly complex scenes such as rain, night, tunnels, curves, and strong light.
  • the training image set involved in the above process covers various practical scenes. Therefore, a neural network trained with this training image set is robust for lane line attribute detection in various scenarios, with short detection time and high accuracy of the detection results.
  • the neural network can be trained according to the following process.
  • FIG. 5 is a schematic flowchart of a method for training a neural network for lane line attribute detection according to an embodiment of the disclosure. As shown in FIG. 5, the training process of the above-mentioned neural network may include the following steps.
  • the neural network processes the input training image, and outputs the predicted color attribute probability map, the predicted linear attribute probability map, and the predicted edge attribute probability map of the training image.
  • the training image is included in the road surface training image set.
  • the predicted color attribute probability map, the predicted line type attribute probability map, and the predicted edge attribute probability map are the probability maps actually output by the neural network.
  • S502 For each point at the position of a lane line in the training image, determine the value of the color attribute, the value of the line type, and the value of the edge attribute of the point.
  • S503 Determine the predicted color type, predicted line type, and predicted edge type of the lane line according to the value of the color attribute, the value of the line type attribute, and the value of the edge attribute of each point at the lane line position in the above-mentioned training image.
  • the predicted color type refers to the value of the color attribute of the lane line obtained from the probability map output by the neural network, the predicted line type refers to the value of the line type attribute of the lane line obtained from the probability map output by the neural network, and the predicted edge type refers to the value of the edge attribute of the lane line obtained from the probability map output by the neural network.
  • the color, line type, and edge dimensions may be processed separately to determine the predicted color type, predicted line type, and predicted edge type of a lane line in the training image.
  • when the neural network determines the line type attribute or edge attribute of each point at the position of a lane line, it judges from the entire road image whether the lane line is a dashed line or a solid line, and then assigns the probability of a dashed or solid line to the points on the lane line. This is because each pixel in the feature map extracted from the road image by the neural network aggregates information from a large area of the road image, so the line type or edge type of the lane line can be determined.
  • the color type truth map expresses the color of the training image by means of logical algebra, and the color type truth map is obtained based on the label information of the color type of the training image.
  • the line type truth map expresses the line type of the training image by means of logical algebra, and the line type truth map is obtained based on the label information of the line type of the training image.
  • the edge type truth map expresses the edge type of the training image by means of logical algebra, and the edge type truth map is obtained based on the label information of the edge type of the training image.
  • a loss function can be used to calculate the first loss value between the predicted color type and the color type in the color type truth map, the second loss value between the predicted line type and the line type in the line type truth map, and the third loss value between the predicted edge type and the edge type in the edge type truth map.
  • S505 Adjust parameter values of the neural network according to the first loss value, the second loss value, and the third loss value.
  • the network parameters of the neural network may include the size of the convolution kernel, weight information, and so on.
  • the above-mentioned loss value can be back-propagated in the neural network by way of gradient back propagation, so as to adjust the network parameter value of the neural network.
  • the above steps S501-S504 can be repeated until the first loss value, the second loss value, and the third loss value are each within a preset loss range; the parameter values of the neural network at this point are taken as the optimized parameter values, and the training of the neural network ends.
  • alternatively, the above steps S501-S504 can be repeated until the sum of the first loss value, the second loss value, and the third loss value is within another preset loss range; the parameter values of the neural network at this time are the optimized parameter values, and the training of the neural network ends.
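The adjust-and-repeat logic of steps S501-S505, with its per-loss stopping criterion, can be illustrated on a toy scalar model (the quadratic losses, targets, learning rate, and threshold are illustrative assumptions standing in for the three lane line losses; a real implementation would back-propagate through the network):

```python
def train(params, lr=0.1, loss_range=1e-4, max_steps=10000):
    """Toy gradient descent: three quadratic losses (standing in for the
    color, line type, and edge losses) each pull one parameter toward
    its target; training stops when every loss is within loss_range."""
    targets = [1.0, -2.0, 0.5]  # stand-ins for the three truth maps
    for _ in range(max_steps):
        losses = [(p - t) ** 2 for p, t in zip(params, targets)]
        if all(l < loss_range for l in losses):  # each loss within the preset range
            break
        grads = [2 * (p - t) for p, t in zip(params, targets)]
        params = [p - lr * g for p, g in zip(params, grads)]  # parameter update
    return params, losses

params, losses = train([0.0, 0.0, 0.0])
print(all(l < 1e-4 for l in losses))  # -> True
```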
  • one training image can be used to train the neural network at a time, or multiple training images can be used to train it at a time.
  • the aforementioned neural network may be a convolutional neural network
  • the convolutional neural network may include a convolutional layer, a residual network unit, an up-sampling layer, and a normalization layer.
  • the order of the convolutional layer and the residual network unit can be flexibly set as required, and the number of each layer can also be flexibly set as required.
  • the aforementioned convolutional neural network may include 6 to 10 connected convolutional layers, 7 to 12 connected residual network units, and 1 to 4 up-sampling layers.
  • when a convolutional neural network with this structure is used for lane line attribute detection, it can meet the requirements of lane line attribute detection in multiple or complex scenes, making the detection results more robust.
  • the above convolutional neural network may include 8 connected convolutional layers, 9 connected residual network units, and 2 connected upsampling layers.
  • Fig. 6 is a schematic structural diagram of a convolutional neural network provided by an embodiment of the present disclosure. As shown in Fig. 6, after a road image is input 610, it first passes through 8 consecutive convolutional layers 620 of the convolutional neural network. After the 8 convolutional layers 620, it includes 9 consecutive residual network units 630. After the consecutive 9 residual network units 630, it includes 2 consecutive up-sampling layers 640, and in the consecutive 2 up-sampling layers After 640, it is the normalization layer 650, and the normalization layer 650 finally outputs the probability map.
  • each residual network unit 630 may include 256 filters, for example 128 3×3 filters and 128 1×1 filters per layer.
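The layer ordering of Fig. 6 can be recorded as a simple structural sketch (the layer names are illustrative; only the counts and the ordering come from the figure):

```python
# Ordered layer stack of the example network in Fig. 6:
# 8 convolutional layers -> 9 residual network units ->
# 2 up-sampling layers -> 1 normalization layer emitting the probability maps.
stack = (
    ["conv"] * 8
    + ["residual_unit"] * 9
    + ["upsample"] * 2
    + ["normalize"]
)
print(len(stack))  # -> 20
```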
  • the probability maps can be output according to the following process.
  • FIG. 7 is a schematic diagram of a flow chart of road image processing performed by a neural network for detecting lane line attributes according to an embodiment of the disclosure. As shown in FIG. 7, the process of obtaining the above-mentioned probability map through the neural network is:
  • S701 Extract low-level feature information of M channels of the road image through at least one convolutional layer of the neural network.
  • M is the number of probability maps obtained in step S202.
  • when the probability maps include two of the three attribute types, M is the sum of the corresponding two of N1, N2, and N3 (for example, N1 and N2); when all three are included, M is the sum of N1, N2, and N3.
  • the convolutional layer can reduce the resolution of the road image and retain the low-level features of the road image.
  • the low-level feature information of the road image may include edge information, straight line information, and curve information in the image.
  • each of the M channels of the above road image corresponds to one color attribute, one line type attribute, or one edge attribute.
  • S702 Extract high-level feature information of the M channels of the road image based on the low-level feature information of the M channels by using at least one residual network unit of the neural network.
  • the high-level feature information of the M channels of the road image extracted by the residual network unit includes semantic features, contours, and overall structure.
  • S703 Perform up-sampling processing on the high-level feature information of the M channels through at least one up-sampling layer of the neural network to obtain M probability maps of the same size as the road image.
  • the image can be restored to the original size of the image input to the neural network.
  • M probability maps that are as large as the road image input to the neural network can be obtained.
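The size restoration in step S703 can be illustrated with a nearest-neighbour up-sampling of a single channel (real up-sampling layers are typically learned or bilinear; this sketch only shows how an H×W map grows back toward the input resolution):

```python
def upsample_nearest(feature_map, factor):
    """Repeat each value `factor` times along both axes, enlarging an
    HxW map to (H*factor)x(W*factor)."""
    out = []
    for row in feature_map:
        wide = [v for v in row for _ in range(factor)]  # widen the row
        out.extend([wide] * factor)                     # repeat it vertically
    return out

fm = [[1, 2],
      [3, 4]]
print(upsample_nearest(fm, 2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```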
  • the low-level feature information and the high-level feature information described in the embodiments of the present disclosure are relative concepts under a specific neural network.
  • the features extracted by shallower network layers are low-level feature information relative to the features extracted by deeper network layers, and the latter are high-level feature information.
  • the neural network may further include a normalization layer after the above-mentioned up-sampling layer, and the above-mentioned M probability maps are output through the normalization layer.
  • the feature map of the road image is obtained after up-sampling, and the value of each pixel in the feature map is normalized so that it falls in the range of 0 to 1, thereby obtaining the M probability maps.
  • one normalization method is to first determine the maximum pixel value in the feature map and then divide the value of each pixel by this maximum, so that the value of each pixel in the feature map falls in the range of 0 to 1.
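The max-division normalization described here can be sketched as follows (this mirrors the scheme in the text; a softmax across channels would be a common alternative):

```python
def normalize_by_max(feature_map):
    """Divide every pixel by the map's maximum so all values fall in [0, 1]."""
    peak = max(v for row in feature_map for v in row)
    return [[v / peak for v in row] for row in feature_map]

fm = [[2.0, 4.0],
      [8.0, 1.0]]
print(normalize_by_max(fm))  # -> [[0.25, 0.5], [1.0, 0.125]]
```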
  • before the road surface image is input into the neural network in step S202, de-distortion processing may first be performed on the road surface image to further improve the accuracy of the neural network output.
  • FIG. 8 is a module structure diagram of a lane line attribute detection device provided by an embodiment of the disclosure. As shown in FIG. 8, the device includes: a first acquisition module 801, a first determination module 802, and a second determination module 803.
  • the first acquisition module 801 is used to acquire road images collected by an image acquisition device installed on a smart device.
  • the first determining module 802 is configured to determine a probability map according to the road image, the probability map including at least two of a color attribute probability map, a line type attribute probability map, and an edge attribute probability map, where the color attribute probability maps number N1, the line type attribute probability maps number N2, and the edge attribute probability maps number N3, N1, N2, and N3 being integers greater than 0; each color attribute probability map represents the probability that each point in the road image belongs to the corresponding color, each line type attribute probability map represents the probability that each point in the road image belongs to the corresponding line type, and each edge attribute probability map represents the probability that each point in the road image belongs to the corresponding edge.
  • the second determining module 803 is configured to determine the lane line attributes in the road image according to the probability map.
  • the colors corresponding to the N1 color attribute probability maps include at least one of the following: white, yellow, and blue.
  • the line type corresponding to the N2 line type attribute probability maps includes at least one of the following: a dashed line, a solid line, a double dashed line, a double solid line, a dashed solid line, a solid dashed line, a triple dashed line, and a dashed line.
  • edges corresponding to the N3 edge attribute probability graphs include at least one of the following: curb-shaped edges, fence-shaped edges, wall or flowerbed-shaped edges, virtual edges, and non-edges.
  • the probability map includes a first attribute probability map and a second attribute probability map, where the first attribute probability map and the second attribute probability map are probability maps of two different attributes among the color attribute probability map, the line type attribute probability map, and the edge attribute probability map.
  • the second determining module 803 is specifically configured to: for each point at a lane line position in the road image, determine the point's probability value at the corresponding position in each of the L first attribute probability maps; for this point, use the value of the first attribute corresponding to the first attribute probability map with the largest probability value as the value of the first attribute of the point; determine the value of the first attribute of the lane line according to the value of the first attribute of each point at the position of the lane line in the road image; for each point at the position of the lane line in the road image, determine the point's probability value at the corresponding position in each of the S second attribute probability maps; for this point, use the value of the second attribute corresponding to the second attribute probability map with the largest probability value as the value of the second attribute of the point; determine the value of the second attribute of the lane line according to the value of the second attribute of each point at the position of the lane line in the road image; and combine the value of the first attribute of the lane line with the value of the second attribute of the lane line.
  • the second determining module 803 determining the value of the first attribute of the lane line according to the first attribute of each point at the position of the lane line in the road image includes: in response to the values of the first attribute of the points at the lane line position being different, taking the value of the first attribute shared by the largest number of points at the lane line position as the value of the first attribute of the lane line.
  • the second determining module 803 determining the value of the first attribute of the lane line according to the first attribute of each point at the position of the lane line in the road image includes: in response to the values of the first attribute of the points at the lane line position being the same, taking that common value as the value of the first attribute of the lane line.
  • the second determining module 803 determining the value of the second attribute of the lane line according to the value of the second attribute of each point at the position of the lane line in the road image includes: in response to the values of the second attribute of the points at the lane line position being different, taking the value of the second attribute shared by the largest number of points at the lane line position as the value of the second attribute of the lane line.
  • the second determining module 803 determining the value of the second attribute of the lane line according to the value of the second attribute of each point at the position of the lane line in the road image includes: in response to the values of the second attribute of the points at the lane line position being the same, taking that common value as the value of the second attribute of the lane line.
  • the probability map further includes a third attribute probability map, the third attribute probability map being one of a color attribute probability map, a line type attribute probability map, and an edge attribute probability map; the third attribute probability map, the second attribute probability map, and the first attribute probability map are probability maps of pairwise different attributes.
  • the second determining module 803 is further configured to: before combining the value of the first attribute of the lane line with the value of the second attribute of the lane line, for each point at the lane line position in the road image, determine the point's probability value at the corresponding position in each of the U third attribute probability maps; for this point, use the value of the third attribute corresponding to the third attribute probability map with the largest probability value as the value of the third attribute of the point; and determine the value of the third attribute of the lane line according to the value of the third attribute of each point at the position of the lane line in the road image; where, when the third attribute probability map is a color attribute probability map, U is equal to N1 and the third attribute is a color attribute; when the third attribute probability map is a line type attribute probability map, U is equal to N2 and the third attribute is a line type attribute; when the third attribute probability map is an edge attribute probability map, U is equal to N3 and the third attribute is an edge attribute. The second determining module 803 then combines the value of the first attribute, the value of the second attribute, and the value of the third attribute of the lane line.
  • the second determining module 803 determining the value of the third attribute of the lane line according to the value of the third attribute of each point at the position of the lane line in the road image includes: in response to the values of the third attribute of the points at the lane line position being different, taking the value of the third attribute shared by the largest number of points at the lane line position as the value of the third attribute of the lane line.
  • the second determining module 803 determining the value of the third attribute of the lane line according to the value of the third attribute of each point at the position of the lane line in the road image includes: in response to the values of the third attribute of the points at the lane line position being the same, taking that common value as the value of the third attribute of the lane line.
  • the first determining module 802 is specifically configured to input the road image into a neural network, and the neural network outputs the probability map; the neural network is supervisedly trained in advance using a road surface training image set that includes color type, line type, and edge type label information.
  • FIG. 9 is a module structure diagram of a lane line attribute detection apparatus provided by an embodiment of the present disclosure. As shown in FIG. 9, the apparatus further includes: a preprocessing module 804 configured to perform de-distortion processing on the road image.
  • the division of the various modules of the above apparatus is only a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separate.
  • these modules may all be implemented in the form of software called by a processing element, or all in the form of hardware; some modules may also be implemented in the form of software called by a processing element while others are implemented in the form of hardware.
  • the determining module may be a separately established processing element, or it may be integrated into a certain chip of the above-mentioned device for implementation.
  • each step of the above method or each of the above modules can be completed by hardware integrated logic circuits in the processor element or instructions in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, for example one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs).
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • the electronic device 1000 may include a processor 1001, a memory 1002, a communication interface 1003, and a system bus 1004.
  • the memory 1002 and the communication interface 1003 are connected to the processor 1001 through the system bus 1004 and communicate with one another; the memory 1002 is used to store computer execution instructions, the communication interface 1003 is used to communicate with other devices, and the processor 1001, when executing the computer program, implements the lane line attribute detection method provided by the embodiments of the present disclosure.
  • the system bus mentioned in FIG. 10 may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • PCI peripheral component interconnect standard
  • EISA extended industry standard architecture
  • the system bus can be divided into address bus, data bus, control bus, etc.
  • the communication interface is used to realize the communication between the database access device and other devices (such as client, read-write library and read-only library).
  • the memory may include random access memory (RAM), and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor including a central processing unit CPU, a network processor (NP), etc.; it can also be a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other Programming logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • FIG. 11 is a schematic structural diagram of a smart device provided by an embodiment of the disclosure. As shown in FIG. 11, the smart device 1100 of this embodiment includes: an image acquisition device 1101, a processor 1102, and a memory 1103.
  • the image acquisition device 1101 takes a road image and sends the road image to the processor 1102.
  • the processor 1102 calls the memory 1103 and executes the program instructions in the memory 1103 to detect the lane line attributes in the acquired road image, and outputs prompt information or performs driving control on the smart device according to the detected lane line attributes.
  • the smart device in this embodiment is a smart device capable of driving on the road, such as a smart driving vehicle, a robot, a blind guide device, etc.
  • the smart driving vehicle may be an autonomous driving vehicle or a vehicle with a driving assistance system.
  • the prompt information may include a lane departure warning prompt, a lane keeping prompt, a change of driving speed, a change of driving direction, lane keeping, a change of vehicle light state, etc.
  • the above driving control may include: braking, changing the driving speed, changing the driving direction, lane keeping, changing the vehicle light state, switching the driving mode, etc., where the driving mode switching may be switching between assisted driving and automatic driving, for example, switching from assisted driving to automatic driving.
  • FIG. 12 is a schematic flowchart of the intelligent driving method provided by the embodiments of the present disclosure.
  • the embodiments of the present disclosure also provide an intelligent driving method for the smart device described in FIG. 11. As shown in FIG. 12, the method includes:
  • S1201. Acquire a road image.
  • S1202. Detect the lane line attributes in the acquired road image using the lane line attribute detection method described above.
  • S1203. Output prompt information or perform driving control on the smart device according to the detected lane line attributes.
  • the execution subject of this embodiment is a mobile smart device, such as a smart driving vehicle, a robot, a blind guide device, etc., where the smart driving vehicle may be an autonomous driving vehicle or a vehicle with an assisted driving system.
  • the intelligent driving in this embodiment includes assisted driving, automatic driving, and/or driving mode switching between assisted driving and automatic driving.
  • the lane line attribute detection result of the road image is obtained by the lane line attribute detection method of the foregoing embodiments; for the specific process, refer to the description of the foregoing embodiments, which will not be repeated here.
  • the smart device executes the aforementioned lane line attribute detection method, obtains the lane line attribute detection result of the road image, and outputs prompt information and/or performs movement control according to the lane line attribute detection result of the road image.
  • the prompt information may include a lane departure warning prompt, a lane keeping prompt, a change of driving speed, a change of driving direction, lane keeping, a change of vehicle light state, etc.
  • the above driving control may include: braking, changing the driving speed, changing the driving direction, and lane keeping.
  • the smart device obtains the lane line attribute detection result of the road image and outputs prompt information or performs driving control on the smart device according to the lane line attribute detection result, thereby improving the safety and reliability of the smart device.
  • an embodiment of the present disclosure further provides a non-volatile storage medium storing instructions which, when run on a computer, cause the computer to execute the lane line attribute detection method provided by the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a chip for executing instructions, and the chip is configured to execute the lane line attribute detection method provided by the embodiment of the present disclosure.
  • the embodiments of the present disclosure further provide a program product including a computer program stored in a storage medium; at least one processor can read the computer program from the storage medium, and when the at least one processor executes the computer program, the lane line attribute detection method provided by the embodiments of the present disclosure can be implemented.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • “And/or” describes the association relationship of the associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship; in the formula, the character “/” indicates that the associated objects before and after are in a “division” relationship.
  • “The following at least one item (a)” or similar expressions refers to any combination of these items, including any combination of a single item (a) or plural items (a).
  • at least one of a, b, or c may mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
  • the size of the sequence numbers of the foregoing processes does not imply their execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.


Abstract

Embodiments of the present disclosure provide a lane line attribute detection method and apparatus, an electronic device, and a smart device. The method includes: acquiring a road image captured by an image acquisition apparatus installed on a smart device; determining probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, where each color attribute probability map represents the probability that a point in the road image belongs to the corresponding color, each line type attribute probability map represents the probability that a point in the road image belongs to the corresponding line type, and each edge attribute probability map represents the probability that a point in the road image belongs to the corresponding edge type; and determining lane line attributes in the road image according to the probability maps.

Description

Lane Line Attribute Detection

Technical Field

Embodiments of the present disclosure relate to computer technology, and in particular to a lane line attribute detection method and apparatus, an electronic device, and a smart device.

Background

Assisted driving and automatic driving are two important technologies in the field of intelligent driving. Assisted or automatic driving can reduce the spacing between vehicles, reduce the occurrence of traffic accidents, and lighten the driver's burden, and therefore plays an important role in the field of intelligent driving. Both assisted driving technology and automatic driving technology require lane line attribute detection, through which the types of the lane lines on the road surface, such as white solid lines and white dashed lines, can be identified. Based on the lane line attribute detection results, path planning, path departure warning, and traffic flow analysis can be performed, and a reference can be provided for precise navigation.

Therefore, lane line attribute detection is of great significance to assisted driving and automatic driving, and how to perform accurate and efficient lane line attribute detection is an important topic worth studying.

Summary

Embodiments of the present disclosure provide a technical solution for lane line attribute detection.
A first aspect of the embodiments of the present disclosure provides a lane line attribute detection method, including:

acquiring a road image captured by an image acquisition apparatus installed on a smart device; determining probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, where there are N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps, N1, N2, and N3 all being integers greater than 0; each color attribute probability map represents the probability that each point in the road image belongs to the color corresponding to that color attribute probability map, each line type attribute probability map represents the probability that each point in the road image belongs to the line type corresponding to that line type attribute probability map, and each edge attribute probability map represents the probability that each point in the road image belongs to the edge type corresponding to that edge attribute probability map; and determining lane line attributes in the road image according to the probability maps.

A second aspect of the embodiments of the present disclosure provides a lane line attribute detection apparatus, including:

a first acquisition module configured to acquire a road image captured by an image acquisition apparatus installed on a smart device; a first determining module configured to determine probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, where there are N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps, N1, N2, and N3 all being integers greater than 0; each color attribute probability map represents the probability that each point in the road image belongs to the color corresponding to that color attribute probability map, each line type attribute probability map represents the probability that each point in the road image belongs to the line type corresponding to that line type attribute probability map, and each edge attribute probability map represents the probability that each point in the road image belongs to the edge type corresponding to that edge attribute probability map; and a second determining module configured to determine lane line attributes in the road image according to the probability maps.

A third aspect of the embodiments of the present disclosure provides an electronic device, including:

a memory configured to store program instructions; and a processor configured to call and execute the program instructions in the memory to perform the method steps described in the first aspect above.

A fourth aspect of the embodiments of the present disclosure provides an intelligent driving method for a smart device, including:

acquiring a road image; detecting lane line attributes in the acquired road image using the lane line attribute detection method described in the first aspect above; and outputting prompt information or performing driving control on the smart device according to the detected lane line attributes.

A fifth aspect of the embodiments of the present disclosure provides a smart device, including:

an image acquisition apparatus configured to acquire a road image; a memory configured to store program instructions which, when executed, implement the lane line attribute detection method described in the first aspect above; and a processor configured to execute the program instructions stored in the memory on the road image acquired by the image acquisition apparatus, so as to detect lane line attributes in the road image, and to output prompt information or perform driving control on the smart device according to the detected lane line attributes.

A sixth aspect of the embodiments of the present disclosure provides a non-volatile readable storage medium storing a computer program, the computer program being used to perform the method steps described in the first aspect above.

In the lane line attribute detection method and apparatus, electronic device, and smart device provided by the embodiments of the present disclosure, lane line attributes are divided into three dimensions: color, line type, and edge. Probability maps of each point of the road image in these three dimensions can then be obtained, and the lane line attributes in the road image can be determined based on at least two of the three kinds of attribute probability maps. Since each of the three kinds of attribute probability maps obtained in the above process targets lane line attributes of a single dimension, determining each kind of probability map from the road image can be regarded as single-task detection, which reduces the complexity of the detection task. The lane line attributes in the road image are then determined according to the result of each detection task, that is, the detection results are fused to obtain the lane line attributes. Therefore, when there are many kinds of lane line attributes, or when the lane line attributes need to be determined at a fine granularity, the lane line attribute detection method provided by the embodiments of the present disclosure, which detects different attributes separately and then fuses the detection results, improves the accuracy and robustness of lane line attribute prediction. Applying the above method to scenes of high complexity can therefore yield more accurate lane line attribute detection results.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a scenario of a lane line attribute detection method provided by an embodiment of the present disclosure.

FIG. 2 is a schematic flowchart of a lane line attribute detection method provided by an embodiment of the present disclosure.

FIG. 3 is a schematic flowchart of a lane line attribute detection method provided by another embodiment of the present disclosure.

FIG. 4 is a schematic flowchart of a lane line attribute detection method provided by still another embodiment of the present disclosure.

FIG. 5 is a schematic flowchart of a method for training a neural network for lane line attribute detection provided by an embodiment of the present disclosure.

FIG. 6 is a schematic structural diagram of a convolutional neural network provided by an embodiment of the present disclosure.

FIG. 7 is a schematic flowchart of road image processing performed by a neural network for lane line attribute detection provided by an embodiment of the present disclosure.

FIG. 8 is a module structure diagram of a lane line attribute detection apparatus provided by an embodiment of the present disclosure.

FIG. 9 is a module structure diagram of a lane line attribute detection apparatus provided by another embodiment of the present disclosure.

FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.

FIG. 11 is a schematic structural diagram of a smart device provided by an embodiment of the present disclosure.

FIG. 12 is a schematic flowchart of an intelligent driving method provided by an embodiment of the present disclosure.

Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 is a schematic diagram of a scenario of the lane line attribute detection method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method is applicable to a vehicle 120 on which an image acquisition apparatus 110 is installed. The image acquisition apparatus 110 may be a device with a shooting function installed on the vehicle 120, such as a camera or a driving recorder. When the vehicle is on a road, the image acquisition apparatus on the vehicle captures road images, and the lane line attributes of the road on which the vehicle is located are detected based on the method provided by the present disclosure, so that the obtained detection results can be applied to assisted driving or automatic driving, for example, path planning, path departure warning, and traffic flow analysis.

In some examples, the lane line attribute detection method provided by the present disclosure is also applicable to smart devices that need to perform road recognition, such as robots and blind guide devices.

FIG. 2 is a schematic flowchart of the lane line attribute detection method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes steps S201-S203.

S201. Acquire a road image captured by an image acquisition apparatus installed on a smart device.

Taking the smart device being a vehicle as an example, the image acquisition apparatus installed on the vehicle can capture road images of the road on which the vehicle travels in real time. Then, through the subsequent steps, continuously updated lane attribute detection results can be obtained from the road images captured by the image acquisition apparatus.

S202. Determine probability maps according to the road image.

The probability maps include at least two of: color attribute probability maps, line type attribute probability maps, and edge attribute probability maps.

There are N1 color attribute probability maps; each corresponds to one color, and the N1 maps correspond to N1 colors. There are N2 line type attribute probability maps; each corresponds to one line type, and the N2 maps correspond to N2 line types. There are N3 edge attribute probability maps; each corresponds to one edge type, and the N3 maps correspond to N3 edge types. Each color attribute probability map represents the probability that each point in the road image belongs to the corresponding color, each line type attribute probability map represents the probability that each point in the road image belongs to the corresponding line type, and each edge attribute probability map represents the probability that each point in the road image belongs to the corresponding edge type. N1, N2, and N3 are all integers greater than 0.

In some examples, the probability maps may be determined by a neural network. Specifically, the road image is input into the neural network, and the neural network outputs the probability maps. The neural network may include, but is not limited to, a convolutional neural network.

In the embodiments of the present disclosure, lane line attributes are split into three dimensions: color, line type, and edge, and the neural network predicts probability maps of each point of the road image in at least two of the above three dimensions.
In one example, in the color dimension, the N1 colors may include at least one of: white, yellow, and blue. In addition to these three colors, the color dimension may also include two further outcomes, no lane line and other color, each of which is also treated as a color. No lane line indicates that a pixel of the road image does not belong to a lane line, and other color indicates that the color of a point of the road image is a color other than white, yellow, and blue.

Table 1 is an example of the color types in the color dimension. As shown in Table 1, the color dimension may include 5 color types, so the value of N1 is 5.
Table 1

Type number: 0 = no lane line; 1 = other color; 2 = white; 3 = yellow; 4 = blue
In one example, in the line type dimension, the N2 line types may include at least one of: dashed line, solid line, double dashed line, double solid line, dashed-solid line, solid-dashed line, triple dashed line, and dashed-solid-dashed line. In addition to these line types, the line type dimension may also include two further outcomes, no lane line and other line type, each of which is also treated as a line type. No lane line indicates that a point of the road image does not belong to a lane line, and other line type indicates that the line type of a point of the road image is a line type other than those listed above. The dashed-solid line may be one in which, from left to right, the first line is dashed and the second is solid; correspondingly, the solid-dashed line may be one in which, from left to right, the first line is solid and the second is dashed.

Table 2 is an example of the line types in the line type dimension. As shown in Table 2, the line type dimension may include 10 line types, so the value of N2 is 10.

Table 2
Figure PCTCN2020076036-appb-000001
In one example, in the edge dimension, the N3 edge types may include at least one of: curb-type edge, fence-type edge, wall or flower-bed-type edge, virtual edge, and non-edge. Non-edge indicates that a point of the road image does not belong to an edge but belongs to a lane line. In addition to these edge types, the edge dimension may also include two further outcomes, no lane line and other edge, each of which is also treated as an edge type. No lane line indicates that a point of the road image belongs neither to a lane line nor to an edge, and other edge indicates that a point of the road image belongs to an edge type other than those listed above.

Table 3 is an example of the edge types in the edge dimension. As shown in Table 3, the edge dimension may include 7 edge types, so the value of N3 is 7.

Table 3
Figure PCTCN2020076036-appb-000002
Taking the attribute types shown in Table 1, Table 2, and Table 3 above as examples, in this step, after the road image is input into the neural network, the neural network may output 5 color attribute probability maps, 10 line type attribute probability maps, and 7 edge attribute probability maps. Each of the 5 color attribute probability maps represents the probability that each point in the road image belongs to one of the colors in Table 1, each of the 10 line type attribute probability maps represents the probability that each point in the road image belongs to one of the line types in Table 2, and each of the 7 edge attribute probability maps represents the probability that each point in the road image belongs to one of the edge types in Table 3.

Taking the color attribute as an example, assume the numbering shown in Table 1 is used and the 5 color attribute probability maps are probability map 0, probability map 1, probability map 2, probability map 3, and probability map 4. The correspondence between the color attribute probability maps and the color types in Table 1 may then be as shown in Table 4.
Table 4

Color attribute probability map: probability map 0 = no lane line; probability map 1 = other color; probability map 2 = white; probability map 3 = yellow; probability map 4 = blue
Based on the correspondence shown in Table 4 above, probability map 2, for example, may identify the probability that each point in the road image is white. Assume the road image is represented by a 200*200 matrix. After this matrix is input into the neural network, a 200*200 matrix may be output, in which the value of each element is the probability that the corresponding position in the road image is white. For example, if the value at row 1, column 1 of the 200*200 matrix output by the neural network is 0.4, the probability that the point at row 1, column 1 of the road image is white is 0.4. The matrix output by the neural network can then be represented in the form of a color attribute probability map.
S203. Determine the lane line attributes in the road image according to the probability maps.

It is worth noting that in the embodiments of the present disclosure, color attribute probability maps, line type attribute probability maps, and edge attribute probability maps are three kinds of probability maps, and when one kind is used, multiple probability maps of that kind may be used at the same time. For example, when color attribute probability maps are used, all N1 color attribute probability maps may be used together to determine the color attribute of the road image.

In one optional mode, the probability maps may be two of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, that is, two of the three kinds may be used to determine the lane line attributes in the road image.

In this mode, when determining the lane line attributes in the road image, the number of lane line attributes is the number of combinations of the attribute counts corresponding to the two kinds of probability maps used, and each lane line attribute is a set of one attribute from each of the two kinds of probability maps used.

Exemplarily, if color attribute probability maps and line type attribute probability maps are used to determine the lane line attributes in the road image, with N1 color attribute probability maps and N2 line type attribute probability maps, the number of lane line attributes determined in the road image is N1*N2. One lane line attribute is a set of one color attribute and one line type attribute, that is, one lane line attribute includes one color attribute and one line type attribute. For example, a lane line attribute of white dashed line is the set of white and dashed line.

In another optional mode, the probability maps may be all three of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, that is, all three kinds may be used simultaneously to determine the lane line attributes in the road image.

In this mode, when determining the lane line attributes in the road image, the number of lane line attributes is the number of combinations of the attribute counts corresponding to the three kinds of probability maps used, and each lane line attribute is a combination of one attribute from each of the three kinds of probability maps used.

Exemplarily, with N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps, the number of lane line attributes determined in the road image is N1*N2*N3. One lane line attribute is a combination of one color attribute, one line type attribute, and one edge attribute, that is, one lane line attribute includes one color attribute, one line type attribute, and one edge attribute. For example, a lane line attribute of a white dashed lane line is the combination of white, dashed line, and non-edge.

It is worth noting that N1*N2*N3 refers to all the combinations that the embodiments of the present disclosure can support; in specific implementations, some combinations may not appear in actual use.

The specific implementation of the above modes will be described in detail in the following embodiments.

In this embodiment, lane line attributes are divided into three dimensions: color, line type, and edge, so that three kinds of attribute probability maps of each point of the road image in these three dimensions can be obtained, and the lane line attributes in the road image can be determined based on at least two of them. Since each of the three kinds of attribute probability maps obtained in the above process targets lane line attributes of a single dimension, determining each kind of probability map from the road image can be regarded as single-task detection, which reduces the complexity of the detection task. The lane line attributes in the road image are then determined from the result of each detection task, that is, the detection results are combined to obtain the lane line attributes. Therefore, when there are many kinds of lane line attributes, or when the lane line attributes need to be determined at a fine granularity, the lane line attribute detection method provided by the embodiments of the present disclosure, which detects different attributes separately and then combines the detection results, improves the accuracy and robustness of lane line attribute prediction. Applying the above process to scenes of high complexity can yield more accurate lane line attribute detection results. In addition, the present disclosure treats the edge as an attribute dimension, so that lane types can be accurately detected not only in structured road scenes marked with lane marking lines; in scenes where lane marking lines are missing or absent, for example when driving on country roads, the method of this embodiment can also accurately detect the various edge types.
On the basis of the above embodiments, this embodiment specifically describes the process of using the probability maps to determine the lane line attributes in the road image.

In one optional mode, two of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps may be used to determine the lane line attributes in the road image.

In one example, the probability maps in step S203 above include first attribute probability maps and second attribute probability maps, the first and second attribute probability maps being two of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, with the first attribute probability maps differing from the second attribute probability maps.
FIG. 3 is a schematic flowchart of a lane line attribute detection method provided by another embodiment of the present disclosure. As shown in FIG. 3, when the probability maps include first attribute probability maps and second attribute probability maps, the process of determining the lane line attributes in the road image according to the probability maps in step S203 includes the following steps.

S301. For each point at the position of a lane line in the road image, determine the respective probability values of the point at the corresponding positions in the L first attribute probability maps.

S302. For the point, take the value of the first attribute corresponding to the first attribute probability map with the largest probability value as the value of the first attribute of the point.

S303. Determine the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image.

Before step S301, the road image may be preprocessed to obtain the lane lines in the road image. For example, the road image may be input into a trained neural network, which outputs the lane line results in the road image. As another example, the road image may be input into a trained semantic segmentation network, which outputs the lane line segmentation results in the road image. The method shown in FIG. 3 is then used to perform attribute processing on the lane lines and calculate their attributes, thereby improving the accuracy of lane line recognition.

Steps S301-S303 can determine the value of the first attribute of one lane line in the road image. The first attribute is the attribute corresponding to the first attribute probability maps; exemplarily, if the first attribute probability maps are color attribute probability maps, the first attribute is the color attribute, and the value of the first attribute may be white, yellow, blue, other color, and so on.

Taking obtaining the probability maps by a neural network as an example, after the road image is input into the neural network, the neural network may output L first attribute probability maps. A point of a lane line in the road image has a corresponding probability value in each first attribute probability map; the larger the probability value, the greater the likelihood that the point belongs to the attribute corresponding to that probability map. Therefore, for the point, the probability values at the corresponding positions in the L first attribute probability maps may be compared, and the value of the first attribute corresponding to the first attribute probability map with the largest probability value is taken as the value of the first attribute of the point.

Exemplarily, assume the first attribute probability maps are color attribute probability maps, the first attribute is the color attribute, and L is 5, that is, there are 5 color attribute probability maps, namely probability map 0, probability map 1, probability map 2, probability map 3, and probability map 4 shown in Table 4, each corresponding to one color attribute. If a point of a lane line in the road image has its largest probability value in probability map 1, the value of the color attribute of the point can be determined to be the color attribute corresponding to probability map 1.

Using the above method, the values of the first attribute of the points at the position of a lane line in the road image can be obtained; on this basis, the value of the first attribute of the lane line can be determined according to the values of the first attribute of the points.

For example, if the values of the first attribute of the points at the position of the lane line differ, the first attribute value shared by the largest number of points at the position of the lane line may be taken as the value of the first attribute of the lane line.

Exemplarily, assume the first attribute is the color attribute. If, among the points of the lane line, the points whose first attribute value is white account for 80% of the total number of points, the points whose first attribute value is yellow account for 17%, and the points whose first attribute value is another color account for 3%, then white may be taken as the value of the first attribute of the lane line, that is, the value of its color attribute.

For another example, if the values of the first attribute of the points at the position of the lane line are the same, the value of the first attribute of the points at the position of the lane line may be taken as the value of the first attribute of the lane line.

Exemplarily, assume the first attribute is the color attribute and the first attribute value of all points at the position of the lane line is yellow; then yellow may be taken as the value of the first attribute of the lane line, that is, the value of its color attribute.
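The per-point argmax of steps S301-S302 and the majority vote of step S303 can be sketched as follows. This is a minimal illustration: the 2x2 map size, the probability values, and the `lane_points` coordinates are made up for the example and are not taken from the disclosure:

```python
from collections import Counter

# Hypothetical color attribute probability maps (L = 5), each a tiny 2x2
# "image" indexed as maps[k][row][col]; index k follows Table 4.
COLOR_NAMES = ["no lane line", "other color", "white", "yellow", "blue"]
maps = [
    [[0.1, 0.1], [0.1, 0.1]],  # probability map 0: no lane line
    [[0.1, 0.1], [0.1, 0.1]],  # probability map 1: other color
    [[0.6, 0.6], [0.2, 0.6]],  # probability map 2: white
    [[0.1, 0.1], [0.5, 0.1]],  # probability map 3: yellow
    [[0.1, 0.1], [0.1, 0.1]],  # probability map 4: blue
]

# Hypothetical (row, col) coordinates of the points on one lane line.
lane_points = [(0, 0), (0, 1), (1, 0), (1, 1)]

# S301-S302: for each point, pick the attribute whose map has the largest value.
def point_attribute(row, col):
    probs = [m[row][col] for m in maps]
    return COLOR_NAMES[probs.index(max(probs))]

point_values = [point_attribute(r, c) for r, c in lane_points]

# S303: the value shared by the most points becomes the lane line's attribute.
lane_color = Counter(point_values).most_common(1)[0][0]
print(lane_color)  # three points vote "white", one votes "yellow" -> "white"
```

The same argmax-then-vote pattern applies unchanged to the second attribute (steps S304-S306 with S maps) and the third attribute (steps S401-S403 with U maps).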
S304. For each point at the position of the lane line in the road image, determine the respective probability values of the point at the corresponding positions in the S second attribute probability maps.

S305. For the point, take the value of the second attribute corresponding to the second attribute probability map with the largest probability value as the value of the second attribute of the point.

S306. Determine the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image.

Steps S304-S306 can determine the value of the second attribute of one lane line in the road image. The second attribute is the attribute corresponding to the second attribute probability maps; exemplarily, if the second attribute probability maps are line type attribute probability maps, the second attribute is the line type attribute, and its value may be solid line, dashed line, double solid line, double dashed line, and so on.

Taking obtaining the probability maps by a neural network as an example, after the road image is input into the neural network, the neural network may output S second attribute probability maps. A point of a lane line in the road image has a corresponding probability value in each second attribute probability map; the larger the probability value, the greater the likelihood that the point belongs to the attribute corresponding to that probability map. Therefore, for the point, the probability values at the corresponding positions in the S second attribute probability maps may be compared, and the value of the second attribute corresponding to the second attribute probability map with the largest probability value is taken as the value of the second attribute of the point.

Exemplarily, assume the second attribute probability maps are line type attribute probability maps, the second attribute is the line type attribute, and S is 10, that is, there are 10 line type attribute probability maps, each corresponding to one line type attribute. If a point of a lane line in the road image has its largest probability value in the first line type attribute probability map, the value of the line type attribute of the point can be determined to be the line type attribute corresponding to the first line type attribute probability map.

Using the above method, the values of the second attribute of the points at the position of a lane line in the road image can be obtained; on this basis, the value of the second attribute of the lane line can be determined according to the values of the second attribute of the points.

For example, if the values of the second attribute of the points at the position of the lane line differ, the second attribute value shared by the largest number of points at the position of the lane line may be taken as the value of the second attribute of the lane line.

Exemplarily, assume the second attribute is the line type attribute. If, among the points of the lane line, the points whose second attribute value is solid line account for 81% of the total number of points, the points whose second attribute value is dashed line account for 15%, and the points whose second attribute value is another line type account for 4%, then solid line may be taken as the value of the second attribute of the lane line, that is, the value of its line type attribute.

For another example, if the values of the second attribute of the points at the position of the lane line are the same, the value of the second attribute of the points at the position of the lane line may be taken as the value of the second attribute of the lane line.

Exemplarily, assume the second attribute is the line type attribute and the second attribute value of all points at the position of the lane line is solid line; then solid line may be taken as the value of the second attribute of the lane line, that is, the value of its line type attribute.
It is worth noting that steps S301-S303 are executed in order and steps S304-S306 are executed in order, but the embodiments of the present disclosure place no restriction on the execution order between S301-S303 and S304-S306: S301-S303 may be executed before S304-S306, S304-S306 may be executed before S301-S303, or S301-S303 and S304-S306 may be executed in parallel.

In steps S301-S306 above, there are L first attribute probability maps and S second attribute probability maps. As described above, there are N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps. The relationship between L, S and the aforementioned N1, N2, N3 is as follows.

When the first attribute probability maps are color attribute probability maps, L equals N1 and the first attribute is the color attribute. When the first attribute probability maps are line type attribute probability maps, L equals N2 and the first attribute is the line type attribute. When the first attribute probability maps are edge attribute probability maps, L equals N3 and the first attribute is the edge attribute. When the second attribute probability maps are color attribute probability maps, S equals N1 and the second attribute is the color attribute. When the second attribute probability maps are line type attribute probability maps, S equals N2 and the second attribute is the line type attribute. When the second attribute probability maps are edge attribute probability maps, S equals N3 and the second attribute is the edge attribute.

Note that since the first attribute probability maps differ from the second attribute probability maps, when the first attribute probability maps are color attribute probability maps, the second attribute probability maps may be line type attribute probability maps or edge attribute probability maps; when the first attribute probability maps are line type attribute probability maps, the second attribute probability maps may be color attribute probability maps or edge attribute probability maps; and when the first attribute probability maps are edge attribute probability maps, the second attribute probability maps may be color attribute probability maps or line type attribute probability maps.
S307. Combine the value of the first attribute of the lane line with the value of the second attribute of the lane line.

S308. Take the value of the combined attribute as the value of the attribute of the lane line.

For example, after the values of the first attribute and the second attribute of a lane line are obtained, the value of the first attribute and the value of the second attribute may be combined, so that the value of the combined attribute can be taken as the value of the attribute of the lane line. The combination may, for example, append the value of the second attribute after the value of the first attribute, or append the value of the first attribute after the value of the second attribute.

Exemplarily, assume the first attribute is the color attribute and the second attribute is the line type attribute, and the preceding steps yield white as the value of the first attribute and solid line as the value of the second attribute of a lane line in the road image; appending the value of the second attribute after the value of the first attribute yields "white solid line", which is the value of the attribute of the lane line.
In one example, the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps may be used simultaneously to determine the lane line attributes in the road image.

In this mode, in addition to the aforementioned first and second attribute probability maps, the probability maps obtained in step S203 further include third attribute probability maps. The third attribute probability maps are one of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, and the third, second, and first attribute probability maps are pairwise different in attribute.

FIG. 4 is a schematic flowchart of a lane line attribute detection method provided by still another embodiment of the present disclosure. As shown in FIG. 4, when the probability maps include both the first and second attribute probability maps and the third attribute probability maps, the following steps may also be performed before the values of the first attribute and the second attribute are combined in step S307.

S401. For each point at the position of the lane line in the road image, determine the respective probability values of the point at the corresponding positions in the U third attribute probability maps.

S402. For the point, take the value of the third attribute corresponding to the third attribute probability map with the largest probability value as the value of the third attribute of the point.

S403. Determine the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image.

Steps S401-S403 can determine the value of the third attribute of one lane line in the road image. The third attribute is the attribute corresponding to the third attribute probability maps; exemplarily, if the third attribute probability maps are edge attribute probability maps, the third attribute is the edge attribute, and its value may be curb-type edge, fence-type edge, virtual edge, and so on.

Taking obtaining the probability maps by a neural network as an example, after the road image is input into the neural network, the neural network may output U third attribute probability maps. A point of a lane line in the road image has a corresponding probability value in each third attribute probability map; the larger the probability value, the greater the likelihood that the point belongs to the attribute corresponding to that probability map. Therefore, for the point, the probability values at the corresponding positions in the U third attribute probability maps may be compared, and the value of the third attribute corresponding to the third attribute probability map with the largest probability value is taken as the value of the third attribute of the point.

Exemplarily, assume the third attribute probability maps are edge attribute probability maps, the third attribute is the edge attribute, and U is 7, that is, there are 7 edge attribute probability maps, each corresponding to one edge attribute. If a point of a lane line in the road image has its largest probability value in the 7th edge attribute probability map, the value of the edge attribute of the point can be determined to be the edge attribute corresponding to the 7th edge attribute probability map.

Using the above method, the values of the third attribute of the points at the position of a lane line in the road image can be obtained; on this basis, the value of the third attribute of the lane line can be determined according to the values of the third attribute of the points.

For example, if the values of the third attribute of the points at the position of the lane line differ, the third attribute value shared by the largest number of points at the position of the lane line may be taken as the value of the third attribute of the lane line.

Exemplarily, assume the third attribute is the edge attribute. If, among the points of the lane line, the points whose third attribute value is curb-type edge account for 82% of the total number of points, the points whose third attribute value is virtual edge account for 14%, and the points whose third attribute value is non-edge account for 4%, then curb-type edge may be taken as the value of the third attribute of the lane line, that is, the value of its edge attribute.

For another example, if the values of the third attribute of the points at the position of the lane line are the same, the value of the third attribute of the points at the position of the lane line may be taken as the value of the third attribute of the lane line.

Exemplarily, assume the third attribute is the edge attribute and the third attribute value of all points at the position of the lane line is curb-type edge; then curb-type edge may be taken as the value of the third attribute of the lane line, that is, the value of its edge attribute.

It is worth noting that, in specific implementations, steps S401-S403 are executed in order, but the embodiments of the present disclosure place no restriction on the execution order between S401-S403 and S301-S303 or S304-S306. Exemplarily, S301-S303 may be executed first, then S304-S306, then S401-S403; or S304-S306 first, then S301-S303, then S401-S403; or S401-S403 first, then S304-S306, then S301-S303; or S301-S303, S304-S306, and S401-S403 may be executed in parallel.

In steps S401-S403 above, there are U third attribute probability maps. As described above, there are N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps. The relationship between U and the aforementioned N1, N2, N3 is as follows.

When the third attribute probability maps are color attribute probability maps, U equals N1 and the third attribute is the color attribute. When the third attribute probability maps are line type attribute probability maps, U equals N2 and the third attribute is the line type attribute. When the third attribute probability maps are edge attribute probability maps, U equals N3 and the third attribute is the edge attribute.

In one example, when the probability maps include the first, second, and third attribute probability maps, when the value of the first attribute and the value of the second attribute of a lane line are combined in step S307, specifically the value of the first attribute of the lane line, the value of the second attribute of the lane line, and the value of the third attribute of the lane line may be combined.

Exemplarily, the combination may append the value of the third attribute after the values of the second attribute and the first attribute, or place the value of the third attribute before the values of the second attribute and the first attribute.

Exemplarily, assume the first attribute is the color attribute, the second attribute is the line type attribute, and the third attribute is the edge attribute, and the preceding process yields white as the value of the first attribute, solid line as the value of the second attribute, and non-edge as the value of the third attribute of a lane line in the road image; appending the value of the third attribute after the values of the second and first attributes yields "white solid line, non-edge". As described above, non-edge indicates that the point does not belong to an edge but to a lane line, so the lane line attribute obtained in this example is a white solid lane line.
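The attribute combination of steps S307-S308 amounts to concatenating the per-dimension results. A minimal sketch (the attribute value strings are illustrative; the disclosure does not prescribe a string format for the combined value):

```python
def combine_attributes(color, line_type, edge=None):
    """Combine per-dimension attribute values into one lane line attribute.

    With two kinds of probability maps the result is a pair such as
    "white solid line"; when a third kind is used, its value (for example
    an edge type) is appended as well.
    """
    parts = [color, line_type]
    if edge is not None:
        parts.append(edge)
    return " ".join(parts)

print(combine_attributes("white", "solid line"))              # two-attribute case
print(combine_attributes("white", "solid line", "non-edge"))  # three-attribute case
```

The order of concatenation is a free choice, as the text notes: the third value may equally be placed before the first two.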
The above describes the process of determining the lane line attributes in the road image according to the probability maps. As described above, the probability maps can be obtained by a neural network: the road image is input into the neural network, and the neural network outputs the probability maps described above.

The following embodiments describe the training and use of the neural network involved in the above embodiments.

Before the neural network is used, it may be supervised-trained in advance with a road training image set annotated with color type, line type, and edge type label information. The road training image set includes a large number of training images, each obtained by capturing an actual road image and annotating it. In one example, multiple actual road images may first be captured in a variety of scenes such as daytime, night, rain, tunnels, straight roads, curves, and strong light; then each actual road image is annotated at the pixel level, that is, the category of each pixel in the actual road image is annotated as color type, line type, and edge type label information, thereby obtaining the training image set. Since the parameters of the neural network are obtained by supervised training on a training image set collected from rich scenes, the trained neural network can obtain accurate lane line attribute detection results not only in simple scenes, for example daytime scenes with good weather and lighting conditions, but also in scenes of high complexity, such as rain, night, tunnels, curves, and strong light.

Since the training image set involved in the above process covers various real scenes, the neural network trained with it has good robustness for lane line attribute detection in various scenes, with short detection time and high detection accuracy.

After the road training image set is obtained, the neural network may be trained according to the following process.
FIG. 5 is a schematic flowchart of the method for training a neural network for lane line attribute detection provided by an embodiment of the present disclosure. As shown in FIG. 5, the training process of the neural network may include the following steps.

S501. The neural network processes an input training image and outputs predicted color attribute probability maps, predicted line type attribute probability maps, and predicted edge attribute probability maps of the training image.

The training image is included in the road training image set described above.

The predicted color attribute probability maps, predicted line type attribute probability maps, and predicted edge attribute probability maps are those currently actually output by the neural network.

S502. For each point at the position of a lane line in the training image, determine the value of the color attribute, the value of the line type attribute, and the value of the edge attribute of the point, respectively.

S503. Determine the predicted color type, predicted line type, and predicted edge type of the lane line according to the values of the color attribute, line type attribute, and edge attribute of the points at the position of the lane line in the training image, respectively.

The predicted color type refers to the value of the color attribute of the lane line obtained from the probability maps output by the neural network, the predicted line type refers to the value of the line type attribute of the lane line obtained from the probability maps output by the neural network, and the predicted edge type refers to the value of the edge attribute of the lane line obtained from the probability maps output by the neural network.

In steps S502-S503, processing may be performed separately in the color, line type, and edge dimensions to determine the predicted color type, predicted line type, and predicted edge type of a lane line in the training image.

For the specific method of determining the value of the color attribute of each point at the position of a lane line in the training image, and determining the predicted color type of the lane line according to the values of the color attribute of the points, reference may be made to the aforementioned steps S301-S303, or steps S304-S306, or steps S401-S403, which will not be repeated here.

For the specific method of determining the value of the line type attribute of each point at the position of a lane line in the training image, and determining the predicted line type of the lane line according to the values of the line type attribute of the points, reference may be made to the aforementioned steps S301-S303, or steps S304-S306, or steps S401-S403, which will not be repeated here.

When the neural network determines the line type attribute or edge attribute of each point at the position of a lane line, it judges from the whole road image whether the lane line is dashed or solid, and then gives the probability that a point on the lane line is dashed or solid. This is because each pixel of the feature map the neural network extracts from the road image aggregates information from a large region of the road image, so the line type of the lane line, or the edge type, can be judged.

For the specific method of determining the value of the edge attribute of a point at the position of a lane line in the training image, and determining the predicted edge type of the lane line according to the values of the edge attribute of the points, reference may be made to the aforementioned steps S301-S303, or steps S304-S306, or steps S401-S403, which will not be repeated here.

S504. Obtain a first loss value between the predicted color type of the lane line of the training image and the color type in the color type ground-truth map of the lane line of the training image, a second loss value between the predicted line type of the lane line of the training image and the line type in the line type ground-truth map of the lane line of the training image, and a third loss value between the predicted edge type of the lane line of the training image and the edge type in the edge type ground-truth map of the lane line of the training image.

The color type ground-truth map represents the color of the training image by means of logical algebra and is obtained based on the color type annotation information of the training image. The line type ground-truth map represents the line type of the training image by means of logical algebra and is obtained based on the line type annotation information of the training image. The edge type ground-truth map represents the edge type of the training image by means of logical algebra and is obtained based on the edge type annotation information of the training image.

In one example, a loss function may be used to calculate the first loss value between the predicted color type and the color type of the color type ground-truth map, the second loss value between the predicted line type and the line type of the line type ground-truth map, and the third loss value between the predicted edge type and the edge type of the edge type ground-truth map.

S505. Adjust the network parameter values of the neural network according to the first loss value, the second loss value, and the third loss value.

For example, the network parameters of the neural network may include convolution kernel size, weight information, and the like.

In this step, the above loss values may be back-propagated through the neural network by means of gradient back-propagation to adjust the network parameter values of the neural network.

After this step, one training iteration is completed and new parameter values of the neural network are obtained.

In one example, based on the new parameter values of the neural network, steps S501-S504 may be continued until the first loss value is within a preset loss range, the second loss value is within a preset loss range, and the third loss value is within a preset loss range; at this point the parameter values of the neural network are the optimized parameter values, and the training of the neural network ends.

In another example, based on the new parameter values of the neural network, steps S501-S504 may be continued until the sum of the first, second, and third loss values is within another preset loss range; at this point the parameter values of the neural network are the optimized parameter values, and the training of the neural network ends.

In still another example, whether the training of the neural network is finished may also be judged based on a gradient descent algorithm or other algorithms common in the neural network field.

Exemplarily, one training image may be used to train the neural network each time, or multiple training images may be used to train the neural network at a time.
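The two stopping criteria described above (every per-dimension loss within a preset range, or the sum of the three losses within another preset range) can be sketched as a simple check. The threshold values below are illustrative placeholders; the disclosure does not specify concrete ranges:

```python
def training_finished(loss_color, loss_line, loss_edge,
                      per_loss_range=0.05, sum_loss_range=0.1):
    """Check the two stopping criteria for the S501-S504 training loop.

    Criterion 1: the first, second, and third loss values each fall within
    a preset loss range. Criterion 2: the sum of the three loss values
    falls within another preset loss range. Both thresholds here are
    made-up placeholders, not values from the disclosure.
    """
    each_ok = all(l <= per_loss_range
                  for l in (loss_color, loss_line, loss_edge))
    sum_ok = (loss_color + loss_line + loss_edge) <= sum_loss_range
    return each_ok or sum_ok

print(training_finished(0.04, 0.03, 0.02))  # each loss within range
print(training_finished(0.5, 0.4, 0.3))     # neither criterion met: keep iterating
```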
In one example, the neural network may be a convolutional neural network, which may include convolutional layers, residual network units, upsampling layers, and a normalization layer. The order of the convolutional layers and residual network units can be flexibly set as required, and the number of each kind of layer can also be flexibly set as required.

For example, the convolutional neural network may include 6-10 connected convolutional layers, 7-12 connected residual network units, and 1-4 upsampling layers. Using a convolutional neural network with this specific structure for lane line attribute detection can meet the requirements of lane line attribute detection in multiple or complex scenes, making the detection results more robust.

In one example, the convolutional neural network may include 8 connected convolutional layers, 9 connected residual network units, and 2 connected upsampling layers.

FIG. 6 is a schematic structural diagram of the convolutional neural network provided by an embodiment of the present disclosure. As shown in FIG. 6, after the road image input 610, the image first passes through 8 consecutive convolutional layers 620 of the convolutional neural network, followed by 9 consecutive residual network units 630, followed by 2 consecutive upsampling layers 640, followed by a normalization layer 650, which finally outputs the probability maps.

Exemplarily, each residual network unit 630 may include 256 filters, and each layer may include 128 filters of size 3*3 and 128 filters of size 1*1.
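The layer sequence of FIG. 6 can be written down as a plain configuration sketch. Only the structure stated in the text is encoded; the kernel sizes and strides of the 8 plain convolutional layers are not given in the disclosure, so they are omitted here rather than invented:

```python
def build_config():
    """Layer sequence of FIG. 6 as a configuration list.

    8 convolutional layers (620), 9 residual network units (630, each layer
    with 128 filters of size 3x3 and 128 of size 1x1 per the text),
    2 upsampling layers (640), and a final normalization layer (650).
    """
    layers = [{"type": "conv"} for _ in range(8)]
    layers += [{"type": "residual_unit",
                "filters_3x3": 128, "filters_1x1": 128}
               for _ in range(9)]
    layers += [{"type": "upsample"} for _ in range(2)]
    layers.append({"type": "normalization"})
    return layers

config = build_config()
print(len(config))  # 8 + 9 + 2 + 1 = 20 layers/units in total
```

A concrete implementation in a deep learning framework would fill in the unspecified hyperparameters (kernel sizes, strides, channel widths of the plain convolutions) as design choices.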
After the training of the neural network is completed through the above process, when the neural network is used to output the aforementioned probability maps, the output may proceed as follows.

FIG. 7 is a schematic flowchart of road image processing performed by the neural network for lane line attribute detection provided by an embodiment of the present disclosure. As shown in FIG. 7, the process of obtaining the probability maps through the neural network is:

S701. Extract low-level feature information of M channels of the road image through at least one convolutional layer of the neural network.

M is the number of probability maps obtained in step S202. In one example, if the probability maps include color attribute probability maps and line type attribute probability maps, M is the sum of N1 and N2. In another example, if the probability maps include color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, M is the sum of N1, N2, and N3.

The convolutional layers can reduce the resolution of the road image while retaining its low-level features. Exemplarily, the low-level feature information of the road image may include edge information, straight line information, curve information, and the like in the image.

Taking the probability maps including color attribute probability maps, line type attribute probability maps, and edge attribute probability maps as an example, each of the M channels of the road image corresponds to one color attribute, one line type attribute, or one edge attribute.

S702. Extract high-level feature information of the M channels of the road image based on the low-level feature information of the M channels through at least one residual network unit of the neural network.

The high-level feature information of the M channels of the road image extracted by the residual network units includes semantic features, contours, overall structure, and the like.

S703. Perform upsampling processing on the high-level feature information of the M channels through at least one upsampling layer of the neural network to obtain M probability maps of the same size as the road image.

Through the upsampling processing of the upsampling layers, the image can be restored to the original size of the image input into the neural network. In this step, after the high-level feature information of the M channels is upsampled, M probability maps of the same size as the road image input into the neural network can be obtained.

It should be noted that the low-level feature information and high-level feature information described in the embodiments of the present disclosure are relative concepts within a particular neural network. For example, in a deep neural network, the features extracted by shallower network layers are low-level feature information relative to the features extracted by deeper network layers, which are high-level feature information.

In one example, the neural network may further include a normalization layer after the upsampling layers, and the M probability maps are output through the normalization layer.
Exemplarily, the feature map of the road image is obtained after the upsampling processing, and the values of the pixels in the feature map are normalized so that each pixel value in the feature map lies in the range 0 to 1, thereby obtaining the M probability maps.

Exemplarily, one normalization method is: first determine the maximum pixel value in the feature map, then divide the value of each pixel by this maximum, so that each pixel value in the feature map lies in the range 0 to 1.
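The divide-by-maximum normalization just described can be sketched in a few lines. The 2x2 feature map is a made-up example, and the sketch assumes nonnegative activations with a positive maximum, which is what the described method implicitly requires for the result to land in [0, 1]:

```python
def normalize(feature_map):
    """Divide every value by the feature map's maximum (assumed > 0),
    so each value ends up in the range [0, 1]."""
    peak = max(v for row in feature_map for v in row)
    return [[v / peak for v in row] for row in feature_map]

print(normalize([[2.0, 4.0], [1.0, 8.0]]))  # [[0.25, 0.5], [0.125, 1.0]]
```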
It should be noted that the embodiments of the present disclosure place no restriction on the execution order of steps S701 and S702 above; S701 may be executed before S702, or S702 before S701.
As an optional implementation, before the road image is input into the neural network in step S202 above, de-distortion processing may first be performed on the road image to further improve the accuracy of the neural network's output.
FIG. 8 is a module structure diagram of the lane line attribute detection apparatus provided by an embodiment of the present disclosure. As shown in FIG. 8, the apparatus includes: a first acquisition module 801, a first determining module 802, and a second determining module 803.

The first acquisition module 801 is configured to acquire a road image captured by an image acquisition apparatus installed on a smart device.

The first determining module 802 is configured to determine probability maps according to the road image, the probability maps including at least two of: color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, where there are N1 color attribute probability maps, N2 line type attribute probability maps, and N3 edge attribute probability maps, N1, N2, and N3 all being integers greater than 0; each color attribute probability map represents the probability that each point in the road image belongs to the corresponding color, each line type attribute probability map represents the probability that each point in the road image belongs to the corresponding line type, and each edge attribute probability map represents the probability that each point in the road image belongs to the corresponding edge type.

The second determining module 803 is configured to determine the lane line attributes in the road image according to the probability maps.

In another embodiment, the colors corresponding to the N1 color attribute probability maps include at least one of: white, yellow, and blue.

In another embodiment, the line types corresponding to the N2 line type attribute probability maps include at least one of: dashed line, solid line, double dashed line, double solid line, dashed-solid line, solid-dashed line, triple dashed line, and dashed-solid-dashed line.

In another embodiment, the edges corresponding to the N3 edge attribute probability maps include at least one of: curb-type edge, fence-type edge, wall or flower-bed-type edge, virtual edge, and non-edge.

In another embodiment, the probability maps include first attribute probability maps and second attribute probability maps, the first and second attribute probability maps being two of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, with the first attribute probability maps differing from the second attribute probability maps.

The second determining module 803 is specifically configured to: for each point at the position of a lane line in the road image, determine the respective probability values of the point at the corresponding positions in the L first attribute probability maps; for the point, take the value of the first attribute corresponding to the first attribute probability map with the largest probability value as the value of the first attribute of the point; determine the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image; for each point at the position of the lane line in the road image, determine the respective probability values of the point at the corresponding positions in the S second attribute probability maps; for the point, take the value of the second attribute corresponding to the second attribute probability map with the largest probability value as the value of the second attribute of the point; determine the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image; combine the value of the first attribute of the lane line with the value of the second attribute of the lane line; and take the value of the combined attribute as the value of the attribute of the lane line; where, when the first attribute probability maps are color attribute probability maps, L equals N1 and the first attribute is the color attribute; when the first attribute probability maps are line type attribute probability maps, L equals N2 and the first attribute is the line type attribute; when the first attribute probability maps are edge attribute probability maps, L equals N3 and the first attribute is the edge attribute; when the second attribute probability maps are color attribute probability maps, S equals N1 and the second attribute is the color attribute; when the second attribute probability maps are line type attribute probability maps, S equals N2 and the second attribute is the line type attribute; when the second attribute probability maps are edge attribute probability maps, S equals N3 and the second attribute is the edge attribute.

In another embodiment, the second determining module 803 determining the value of the first attribute of the lane line according to the first attribute of the points at the position of the lane line in the road image includes: in response to the values of the first attribute of the points at the position of the lane line differing, taking the first attribute value shared by the largest number of points at the position of the lane line as the value of the first attribute of the lane line.

In another embodiment, the second determining module 803 determining the value of the first attribute of the lane line according to the first attribute of the points at the position of the lane line in the road image includes: in response to the values of the first attribute of the points at the position of the lane line being the same, taking the value of the first attribute of the points at the position of the lane line as the value of the first attribute of the lane line.

In another embodiment, the second determining module 803 determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image includes: in response to the values of the second attribute of the points at the position of the lane line differing, taking the second attribute value shared by the largest number of points at the position of the lane line as the value of the second attribute of the lane line.

In another embodiment, the second determining module 803 determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image includes: in response to the values of the second attribute of the points at the position of the lane line being the same, taking the value of the second attribute of the points at the position of the lane line as the value of the second attribute of the lane line.

In another embodiment, the probability maps further include third attribute probability maps, the third attribute probability maps being one of the color attribute probability maps, line type attribute probability maps, and edge attribute probability maps, with the third, second, and first attribute probability maps being pairwise different in attribute.

The second determining module 803 is further configured to: before combining the value of the first attribute of the lane line with the value of the second attribute of the lane line, for each point at the position of the lane line in the road image, determine the respective probability values of the point at the corresponding positions in the U third attribute probability maps; for the point, take the value of the third attribute corresponding to the third attribute probability map with the largest probability value as the value of the third attribute of the point; and determine the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image; where, when the third attribute probability maps are color attribute probability maps, U equals N1 and the third attribute is the color attribute; when the third attribute probability maps are line type attribute probability maps, U equals N2 and the third attribute is the line type attribute; when the third attribute probability maps are edge attribute probability maps, U equals N3 and the third attribute is the edge attribute. The second determining module 803 combining the value of the first attribute of the lane line with the value of the second attribute of the lane line includes: combining the value of the first attribute of the lane line, the value of the second attribute of the lane line, and the value of the third attribute of the lane line.

In another embodiment, the second determining module 803 determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image includes: in response to the values of the third attribute of the points at the position of the lane line differing, taking the third attribute value shared by the largest number of points at the position of the lane line as the value of the third attribute of the lane line.

In another embodiment, the second determining module 803 determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image includes: in response to the values of the third attribute of the points at the position of the lane line being the same, taking the value of the third attribute of the points at the position of the lane line as the value of the third attribute of the lane line.

In another embodiment, the first determining module 802 is specifically configured to: input the road image into a neural network, which outputs the probability maps; the neural network is obtained by supervised training with a road training image set annotated with color type, line type, and edge type label information.

FIG. 9 is a module structure diagram of the lane line attribute detection apparatus provided by an embodiment of the present disclosure. As shown in FIG. 9, the apparatus further includes: a preprocessing module 804 configured to perform de-distortion processing on the road image.
It should be noted that the division of the modules of the above apparatus is only a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented in the form of software called by a processing element, or all in the form of hardware; or some modules may be implemented in the form of software called by a processing element and others in the form of hardware. For example, the determining module may be a separately established processing element, or may be integrated into a chip of the above apparatus; it may also be stored in the memory of the above apparatus in the form of program code, and a processing element of the above apparatus calls and executes the functions of the determining module. The implementation of the other modules is similar. In addition, all or some of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In the implementation process, the steps of the above method or the above modules may be completed by hardware integrated logic circuits in the processor element or by instructions in the form of software.

For example, the above modules may be one or more integrated circuits configured to implement the above method, such as one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs). As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 10, the electronic device 1000 may include a processor 1001, a memory 1002, a communication interface 1003, and a system bus 1004. The memory 1002 and the communication interface 1003 are connected to the processor 1001 through the system bus 1004 and communicate with one another through it. The memory 1002 is configured to store computer-executable instructions, the communication interface 1003 is configured to communicate with other devices, and the processor 1001, when executing the computer program, implements the lane line attribute detection method provided by the embodiments of the present disclosure.
The system bus mentioned in FIG. 10 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is drawn in the figure, but this does not mean there is only one bus or one type of bus. The communication interface implements communication between the database access apparatus and other devices (e.g., clients, read-write databases, and read-only databases). The memory may include random access memory (RAM), and may also include non-volatile memory, such as at least one disk storage.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
FIG. 11 is a schematic structural diagram of a smart device provided by an embodiment of the present disclosure. As shown in FIG. 11, the smart device 1100 of this embodiment includes an image acquisition apparatus 1101, a processor 1102, and a memory 1103.
Specifically, as shown in FIG. 11, in actual use the image acquisition apparatus 1101 captures a road image and sends it to the processor 1102. The processor 1102 invokes the memory 1103 and executes the program instructions in the memory 1103 to detect the lane line attributes in the acquired road image, and outputs prompt information or performs driving control of the smart device according to the detected lane line attributes.
The smart device in this embodiment is a smart device capable of traveling on a road, such as a smart driving vehicle, a robot, or a guide device for the blind, where the smart driving vehicle may be an autonomous vehicle or a vehicle with a driver-assistance system.
The prompt information may include a lane departure warning, a lane keeping prompt, changing the driving speed, changing the driving direction, lane keeping, changing the state of the vehicle lights, and the like.
The driving control may include braking, changing the driving speed, changing the driving direction, lane keeping, changing the state of the vehicle lights, switching the driving mode, and so on, where the driving-mode switch may be a switch between assisted driving and autonomous driving, for example switching from assisted driving to autonomous driving.
FIG. 12 is a schematic flowchart of a smart driving method provided by an embodiment of the present disclosure. On the basis of the above embodiments, an embodiment of the present disclosure further provides a smart driving method for the smart device described with reference to FIG. 11. As shown in FIG. 12, the method includes:
S1201: Acquire a road image.
S1202: Detect the lane line attributes in the acquired road image using the lane line attribute detection method described in the above method embodiments.
S1203: Output prompt information or perform driving control of the smart device according to the detected lane line attributes.
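Steps S1201–S1203 amount to one perception–decision cycle, which can be sketched as follows (a minimal illustration; the attribute dictionary keys and the "lane_keep" command are hypothetical names for exposition, not values defined by this disclosure):

```python
def drive_step(image, detect_attributes, control):
    """One cycle of the smart driving method: the road image has been
    acquired (S1201); detect the lane line attributes (S1202); then
    output a prompt or issue a driving-control command (S1203)."""
    attrs = detect_attributes(image)   # e.g. {"color": "white", "line": "solid"}
    if attrs.get("line") == "solid":
        # A solid line may not be crossed: keep the lane and warn.
        control("lane_keep")
        return "lane departure warning"
    return None

# Stub detector and controller, standing in for the detection method
# and the vehicle's actuators.
issued = []
prompt = drive_step(
    image=None,
    detect_attributes=lambda img: {"color": "white", "line": "solid"},
    control=issued.append,
)
print(prompt, issued)  # → lane departure warning ['lane_keep']
```

In a real system the detector would be the probability-map method of the above embodiments and the controller would drive braking, steering, speed, lights, or a driving-mode switch.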
The execution subject of this embodiment is a smart device capable of moving, such as a smart driving vehicle, a robot, or a guide device for the blind, where the smart driving vehicle may be an autonomous vehicle or a vehicle with a driver-assistance system.
Smart driving in this embodiment includes assisted driving, autonomous driving, and/or switching the driving mode between assisted driving and autonomous driving.
The lane line attribute detection result for the road image is obtained by the lane line attribute detection method of the above embodiments; for the specific process, refer to the description of those embodiments, which is not repeated here.
Specifically, the smart device executes the above lane line attribute detection method to obtain the lane line attribute detection result for the road image, and outputs prompt information and/or performs movement control according to that result.
The prompt information may include a lane departure warning, a lane keeping prompt, changing the driving speed, changing the driving direction, lane keeping, changing the state of the vehicle lights, and the like.
The driving control may include braking, changing the driving speed, changing the driving direction, lane keeping, and the like.
With the driving control method provided by this embodiment, the smart device obtains the lane line attribute detection result for the road image and, according to that result, outputs prompt information or performs driving control of the smart device, thereby improving the safety and reliability of the smart device.
Optionally, an embodiment of the present disclosure further provides a non-volatile storage medium storing instructions which, when run on a computer, cause the computer to execute the lane line attribute detection method provided by the embodiments of the present disclosure.
Optionally, an embodiment of the present disclosure further provides a chip for running instructions, the chip being configured to execute the lane line attribute detection method provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a program product including a computer program stored in a storage medium; at least one processor can read the computer program from the storage medium, and when the at least one processor executes the computer program, the lane line attribute detection method provided by the embodiments of the present disclosure can be implemented.
In the embodiments of the present disclosure, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it; in a formula, the character "/" indicates a "division" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of a single item or multiple items. For example, "at least one of a, b, or c" may mean: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c may be single or multiple.
It can be understood that the various numerical designations involved in the embodiments of the present disclosure are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of the present disclosure.
It can be understood that, in the embodiments of the present application, the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some or all of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (32)

  1. A lane line attribute detection method, comprising:
    acquiring a road image captured by an image acquisition apparatus installed on a smart device;
    determining probability maps according to the road image, the probability maps comprising at least two of: color-attribute probability maps, line-type-attribute probability maps, and edge-attribute probability maps, wherein
    there are N1 color-attribute probability maps, N2 line-type-attribute probability maps, and N3 edge-attribute probability maps, N1, N2, and N3 each being an integer greater than 0,
    each color-attribute probability map represents the probability that each point in the road image has the color corresponding to that color-attribute probability map,
    each line-type-attribute probability map represents the probability that each point in the road image has the line type corresponding to that line-type-attribute probability map,
    and each edge-attribute probability map represents the probability that each point in the road image has the edge type corresponding to that edge-attribute probability map; and
    determining lane line attributes in the road image according to the probability maps.
  2. The method according to claim 1, wherein the colors corresponding to the N1 color-attribute probability maps comprise at least one of: white, yellow, and blue.
  3. The method according to claim 1 or 2, wherein the line types corresponding to the N2 line-type-attribute probability maps comprise at least one of: dashed line, solid line, double dashed line, double solid line, dashed-solid line, solid-dashed line, triple dashed line, and dashed-solid-dashed line.
  4. The method according to any one of claims 1-3, wherein the edge types corresponding to the N3 edge-attribute probability maps comprise at least one of: curb-type edge, fence-type edge, wall- or flower-bed-type edge, virtual edge, and non-edge.
  5. The method according to any one of claims 1-4, wherein the probability maps comprise first-attribute probability maps and second-attribute probability maps, the first-attribute probability maps and the second-attribute probability maps being two of the color-attribute probability maps, the line-type-attribute probability maps, and the edge-attribute probability maps, and the first-attribute probability maps being different from the second-attribute probability maps;
    determining the lane line attributes in the road image according to the probability maps comprises:
    for each point at the position of a lane line in the road image, determining the point's probability values at the corresponding positions in the L first-attribute probability maps;
    for the point, taking the value of the first attribute corresponding to the first-attribute probability map with the largest probability value as the value of the first attribute of the point;
    determining the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image;
    for each point at the position of the lane line in the road image, determining the point's probability values at the corresponding positions in the S second-attribute probability maps;
    for the point, taking the value of the second attribute corresponding to the second-attribute probability map with the largest probability value as the value of the second attribute of the point;
    determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image;
    combining the value of the first attribute of the lane line and the value of the second attribute of the lane line; and
    taking the combined attribute values as the attribute values of the lane line;
    wherein, when the first-attribute probability maps are the color-attribute probability maps, L equals N1 and the first attribute is the color attribute; when the first-attribute probability maps are the line-type-attribute probability maps, L equals N2 and the first attribute is the line-type attribute; when the first-attribute probability maps are the edge-attribute probability maps, L equals N3 and the first attribute is the edge attribute; when the second-attribute probability maps are the color-attribute probability maps, S equals N1 and the second attribute is the color attribute; when the second-attribute probability maps are the line-type-attribute probability maps, S equals N2 and the second attribute is the line-type attribute; and when the second-attribute probability maps are the edge-attribute probability maps, S equals N3 and the second attribute is the edge attribute.
  6. The method according to claim 5, wherein determining the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the first attribute differing among the points at the position of the lane line, taking, as the value of the first attribute of the lane line, the value of the first attribute shared by the largest number of points at the position of the lane line.
  7. The method according to claim 5 or 6, wherein determining the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the first attribute being the same for the points at the position of the lane line, taking the value of the first attribute of the points at the position of the lane line as the value of the first attribute of the lane line.
  8. The method according to any one of claims 5-7, wherein determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the second attribute differing among the points at the position of the lane line, taking, as the value of the second attribute of the lane line, the value of the second attribute shared by the largest number of points at the position of the lane line.
  9. The method according to any one of claims 5-8, wherein determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the second attribute being the same for the points at the position of the lane line, taking the value of the second attribute of the points at the position of the lane line as the value of the second attribute of the lane line.
  10. The method according to any one of claims 5-9, wherein the probability maps further comprise third-attribute probability maps, the third-attribute probability maps being one of the color-attribute probability maps, the line-type-attribute probability maps, and the edge-attribute probability maps, and the third-attribute probability maps, the second-attribute probability maps, and the first-attribute probability maps being probability maps of pairwise-different attributes;
    before combining the value of the first attribute of the lane line and the value of the second attribute of the lane line, the method further comprises:
    for each point at the position of the lane line in the road image, determining the point's probability values at the corresponding positions in the U third-attribute probability maps;
    for the point, taking the value of the third attribute corresponding to the third-attribute probability map with the largest probability value as the value of the third attribute of the point; and
    determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image;
    wherein, when the third-attribute probability maps are the color-attribute probability maps, U equals N1 and the third attribute is the color attribute; when the third-attribute probability maps are the line-type-attribute probability maps, U equals N2 and the third attribute is the line-type attribute; and when the third-attribute probability maps are the edge-attribute probability maps, U equals N3 and the third attribute is the edge attribute;
    combining the value of the first attribute of the lane line and the value of the second attribute of the lane line comprises:
    combining the value of the first attribute of the lane line, the value of the second attribute of the lane line, and the value of the third attribute of the lane line.
  11. The method according to claim 10, wherein determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the third attribute differing among the points at the position of the lane line, taking, as the value of the third attribute of the lane line, the value of the third attribute shared by the largest number of points at the position of the lane line.
  12. The method according to claim 10 or 11, wherein determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the third attribute being the same for the points at the position of the lane line, taking the value of the third attribute of the points at the position of the lane line as the value of the third attribute of the lane line.
  13. The method according to any one of claims 1-12, wherein determining the probability maps according to the road image comprises:
    inputting the road image into a neural network, the neural network outputting the probability maps;
    wherein the neural network is obtained through supervised training on a set of road training images annotated with color type, line type, and edge type.
  14. The method according to claim 13, wherein before inputting the road image into the neural network, the method further comprises:
    performing de-distortion processing on the road image.
  15. A lane line attribute detection apparatus, comprising:
    a first acquisition module configured to acquire a road image captured by an image acquisition apparatus installed on a smart device;
    a first determination module configured to determine probability maps according to the road image, the probability maps comprising at least two of: color-attribute probability maps, line-type-attribute probability maps, and edge-attribute probability maps, wherein
    there are N1 color-attribute probability maps, N2 line-type-attribute probability maps, and N3 edge-attribute probability maps, N1, N2, and N3 each being an integer greater than 0,
    each color-attribute probability map represents the probability that each point in the road image has the color corresponding to that color-attribute probability map,
    each line-type-attribute probability map represents the probability that each point in the road image has the line type corresponding to that line-type-attribute probability map,
    and each edge-attribute probability map represents the probability that each point in the road image has the edge type corresponding to that edge-attribute probability map; and
    a second determination module configured to determine lane line attributes in the road image according to the probability maps.
  16. The apparatus according to claim 15, wherein the colors corresponding to the N1 color-attribute probability maps comprise at least one of: white, yellow, and blue.
  17. The apparatus according to claim 15 or 16, wherein the line types corresponding to the N2 line-type-attribute probability maps comprise at least one of: dashed line, solid line, double dashed line, double solid line, dashed-solid line, solid-dashed line, triple dashed line, and dashed-solid-dashed line.
  18. The apparatus according to any one of claims 15-17, wherein the edge types corresponding to the N3 edge-attribute probability maps comprise at least one of: curb-type edge, fence-type edge, wall- or flower-bed-type edge, virtual edge, and non-edge.
  19. The apparatus according to any one of claims 15-18, wherein the probability maps comprise first-attribute probability maps and second-attribute probability maps, the first-attribute probability maps and the second-attribute probability maps being two of the color-attribute probability maps, the line-type-attribute probability maps, and the edge-attribute probability maps, and the first-attribute probability maps being different from the second-attribute probability maps;
    the second determination module is specifically configured to:
    for each point at the position of a lane line in the road image, determine the point's probability values at the corresponding positions in the L first-attribute probability maps;
    for the point, take the value of the first attribute corresponding to the first-attribute probability map with the largest probability value as the value of the first attribute of the point;
    determine the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image;
    for each point at the position of the lane line in the road image, determine the point's probability values at the corresponding positions in the S second-attribute probability maps;
    for the point, take the value of the second attribute corresponding to the second-attribute probability map with the largest probability value as the value of the second attribute of the point;
    determine the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image;
    combine the value of the first attribute of the lane line and the value of the second attribute of the lane line; and
    take the combined attribute values as the attribute values of the lane line;
    wherein, when the first-attribute probability maps are the color-attribute probability maps, L equals N1 and the first attribute is the color attribute; when the first-attribute probability maps are the line-type-attribute probability maps, L equals N2 and the first attribute is the line-type attribute; when the first-attribute probability maps are the edge-attribute probability maps, L equals N3 and the first attribute is the edge attribute; when the second-attribute probability maps are the color-attribute probability maps, S equals N1 and the second attribute is the color attribute; when the second-attribute probability maps are the line-type-attribute probability maps, S equals N2 and the second attribute is the line-type attribute; and when the second-attribute probability maps are the edge-attribute probability maps, S equals N3 and the second attribute is the edge attribute.
  20. The apparatus according to claim 19, wherein the second determination module determining the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the first attribute differing among the points at the position of the lane line, taking, as the value of the first attribute of the lane line, the value of the first attribute shared by the largest number of points at the position of the lane line.
  21. The apparatus according to claim 19 or 20, wherein the second determination module determining the value of the first attribute of the lane line according to the values of the first attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the first attribute being the same for the points at the position of the lane line, taking the value of the first attribute of the points at the position of the lane line as the value of the first attribute of the lane line.
  22. The apparatus according to any one of claims 19-21, wherein the second determination module determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the second attribute differing among the points at the position of the lane line, taking, as the value of the second attribute of the lane line, the value of the second attribute shared by the largest number of points at the position of the lane line.
  23. The apparatus according to any one of claims 19-22, wherein the second determination module determining the value of the second attribute of the lane line according to the values of the second attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the second attribute being the same for the points at the position of the lane line, taking the value of the second attribute of the points at the position of the lane line as the value of the second attribute of the lane line.
  24. The apparatus according to any one of claims 19-23, wherein the probability maps further comprise third-attribute probability maps, the third-attribute probability maps being one of the color-attribute probability maps, the line-type-attribute probability maps, and the edge-attribute probability maps, and the third-attribute probability maps, the second-attribute probability maps, and the first-attribute probability maps being probability maps of pairwise-different attributes;
    the second determination module is further configured to:
    before combining the value of the first attribute of the lane line and the value of the second attribute of the lane line, for each point at the position of the lane line in the road image, determine the point's probability values at the corresponding positions in the U third-attribute probability maps;
    for the point, take the value of the third attribute corresponding to the third-attribute probability map with the largest probability value as the value of the third attribute of the point; and
    determine the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image;
    wherein, when the third-attribute probability maps are the color-attribute probability maps, U equals N1 and the third attribute is the color attribute; when the third-attribute probability maps are the line-type-attribute probability maps, U equals N2 and the third attribute is the line-type attribute; and when the third-attribute probability maps are the edge-attribute probability maps, U equals N3 and the third attribute is the edge attribute;
    the second determination module combining the value of the first attribute of the lane line and the value of the second attribute of the lane line comprises:
    combining the value of the first attribute of the lane line, the value of the second attribute of the lane line, and the value of the third attribute of the lane line.
  25. The apparatus according to claim 24, wherein the second determination module determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the third attribute differing among the points at the position of the lane line, taking, as the value of the third attribute of the lane line, the value of the third attribute shared by the largest number of points at the position of the lane line.
  26. The apparatus according to claim 24 or 25, wherein the second determination module determining the value of the third attribute of the lane line according to the values of the third attribute of the points at the position of the lane line in the road image comprises:
    in response to the values of the third attribute being the same for the points at the position of the lane line, taking the value of the third attribute of the points at the position of the lane line as the value of the third attribute of the lane line.
  27. The apparatus according to any one of claims 15-26, wherein the first determination module is specifically configured to:
    input the road image into a neural network, the neural network outputting the probability maps;
    wherein the neural network is obtained through supervised training on a set of road training images annotated with color type, line type, and edge type.
  28. The apparatus according to claim 27, further comprising:
    a preprocessing module configured to perform de-distortion processing on the road image.
  29. An electronic device, comprising:
    a memory configured to store program instructions; and
    a processor configured to invoke and execute the program instructions in the memory to perform the method steps of any one of claims 1-14.
  30. A smart driving method for a smart device, comprising:
    acquiring a road image;
    detecting lane line attributes in the acquired road image using the lane line attribute detection method according to any one of claims 1-14; and
    outputting prompt information or performing driving control of the smart device according to the detected lane line attributes.
  31. A smart device, comprising:
    an image acquisition apparatus configured to acquire a road image;
    a memory configured to store program instructions which, when executed, implement the lane line attribute detection method according to any one of claims 1-14; and
    a processor configured to execute the program instructions stored in the memory on the road image acquired by the image acquisition apparatus so as to detect the lane line attributes in the road image, and to output prompt information or perform driving control of the smart device according to the detected lane line attributes.
  32. A non-volatile readable storage medium storing a computer program for performing the method steps of any one of claims 1-14.
PCT/CN2020/076036 2019-06-25 2020-02-20 Lane line attribute detection WO2020258894A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021500086A JP7119197B2 (ja) 2019-06-25 2020-02-20 Lane attribute detection
SG11202013052UA SG11202013052UA (en) 2019-06-25 2020-02-20 Lane line attribute detection
KR1020217000803A KR20210018493A (ko) 2019-06-25 2020-02-20 Lane attribute detection
US17/137,030 US20210117700A1 (en) 2019-06-25 2020-12-29 Lane line attribute detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910556260.X 2019-06-25
CN201910556260.XA CN112131914B (zh) 2019-06-25 2019-06-25 Lane line attribute detection method and apparatus, electronic device, and smart device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/137,030 Continuation US20210117700A1 (en) 2019-06-25 2020-12-29 Lane line attribute detection

Publications (1)

Publication Number Publication Date
WO2020258894A1 true WO2020258894A1 (zh) 2020-12-30

Family

ID=73849445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076036 WO2020258894A1 (zh) 2019-06-25 2020-02-20 Lane line attribute detection

Country Status (6)

Country Link
US (1) US20210117700A1 (zh)
JP (1) JP7119197B2 (zh)
KR (1) KR20210018493A (zh)
CN (1) CN112131914B (zh)
SG (1) SG11202013052UA (zh)
WO (1) WO2020258894A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3955218A3 (en) * 2021-01-25 2022-04-20 Beijing Baidu Netcom Science Technology Co., Ltd. Lane line detection method and apparatus, electronic device, computer storage medium, and computer program product

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396044B * 2021-01-21 2021-04-27 国汽智控(北京)科技有限公司 Lane line attribute information detection model training and lane line attribute information detection method
US11776282B2 (en) * 2021-03-26 2023-10-03 Here Global B.V. Method, apparatus, and system for removing outliers from road lane marking data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140185879A1 (en) * 2011-09-09 2014-07-03 Industry-Academic Cooperation Foundation, Yonsei University Apparatus and method for detecting traffic lane in real time
CN105260699A * 2015-09-10 2016-01-20 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for processing lane line data
KR20170007596A * 2015-07-09 2017-01-19 Hyundai Motor Company Improved lane recognition method
CN108052904A * 2017-12-13 2018-05-18 Liaoning University of Technology Lane line acquisition method and apparatus
CN109657632A * 2018-12-25 2019-04-19 Chongqing University of Posts and Telecommunications Lane line detection and recognition method
CN109670376A * 2017-10-13 2019-04-23 神州优车股份有限公司 Lane line recognition method and system

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11211659A (ja) * 1998-01-23 1999-08-06 Nagoya Denki Kogyo Kk Road surface condition determination method and apparatus
JP2001263479A (ja) 2000-03-17 2001-09-26 Equos Research Co Ltd Vehicle control device, vehicle control method, and recording medium recording the program
JP5083658B2 (ja) 2008-03-26 2012-11-28 Honda Motor Co., Ltd. Vehicle lane recognition device, vehicle, and vehicle lane recognition program
JP2010060371A (ja) * 2008-09-02 2010-03-18 Omron Corp Object detection device
BR112012002884A2 (pt) * 2009-08-12 2017-12-19 Koninl Philips Electronics Nv Medical imaging system for generating data characterizing a region of interest of an object, method for generating such data, computer program element for controlling an apparatus, and computer-readable medium
CN102862574B (zh) 2012-09-21 2015-08-19 上海永畅信息科技有限公司 Smartphone-based vehicle active safety method
JP5983238B2 (ja) * 2012-09-25 2016-08-31 Nissan Motor Co., Ltd. Lane boundary detection device and lane boundary detection method
CN104376297B (zh) * 2013-08-12 2017-06-23 Ricoh Co., Ltd. Method and apparatus for detecting linear indication signs on roads
CN108216229B (zh) * 2017-09-08 2020-01-10 Beijing SenseTime Technology Development Co., Ltd. Vehicle, road line detection and driving control method and apparatus
US10628671B2 (en) * 2017-11-01 2020-04-21 Here Global B.V. Road modeling from overhead imagery
CN107945168B (zh) * 2017-11-30 2021-12-10 Shanghai United Imaging Healthcare Co., Ltd. Medical image processing method and medical image processing system
CN108009524B (zh) * 2017-12-25 2021-07-09 Northwestern Polytechnical University Fully convolutional network-based lane line detection method
CN108875603B (zh) * 2018-05-31 2021-06-04 Shanghai SenseTime Intelligent Technology Co., Ltd. Lane line-based intelligent driving control method and apparatus, and electronic device
CN109147368A (zh) * 2018-08-22 2019-01-04 Beijing SenseTime Technology Development Co., Ltd. Lane line-based intelligent driving control method, apparatus, and electronic device
CN109635816B (zh) * 2018-10-31 2021-04-06 Baidu Online Network Technology (Beijing) Co., Ltd. Lane line generation method, apparatus, device, and storage medium
CN109740469B (zh) * 2018-12-24 2021-01-22 Baidu Online Network Technology (Beijing) Co., Ltd. Lane line detection method, apparatus, computer device, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3955218A3 (en) * 2021-01-25 2022-04-20 Beijing Baidu Netcom Science Technology Co., Ltd. Lane line detection method and apparatus, electronic device, computer storage medium, and computer program product
US11741726B2 (en) 2021-01-25 2023-08-29 Beijing Baidu Netcom Science Technology Co., Ltd. Lane line detection method, electronic device, and computer storage medium

Also Published As

Publication number Publication date
US20210117700A1 (en) 2021-04-22
SG11202013052UA (en) 2021-01-28
CN112131914B (zh) 2022-10-21
JP7119197B2 (ja) 2022-08-16
KR20210018493A (ko) 2021-02-17
JP2021532449A (ja) 2021-11-25
CN112131914A (zh) 2020-12-25


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021500086

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217000803

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20830874

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20830874

Country of ref document: EP

Kind code of ref document: A1