WO2018035815A1 - Method and device for detecting paired lane lines - Google Patents

Method and device for detecting paired lane lines

Info

Publication number
WO2018035815A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
sample
artificial neural
lines
distance
Prior art date
Application number
PCT/CN2016/096761
Other languages
English (en)
French (fr)
Inventor
黄凯明
韩永刚
Original Assignee
深圳市锐明技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市锐明技术股份有限公司
Priority to PCT/CN2016/096761 priority Critical patent/WO2018035815A1/zh
Priority to CN201680000880.XA priority patent/CN106415602B/zh
Publication of WO2018035815A1 publication Critical patent/WO2018035815A1/zh

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology

Definitions

  • The present invention belongs to the field of automatic driving, and in particular relates to a method and apparatus for detecting paired lane lines.
  • The lane departure warning system is a driving-assistance system that helps the driver reduce traffic accidents caused by lane departure by issuing alarms.
  • When the vehicle deviates from its lane, the lane departure warning system may issue an early warning, which may include an alarm sound, steering wheel vibration, or an automatic steering correction.
  • An object of the present invention is to provide a method for detecting paired lane lines, solving the prior art problem that accuracy and real-time performance cannot both be effectively guaranteed in paired lane line detection.
  • An embodiment of the present invention provides a method for detecting paired lane lines, the method including: [0006] acquiring two straight lines to be detected, selecting sample points on the two straight lines at a preset spacing, and obtaining a distance vector between the sample points and a predetermined common point;
  • The step of selecting sample points on the two straight lines at a preset spacing and obtaining the distance vector between the sample points and a predetermined common point includes: [0010] selecting sample points on the two straight lines at a preset spacing; [0011] taking the center point of the image as the common point, and obtaining the distance vector between the sample points and the common point.
  • Before the step of substituting the distance vector into a preset artificial neural network to calculate an excitation value, the method further includes:
  • The sample points comprise N sample points selected on each lane line, N being a natural number greater than or equal to 2.
  • The step of determining, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines includes:
  • An embodiment of the present invention provides a device for detecting paired lane lines, the device including: [0021] a lane line acquiring unit configured to acquire two straight lines to be detected, select sample points on the two straight lines at a preset spacing, and obtain a distance vector between the sample points and a predetermined common point; [0022] a calculating unit configured to substitute the distance vector into a preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data;
  • a determining unit configured to determine, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines.
  • the lane line acquiring unit includes:
  • a sample selection subunit configured to respectively select a sample point on the two straight lines according to a preset spacing
  • a common point acquisition subunit configured to use a center point of the image as a common point, and obtain a distance vector between the sample point and the common point.
  • The device further includes: [0028] a sample collection unit configured to collect a large number of paired lane line samples and unpaired lane line samples, and select sample points on the lane line samples according to the spacing;
  • a distance calculation unit configured to calculate a distance between the sample point and the common point
  • a weight vector calculation unit configured to substitute the distances into the neural cell layer of the artificial neural network and compute the corresponding weight vector of the neural cell layer according to whether the samples are paired.
  • In conjunction with the second aspect, in a third possible implementation of the second aspect, the sample points include N sample points selected on each lane line, N being a natural number greater than or equal to 2.
  • The determining unit includes: [0033] a comparing subunit configured to acquire the excitation value output by the artificial neural network and compare the excitation value with a preset threshold;
  • a paired lane line determining subunit configured to determine that the two straight lines are paired lane lines if the excitation value is greater than the threshold, and that the two straight lines are not paired lane lines if the excitation value is less than the threshold.
  • Two straight lines to be detected are acquired; sample points are selected on the two straight lines at a preset spacing; the distance vector between the sample points and the common point is obtained; the distance vector is substituted into the pre-trained artificial neural network to obtain the excitation value it outputs; and whether the two straight lines are paired lane lines is determined according to the excitation value.
  • FIG. 1 is a flowchart of an implementation of a method for detecting paired lane lines according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a distance vector acquisition example according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of another distance vector acquisition example according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an artificial neural network according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of implementing artificial neural network training according to an embodiment of the present invention.
  • FIG. 5a, FIG. 5b, and FIG. 5c are schematic diagrams of training samples according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a device for detecting a pair of lane lines according to an embodiment of the present invention.
  • The method for detecting paired lane lines in the embodiments of the present invention aims to overcome a defect of prior art detection methods: to raise detection accuracy they typically use relatively complex algorithms, so the detection computation consumes a certain amount of time; when the vehicle runs at high speed, the detection result lags and real-time performance is low.
  • If a simple lane line judgment method is adopted instead, detection errors occur easily and affect the user's judgment.
  • FIG. 1 shows an implementation flow of a method for detecting a pair of lane lines according to an embodiment of the present invention, which is described in detail below.
  • In step S101, two straight lines to be detected are acquired; sample points are selected on the two straight lines at a preset spacing; and the distance vector between the sample points and a predetermined common point is obtained.
  • Paired lane lines in the embodiment of the present invention refer to auxiliary lines that delimit the lane in which the vehicle travels. Marking lines other than lane lines may appear while the vehicle is running; for example, FIG. 3 contains an arrow mark in addition to the lane lines, and the pair formed by the arrow line and a lane line should not be identified as paired lane lines.
  • The two straight lines to be detected can be obtained by recognizing an image.
  • For example, the straight lines can be identified by their color in the image, such as identifying white or yellow straight lines.
  • The preset spacing can be set according to the size of the image; for example, one third of the image width may be set as the spacing.
  • The spacing can also be chosen according to the number of sample points required, and its length can be set so that the selected sample points include the end positions of the straight lines.
  • The common point can be selected flexibly according to the user's needs.
  • For example, the midpoint of the upper part of the image, the midpoint of the lower part of the image, or the center point of the image may be set as the common point.
  • Depending on how the common point is selected, the corresponding weight vectors of the artificial neural network also differ; the common point position used during weight vector training is the same as the common point position corresponding to the two straight lines to be detected.
  • the distance vector between the sample point and the common point can be obtained by measuring the distance between the sample point and the common point.
  • For the two straight lines in FIG. 2, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 7.7 cm, the lower-left segment 10.5 cm, the upper-right segment 8.5 cm, and the lower-right segment 12 cm, so the distance vector is <7.7, 10.5, 8.5, 12>.
  • For the two straight lines shown in FIG. 3, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 2.7 cm, the lower-left segment 8.2 cm, the upper-right segment 6.2 cm, and the lower-right segment 4.3 cm, so the distance vector is <2.7, 8.2, 6.2, 4.3>.
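The distance-vector construction described above can be sketched in Python. The function name `distance_vector` and the sample coordinates are illustrative assumptions, not part of the patent:

```python
import math

def distance_vector(sample_points, common_point):
    """Euclidean distance from each sample point to the common point (e.g. image center)."""
    cx, cy = common_point
    return [math.hypot(x - cx, y - cy) for x, y in sample_points]

# Hypothetical pixel coordinates: two sample points per line, four in total
points = [(3, 4), (6, 8), (5, 12), (9, 12)]
vec = distance_vector(points, (0, 0))  # one distance per sample point
```

The resulting list plays the role of the distance vectors such as <7.7, 10.5, 8.5, 12> in the text.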
  • In step S102, the distance vector is substituted into the preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data.
  • Specifically, the weight vector of the artificial neural network in the embodiment of the present invention can be trained from a plurality of preset samples; the training method may include the following steps shown in FIG. 5:
  • In step S501, a large number of paired lane line samples and unpaired lane line samples are collected, and sample points are selected on the lane line samples according to the spacing;
  • In step S502, the distance between the sample points and the common point is calculated;
  • In step S503, the distances are substituted into the neural cell layer of the artificial neural network, and the corresponding weight vector of the neural cell layer of the artificial neural network is computed according to whether the samples are paired.
  • the artificial neural network may include a neural cell layer and an output layer.
  • a hidden layer may also be included between the nerve cell layer and the output layer.
  • FIG. 4 is a schematic structural diagram of an artificial neural network according to an embodiment of the present invention. As shown in FIG. 4, the artificial neural network includes input layer nodes X1, X2, X3, and X4 and neural cells Y1, Y2, Z1, and Z2, where Y1 and Y2 constitute a hidden layer and Z1 and Z2 constitute the output layer.
  • The number of inputs of the input layer is set according to the number of components of the input vector. For example, in FIG. 5a, FIG. 5b, and FIG. 5c there are four sample points, so the corresponding input layer has four inputs.
  • Suppose FIG. 5a, FIG. 5b, and FIG. 5c are three of a large number of paired lane line samples; their distance vectors are, in order:
  • For the two straight lines shown in FIG. 5a, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 5 cm, the lower-left segment 14 cm, the upper-right segment 11 cm, and the lower-right segment 9.5 cm, so the distance vector is <5, 14, 11, 9.5>.
  • For the two straight lines shown in FIG. 5b, the four distances are 5 cm (upper-left), 6.5 cm (lower-left), 10 cm (upper-right), and 16 cm (lower-right), so the distance vector is <5, 6.5, 10, 16>.
  • For the two straight lines shown in FIG. 5c, the four distances are 7 cm (upper-left), 11 cm (lower-left), 7.7 cm (upper-right), and 12 cm (lower-right), so the distance vector is <7, 11, 7.7, 12>.
  • the output results of FIG. 5a, FIG. 5b, and FIG. 5c are all "paired lane lines".
  • In the artificial neural network of FIG. 4, W11, W13, W15, and W17 are the weights corresponding to the four inputs of neural cell Y1; W12, W14, W16, and W18 are the weights corresponding to the four inputs of neural cell Y2; W21 and W23 are the weights corresponding to the two inputs of neural cell Z1; and W22 and W24 are the weights corresponding to the two inputs of neural cell Z2.
  • The excitation value of a neural cell is the sum of the products of its inputs and weights; for example, the Y1 excitation value = X1*W11 + X2*W13 + X3*W15 + X4*W17.
  • During training, an excitation function may be set for the neural cells, for example: if the excitation value of the output layer exceeds a certain threshold, output 1; otherwise output 0. In this example, the excitation function of the hidden cells Y1 and Y2 is set to: output = excitation value.
  • The present invention sets the excitation function for Z1 and Z2 to: if (excitation value >= 0), output 1; otherwise output 0.
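Under the activations just described (identity on Y1/Y2, a step at zero on Z1/Z2), a forward pass through the 4-2-2 network of FIG. 4 can be sketched as follows; the function names are illustrative:

```python
def excitation(inputs, weights):
    # Excitation value: sum of input * weight products
    return sum(i * w for i, w in zip(inputs, weights))

def forward(x, w_y1, w_y2, w_z1, w_z2):
    # Hidden cells Y1/Y2: output equals the excitation value (identity activation)
    y = [excitation(x, w_y1), excitation(x, w_y2)]
    # Output cells Z1/Z2: output 1 if excitation >= 0, else 0
    return tuple(1 if excitation(y, wz) >= 0 else 0 for wz in (w_z1, w_z2))

# The FIG. 2 distance vector with the weight vectors given later in the text
out = forward([7.7, 10.5, 8.5, 12],
              [0.8, -0.2, 0.65, -0.3], [0.7, -0.3, 0.9, -0.4],
              [1, -1], [-1, 1])  # (1, 0): paired lane lines
```

(Z1, Z2) = (1, 0) encodes "paired lane lines" and (0, 1) encodes "not paired".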
  • Each weight of the artificial neural network is initialized to a random decimal in [-1, 1]; the samples in the training set are then input into the network one by one and the weights are adjusted so that all "positive" samples produce an output of 1 at Z1 and 0 at Z2, and all "negative" samples produce an output of 0 at Z1 and 1 at Z2.
  • The three vectors of FIG. 5a, FIG. 5b, and FIG. 5c are input one by one as positive samples, the training is repeated, and the weights are adjusted; negative samples can also be input for training.
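The patent specifies the goal of training (positive samples drive the paired output to 1) but not the weight-update rule, so the loop below is only a perceptron-style sketch under that assumption; the learning rate, epoch count, and random seed are arbitrary:

```python
import random

def train_weights(samples, labels, lr=0.05, epochs=100):
    """Adjust a single weight vector so paired samples (label 1) give output 1.
    A perceptron-style sketch; the patent does not fix the update rule."""
    random.seed(0)  # reproducible initial random weights in [-1, 1]
    w = [random.uniform(-1, 1) for _ in range(len(samples[0]))]
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            out = 1 if sum(a * b for a, b in zip(x, w)) >= 0 else 0
            # Move the weights only when the sample is misclassified
            w = [wi + lr * (target - out) * xi for wi, xi in zip(w, x)]
    return w

# The three positive distance vectors of FIGS. 5a-5c
w = train_weights([[5, 14, 11, 9.5], [5, 6.5, 10, 16], [7, 11, 7.7, 12]], [1, 1, 1])
```

In practice negative (unpaired) samples would be mixed in with label 0, as the text notes.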
  • The resulting weight vector of neural cell Y1 is <0.8, -0.2, 0.65, -0.3>, the input weight vector of Y2 is <0.7, -0.3, 0.9, -0.4>, the input weight vector of Z1 is <1, -1>, and the input weight vector of Z2 is <-1, 1>.
  • In step S103, whether the two straight lines are paired lane lines is determined according to the excitation value output by the artificial neural network.
  • Substituting the distance vectors obtained from FIG. 2 and FIG. 3 into the artificial neural network yields the output layer excitation values. For FIG. 2, the input vector is <7.7, 10.5, 8.5, 12>; with the Y1 weight vector <0.8, -0.2, 0.65, -0.3> the Y1 excitation value is 5.985, and with the Y2 weight vector <0.7, -0.3, 0.9, -0.4> the Y2 excitation value is 5.09. The Z1 excitation value is <5.985, 5.09>*<1, -1> = 0.895 and the Z2 excitation value is -0.895, so Z1 outputs 1 and Z2 outputs 0, and the two straight lines are judged to be paired lane lines.
  • For the two lines of FIG. 3, the obtained vector is <2.7, 8.2, 6.2, 4.3>, and the weight vector corresponding to Y1 is <0.8, -0.2, 0.65, -0.3>, giving a Y1 excitation value of 3.26; the Y2 weight vector <0.7, -0.3, 0.9, -0.4> gives a Y2 excitation value of 3.29. The Z1 excitation value is -0.03 and the Z2 excitation value is 0.03, so Z1 outputs 0 and Z2 outputs 1, and the two straight lines are judged not to be paired lane lines.
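The arithmetic of this worked example can be checked directly; note that 5.985 - 5.09 = 0.895. A short sketch:

```python
def excitation(inputs, weights):
    # Sum of input * weight products
    return sum(i * w for i, w in zip(inputs, weights))

w_y1, w_y2 = [0.8, -0.2, 0.65, -0.3], [0.7, -0.3, 0.9, -0.4]

# FIG. 2 vector: hidden excitations 5.985 and 5.09, so Z1 receives 0.895 -> paired
y_fig2 = [excitation([7.7, 10.5, 8.5, 12], w) for w in (w_y1, w_y2)]
z1_fig2 = excitation(y_fig2, [1, -1])

# FIG. 3 vector: hidden excitations 3.26 and 3.29, so Z1 receives -0.03 -> not paired
y_fig3 = [excitation([2.7, 8.2, 6.2, 4.3], w) for w in (w_y1, w_y2)]
z1_fig3 = excitation(y_fig3, [1, -1])
```

With the step activation on Z1, a nonnegative value yields output 1 (paired) and a negative value yields 0 (not paired).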
  • The present invention acquires two straight lines to be detected, selects sample points on them at a preset spacing, obtains the distance vector between the sample points and the common point, and substitutes the distance vector into the pre-trained artificial neural network to obtain an excitation value, from which it is judged whether the two straight lines are paired lane lines.
  • FIG. 6 is a schematic structural diagram of a device for detecting a pair of lane lines according to an embodiment of the present invention, which is described in detail below:
  • the device for detecting a pair of lane lines includes:
  • the lane line acquiring unit 601 is configured to acquire two straight lines to be detected, select sample points on the two straight lines at a preset spacing, and obtain the distance vector between the sample points and a predetermined common point;
  • the calculating unit 602 is configured to substitute the distance vector into a preset artificial neural network to calculate an excitation value, where the weight vector of the artificial neural network is trained according to the pre-acquired pair of lane line sample data;
  • the determining unit 603 is configured to determine, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines.
  • the lane line acquiring unit includes:
  • a sample selection subunit configured to respectively select a sample point on the two straight lines according to a preset spacing
  • a common point acquisition subunit is configured to obtain a distance vector between the sample point and the common point by using a center point of the image as a common point.
  • the device further includes:
  • a sample collection unit configured to collect a large number of paired lane line samples and unpaired lane line samples, and select sample points on the lane line samples according to the spacing;
  • a distance calculation unit configured to calculate a distance between the sample point and the common point
  • the weight vector calculation unit is configured to substitute the distance into the neural cell layer of the artificial neural network, and calculate a corresponding weight vector of the neural cell layer of the artificial neural network according to whether the samples are paired.
  • the sample point includes N sample points selected on each lane line, and the N is a natural number greater than or equal to 2.
  • the determining unit comprises:
  • a comparing subunit configured to acquire the excitation value output by the artificial neural network and compare the excitation value with a preset threshold;
  • a paired lane line determining subunit configured to determine that the two straight lines are paired lane lines if the excitation value is greater than the threshold, and that the two straight lines are not paired lane lines if the excitation value is less than the threshold.
  • the device for detecting paired lane lines corresponds to the method for detecting paired lane lines described above, and is not described again here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division; in actual implementation there can be other divisions, for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through interfaces, devices, or units, and can be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they can be located in one place or distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium.
  • the software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

A method for detecting paired lane lines, the method including: acquiring two straight lines to be detected, selecting sample points on the two straight lines at a preset spacing, and obtaining a distance vector between the sample points and a predetermined common point; substituting the distance vector into a preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data; and determining, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines. The method effectively guarantees real-time judgment of paired lane lines while improving judgment accuracy.

Description

Title of Invention: Method and Device for Detecting Paired Lane Lines
Technical Field
[0001] The present invention belongs to the field of automatic driving, and in particular relates to a method and device for detecting paired lane lines.
Background Art
[0002] A lane departure warning system is a driving-assistance system that helps the driver reduce traffic accidents caused by lane departure by issuing alarms. When the vehicle deviates from its lane, the lane departure warning system can issue an early warning, which may include an alarm sound, steering wheel vibration, or automatic steering correction.
[0003] In a lane departure warning system, lane lines must be correctly extracted and recognized to guarantee warning accuracy. Current paired lane line detection methods generally consume considerable system resources: when high accuracy is required, the computation takes a certain amount of time and real-time detection cannot be guaranteed; conversely, improving real-time performance may cause missed detections and raise the false detection rate.
Technical Problem
[0004] The purpose of the present invention is to provide a method for detecting paired lane lines, to solve the prior art problem that accuracy and real-time performance cannot both be effectively guaranteed in paired lane line detection.
Solution to Problem
Technical Solution
[0005] In a first aspect, an embodiment of the present invention provides a method for detecting paired lane lines, the method including: [0006] acquiring two straight lines to be detected, selecting sample points on the two straight lines at a preset spacing, and obtaining a distance vector between the sample points and a predetermined common point;
[0007] substituting the distance vector into a preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data;
[0008] determining, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines.
[0009] In conjunction with the first aspect, in a first possible implementation of the first aspect, the step of selecting sample points on the two straight lines at a preset spacing and obtaining the distance vector between the sample points and a predetermined common point includes: [0010] selecting sample points on the two straight lines at a preset spacing;
[0011] taking the center point of the image as the common point, and obtaining the distance vector between the sample points and the common point.
[0012] In conjunction with the first aspect, in a second possible implementation of the first aspect, before the step of substituting the distance vector into a preset artificial neural network to calculate an excitation value, the method further includes:
[0013] collecting a large number of paired lane line samples and unpaired lane line samples, and selecting sample points on the lane line samples according to the spacing;
[0014] calculating the distance between the sample points and the common point;
[0015] substituting the distances into the neural cell layer of the artificial neural network, and computing the corresponding weight vector of the neural cell layer according to whether the samples are paired.
[0016] In conjunction with the first aspect, in a third possible implementation of the first aspect, the sample points include N sample points selected on each lane line, N being a natural number greater than or equal to 2.
[0017] In conjunction with the first aspect, in a fourth possible implementation of the first aspect, the step of determining, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines includes:
[0018] acquiring the excitation value output by the artificial neural network, and comparing the excitation value with a preset threshold;
[0019] if the excitation value is greater than the threshold, determining that the two straight lines are paired lane lines; if the excitation value is less than the threshold, determining that the two straight lines are not paired lane lines.
[0020] In a second aspect, an embodiment of the present invention provides a device for detecting paired lane lines, the device including: [0021] a lane line acquiring unit configured to acquire two straight lines to be detected, select sample points on the two straight lines at a preset spacing, and obtain a distance vector between the sample points and a predetermined common point; [0022] a calculating unit configured to substitute the distance vector into a preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data;
[0023] a determining unit configured to determine, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines.
[0024] In conjunction with the second aspect, in a first possible implementation of the second aspect, the lane line acquiring unit includes:
[0025] a sample point selection subunit configured to select sample points on the two straight lines at a preset spacing; [0026] a common point acquisition subunit configured to take the center point of the image as the common point and obtain the distance vector between the sample points and the common point.
[0027] In conjunction with the second aspect, in a second possible implementation of the second aspect, the device further includes: [0028] a sample collection unit configured to collect a large number of paired lane line samples and unpaired lane line samples, and select sample points on the lane line samples according to the spacing;
[0029] a distance calculation unit configured to calculate the distance between the sample points and the common point;
[0030] a weight vector calculation unit configured to substitute the distances into the neural cell layer of the artificial neural network and compute the corresponding weight vector of the neural cell layer according to whether the samples are paired.
[0031] In conjunction with the second aspect, in a third possible implementation of the second aspect, the sample points include N sample points selected on each lane line, N being a natural number greater than or equal to 2.
[0032] In conjunction with the second aspect, in a fourth possible implementation of the second aspect, the determining unit includes: [0033] a comparing subunit configured to acquire the excitation value output by the artificial neural network and compare the excitation value with a preset threshold;
[0034] a paired lane line determining subunit configured to determine that the two straight lines are paired lane lines if the excitation value is greater than the threshold, and that the two straight lines are not paired lane lines if the excitation value is less than the threshold.
Beneficial Effects of the Invention
Beneficial Effects
[0035] In the present invention, two straight lines to be detected are acquired; sample points are selected on the two straight lines at a preset spacing; the distance vector between the sample points and the common point is obtained; and the distance vector is substituted into a pre-trained artificial neural network to obtain the excitation value output by the network, from which it is judged whether the two straight lines are paired lane lines. With the method of the present invention, simply substituting the acquired distance data into the artificial neural network quickly determines whether two lines are paired lane lines, effectively guaranteeing real-time judgment of paired lane lines while improving judgment accuracy.
Brief Description of Drawings
Description of Drawings
[0036] FIG. 1 is a flowchart of an implementation of the method for detecting paired lane lines provided by an embodiment of the present invention; [0037] FIG. 2 is a schematic diagram of a distance vector acquisition example provided by an embodiment of the present invention;
[0038] FIG. 3 is a schematic diagram of another distance vector acquisition example provided by an embodiment of the present invention;
[0039] FIG. 4 is a schematic diagram of an artificial neural network provided by an embodiment of the present invention;
[0040] FIG. 5 is a flowchart of an implementation of artificial neural network training provided by an embodiment of the present invention;
[0041] FIG. 5a, FIG. 5b, and FIG. 5c are schematic diagrams of training samples provided by an embodiment of the present invention;
[0042] FIG. 6 is a schematic structural diagram of the device for detecting paired lane lines provided by an embodiment of the present invention.
Embodiments of the Invention
[0043] To make the purpose, technical solution, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and are not intended to limit it.
[0044] The paired lane line detection method of the embodiments of the present invention aims to overcome a defect of prior art paired lane line detection methods: to improve detection accuracy, relatively complex detection algorithms are often required, so the detection computation consumes a certain amount of time; when the vehicle runs at high speed, the detection result lags and real-time performance is low. If a simple lane line judgment method is adopted instead, detection errors occur easily and affect the user's judgment. The present invention is further described below with reference to the drawings.
[0045] FIG. 1 shows the implementation flow of the method for detecting paired lane lines provided by an embodiment of the present invention, detailed as follows.
[0046] In step S101, two straight lines to be detected are acquired; sample points are selected on the two straight lines at a preset spacing; and the distance vector between the sample points and a predetermined common point is obtained.
[0047] Specifically, paired lane lines in the embodiments of the present invention refer to auxiliary lines that delimit the lane in which a vehicle travels. While the vehicle is running, marking lines other than lane lines may appear; for example, as shown in FIG. 3, an arrow mark is present in addition to the lane lines, and the mark formed by the arrow line and a lane line should not be identified as paired lane lines.
[0048] The two straight lines to be detected can be obtained by recognizing an image; for example, the straight lines can be identified by their color, such as identifying white or yellow straight lines in the image. [0049] The preset spacing can be set according to the size of the image; for example, one third of the image width may be set as the spacing. Of course, the spacing can also be chosen according to the number of sample points required, and its length can be set so that the selected sample points include the end positions of the straight lines.
[0050] The common point can be set flexibly according to the user's needs: the midpoint of the upper part of the image, the midpoint of the lower part of the image, or the center point of the image may be set as the common point. Depending on how the common point is selected, the corresponding weight vectors of the artificial neural network also differ, and the common point position used during weight vector training is the same as the common point position corresponding to the two straight lines to be detected.
[0051] The distance vector between the sample points and the common point can be obtained by measuring the distance between each sample point and the common point. For the two straight lines in FIG. 2, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 7.7 cm, the lower-left segment 10.5 cm, the upper-right segment 8.5 cm, and the lower-right segment 12 cm, forming the distance vector <7.7, 10.5, 8.5, 12>.
[0052] For the two straight lines shown in FIG. 3, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 2.7 cm, the lower-left segment 8.2 cm, the upper-right segment 6.2 cm, and the lower-right segment 4.3 cm, forming the distance vector <2.7, 8.2, 6.2, 4.3>.
[0053] In step S102, the distance vector is substituted into the preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data.
[0054] Specifically, the weight vector of the artificial neural network in the embodiments of the present invention can be obtained by training on a plurality of preset samples, and the weight vector training method may include the following steps shown in FIG. 5:
[0055] In step S501, a large number of paired lane line samples and unpaired lane line samples are collected, and sample points are selected on the lane line samples according to the spacing;
[0056] In step S502, the distance between the sample points and the common point is calculated;
[0057] In step S503, the distances are substituted into the neural cell layer of the artificial neural network, and the corresponding weight vector of the neural cell layer of the artificial neural network is computed according to whether the samples are paired.
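Steps S501 to S503 can be sketched end to end in Python: chosen points are reduced to distance vectors, and a weight vector is fitted from paired/unpaired labels. The helper name `train_pipeline` and the perceptron-style update are assumptions; the patent does not prescribe a specific update rule:

```python
import math
import random

def train_pipeline(point_sets, labels, common, lr=0.05, epochs=100):
    """S501: point_sets holds the sample points chosen on each lane line sample.
    S502: reduce each set to a distance vector from the common point.
    S503: fit a weight vector so paired samples (label 1) are output as 1."""
    vecs = [[math.hypot(x - common[0], y - common[1]) for x, y in pts]
            for pts in point_sets]
    random.seed(0)  # weights start as random decimals in [-1, 1]
    w = [random.uniform(-1, 1) for _ in range(len(vecs[0]))]
    for _ in range(epochs):
        for v, target in zip(vecs, labels):
            out = 1 if sum(a * b for a, b in zip(v, w)) >= 0 else 0
            w = [wi + lr * (target - out) * vi for wi, vi in zip(w, v)]
    return w

# One hypothetical paired sample: four points, common point at the origin
weights = train_pipeline([[(3, 4), (6, 8), (5, 12), (9, 12)]], [1], (0, 0))
```

A real training set would mix many paired and unpaired samples, as step S501 describes.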
[0058] Specifically, the artificial neural network may include a neural cell layer and an output layer, and a hidden layer may also be included between the neural cell layer and the output layer. FIG. 4 is a schematic structural diagram of an artificial neural network provided by an embodiment of the present invention. As shown in FIG. 4, the artificial neural network includes input layer nodes X1, X2, X3, and X4 and neural cells Y1, Y2, Z1, and Z2, where Y1 and Y2 constitute a hidden layer and Z1 and Z2 constitute the output layer.
[0059] The number of inputs of the input layer is set according to the number of components of the input vector. For example, in FIG. 5a, FIG. 5b, and FIG. 5c there are four sample points, so the corresponding input layer has four inputs.
[0060] Suppose FIG. 5a, FIG. 5b, and FIG. 5c are three of a large number of paired lane line samples, and their distance vectors are, in order:
[0061] For the two straight lines shown in FIG. 5a, two sample points are selected on each line (one possible embodiment), four in total, and the common point is the center of the image; the distance of the upper-left segment is 5 cm, the lower-left segment 14 cm, the upper-right segment 11 cm, and the lower-right segment 9.5 cm, forming the distance vector <5, 14, 11, 9.5>.
[0062] For the two straight lines shown in FIG. 5b, the four distances are 5 cm (upper-left), 6.5 cm (lower-left), 10 cm (upper-right), and 16 cm (lower-right), forming the distance vector <5, 6.5, 10, 16>.
[0063] For the two straight lines shown in FIG. 5c, the four distances are 7 cm (upper-left), 11 cm (lower-left), 7.7 cm (upper-right), and 12 cm (lower-right), forming the distance vector <7, 11, 7.7, 12>.
[0064] The output results of FIG. 5a, FIG. 5b, and FIG. 5c are all "paired lane lines". In the artificial neural network of FIG. 4, W11, W13, W15, and W17 are the weights corresponding to the four inputs of neural cell Y1; W12, W14, W16, and W18 are the weights corresponding to the four inputs of neural cell Y2; W21 and W23 are the weights corresponding to the two inputs of neural cell Z1; and W22 and W24 are the weights corresponding to the two inputs of neural cell Z2.
[0065] The excitation value of a neural cell is the sum of the products of its inputs and weights; for example, in FIG. 4: the Y1 excitation value = X1*W11 + X2*W13 + X3*W15 + X4*W17. [0066] During training, an excitation function can be set for the neural cells, for example: if the excitation value of the output layer exceeds a certain threshold, output 1; otherwise output 0. In this example, the excitation function of hidden cells Y1 and Y2 is set to: output = excitation value.
[0067] The inputs of neural cells Z1 and Z2 are the outputs of Y1 and Y2, i.e., the Z1 excitation value = Y1*W21 + Y2*W23. In the present invention, the excitation function for Z1 and Z2 can be: if (excitation value >= 0), output 1; otherwise output 0.
[0068] According to the above excitation functions, each weight of the artificial neural network can be determined; this is the training of the artificial neural network.
[0069] In the present invention, each weight of the artificial neural network is initialized to a random decimal in [-1, 1]; the samples in the training set are then input into the network one by one and the weights are adjusted so that all "positive" samples produce an output of 1 at Z1 and 0 at Z2, and all "negative" samples produce an output of 0 at Z1 and 1 at Z2. The three vectors in FIG. 5a, FIG. 5b, and FIG. 5c are input one by one as positive samples, training is repeated, and the weights are adjusted; of course, negative samples can also be input for training. The final weight vector of neural cell Y1 is <0.8, -0.2, 0.65, -0.3>, the input weight vector of Y2 is <0.7, -0.3, 0.9, -0.4>, the input weight vector of Z1 is <1, -1>, and the input weight vector of Z2 is <-1, 1>.
[0070] In step S103, whether the two straight lines are paired lane lines is determined according to the excitation value output by the artificial neural network.
[0071] The distance vectors obtained from FIG. 2 and FIG. 3 are substituted into the artificial neural network, and the excitation values of the output layer can be calculated:
[0072] For FIG. 2, the input vector is <7.7, 10.5, 8.5, 12>; with the Y1 weight vector <0.8, -0.2, 0.65, -0.3>, the Y1 excitation value = 5.985; with the Y2 weight vector <0.7, -0.3, 0.9, -0.4>, the Y2 excitation value = 5.09.
[0073] The excitation functions of Y1 and Y2 are set to "output = excitation value", so the Z1 excitation value = <5.985, 5.09>*<1, -1> = 0.895, and the Z2 excitation value = <5.985, 5.09>*<-1, 1> = -0.895.
[0074] The excitation functions of Z1 and Z2 are set to "if (excitation value >= 0), output 1; otherwise output 0", so Z1 outputs 1 and Z2 outputs 0, yielding the judgment that the two straight lines are paired lane lines.
[0075] For the two lines of FIG. 3, the obtained vector is <2.7, 8.2, 6.2, 4.3>, and the weight vector corresponding to Y1 is <0.8, -0.2, 0.65, -0.3>, giving a Y1 excitation value = 3.26; the Y2 weight vector is <0.7, -0.3, 0.9, -0.4>, giving a Y2 excitation value = 3.29.
[0076] The Z1 excitation value = <3.26, 3.29>*<1, -1> = -0.03; the Z2 excitation value = <3.26, 3.29>*<-1, 1> = 0.03. Z1 outputs 0 and Z2 outputs 1, yielding the judgment that the two straight lines are not paired lane lines.
[0077] The present invention acquires two straight lines to be detected, selects sample points on the two straight lines at a preset spacing, obtains the distance vector between the sample points and the common point, and substitutes the distance vector into a pre-trained artificial neural network to obtain the excitation value output by the network, from which it is judged whether the two straight lines are paired lane lines. With the method of the present invention, simply substituting the acquired distance data into the artificial neural network quickly determines whether two lines are paired lane lines, effectively guaranteeing real-time judgment of paired lane lines while improving judgment accuracy.
[0078] FIG. 6 is a schematic structural diagram of the device for detecting paired lane lines provided by an embodiment of the present invention, detailed as follows:
[0079] The device for detecting paired lane lines in the embodiment of the present invention includes:
[0080] a lane line acquiring unit 601 configured to acquire two straight lines to be detected, select sample points on the two straight lines at a preset spacing, and obtain the distance vector between the sample points and a predetermined common point;
[0081] a calculating unit 602 configured to substitute the distance vector into a preset artificial neural network to calculate an excitation value, wherein the weight vector of the artificial neural network is trained from pre-collected paired lane line sample data;
[0082] a determining unit 603 configured to determine, according to the excitation value output by the artificial neural network, whether the two straight lines are paired lane lines.
[0083] Preferably, the lane line acquiring unit includes:
[0084] a sample point selection subunit configured to select sample points on the two straight lines at a preset spacing;
[0085] a common point acquisition subunit configured to take the center point of the image as the common point and obtain the distance vector between the sample points and the common point.
[0086] Preferably, the device further includes:
[0087] a sample collection unit configured to collect a large number of paired lane line samples and unpaired lane line samples, and select sample points on the lane line samples according to the spacing;
[0088] a distance calculation unit configured to calculate the distance between the sample points and the common point;
[0089] a weight vector calculation unit configured to substitute the distances into the neural cell layer of the artificial neural network and compute the corresponding weight vector of the neural cell layer according to whether the samples are paired.
[0090] Preferably, the sample points include N sample points selected on each lane line, N being a natural number greater than or equal to 2.
[0091] Preferably, the determining unit includes:
[0092] a comparing subunit configured to acquire the excitation value output by the artificial neural network and compare the excitation value with a preset threshold;
[0093] a paired lane line determining subunit configured to determine that the two straight lines are paired lane lines if the excitation value is greater than the threshold, and that the two straight lines are not paired lane lines if the excitation value is less than the threshold.
[0094] The device for detecting paired lane lines in the embodiment of the present invention corresponds to the method for detecting paired lane lines described above, and is not described again here.
[0095] In the several embodiments provided by the present invention, it should be understood that the disclosed device and method can be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there can be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through interfaces, devices, or units, and can be electrical, mechanical, or in other forms.
[0096] The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they can be located in one place or distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
[0097] In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
[0098] If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
[0099] The above is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims

1. A method for detecting paired lane lines, characterized in that the method comprises:
obtaining two straight lines to be detected, selecting sample points on the two straight lines respectively according to a preset spacing, and obtaining distance vectors between the sample points and a predetermined common point;
substituting the distance vectors into a preset artificial neural network to compute an activation value, wherein the weight vector of the artificial neural network is obtained by training on previously collected paired-lane-line sample data; and
judging, from the activation value output by the artificial neural network, whether the two straight lines form a pair of lane lines.
2. The method according to claim 1, characterized in that the step of selecting sample points on the two straight lines according to the preset spacing and obtaining the distance vectors between the sample points and the predetermined common point comprises:
selecting sample points on the two straight lines respectively according to the preset spacing; and
taking the center point of the image as the common point and obtaining the distance vectors between the sample points and the common point.
3. The method according to claim 1, characterized in that, before the step of substituting the distance vectors into the preset artificial neural network to compute the activation value, the method further comprises:
collecting a large number of paired and unpaired lane line samples, and selecting sample points on the lane line samples according to the spacing;
computing the distances between the sample points and the common point; and
substituting the distances into the neuron layer of the artificial neural network and, according to whether each sample is paired, computing the corresponding weight vector of that neuron layer.
4. The method according to claim 1, characterized in that the sample points comprise N sample points selected on each lane line, where N is a natural number greater than or equal to 2.
5. The method according to claim 1, characterized in that the step of judging, from the activation value output by the artificial neural network, whether the two straight lines form a pair of lane lines comprises:
obtaining the activation value output by the artificial neural network and comparing the activation value with a preset threshold; and
determining that the two straight lines are a pair of lane lines if the activation value is greater than the threshold, and that the two straight lines are not a pair of lane lines if the activation value is less than the threshold.
6. An apparatus for detecting paired lane lines, characterized in that the apparatus comprises:
a lane line acquisition unit, configured to obtain two straight lines to be detected, select sample points on the two straight lines respectively according to a preset spacing, and obtain distance vectors between the sample points and a predetermined common point;
a computing unit, configured to substitute the distance vectors into a preset artificial neural network to compute an activation value, wherein the weight vector of the artificial neural network is obtained by training on previously collected paired-lane-line sample data; and
a judging unit, configured to judge, from the activation value output by the artificial neural network, whether the two straight lines form a pair of lane lines.
7. The apparatus according to claim 6, characterized in that the lane line acquisition unit comprises:
a sample point selection subunit, configured to select sample points on the two straight lines respectively according to the preset spacing; and
a common point acquisition subunit, configured to take the center point of the image as the common point and obtain the distance vectors between the sample points and the common point.
8. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a sample collection unit, configured to collect a large number of paired and unpaired lane line samples and select sample points on the lane line samples according to the spacing;
a distance computing unit, configured to compute the distances between the sample points and the common point; and
a weight vector computing unit, configured to substitute the distances into the neuron layer of the artificial neural network and, according to whether each sample is paired, compute the corresponding weight vector of that neuron layer.
9. The apparatus according to claim 6, characterized in that the sample points comprise N sample points selected on each lane line, where N is a natural number greater than or equal to 2.
10. The apparatus according to claim 6, characterized in that the judging unit comprises:
a comparison subunit, configured to obtain the activation value output by the artificial neural network and compare the activation value with a preset threshold; and
a paired-lane-line determination subunit, configured to determine that the two straight lines are a pair of lane lines if the activation value is greater than the threshold, and that the two straight lines are not a pair of lane lines if the activation value is less than the threshold.
PCT/CN2016/096761 2016-08-25 2016-08-25 Method and device for detecting paired lane lines WO2018035815A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/096761 WO2018035815A1 (zh) 2016-08-25 2016-08-25 Method and device for detecting paired lane lines
CN201680000880.XA CN106415602B (zh) 2016-08-25 2016-08-25 Method and device for detecting paired lane lines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/096761 WO2018035815A1 (zh) 2016-08-25 2016-08-25 Method and device for detecting paired lane lines

Publications (1)

Publication Number Publication Date
WO2018035815A1 true WO2018035815A1 (zh) 2018-03-01

Family

ID=58087907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096761 WO2018035815A1 (zh) 2016-08-25 2016-08-25 一种成对车道线的检测方法和装置

Country Status (2)

Country Link
CN (1) CN106415602B (zh)
WO (1) WO2018035815A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462755B (zh) * 2016-09-26 2019-05-28 深圳市锐明技术股份有限公司 Paired lane line detection method and device
CN106778791A (zh) * 2017-03-01 2017-05-31 成都天衡电科科技有限公司 Wood visual recognition method based on multiple perceptrons

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063877A (zh) * 2014-07-16 2014-09-24 中电海康集团有限公司 Hybrid judgment and recognition method for candidate lane lines
CN104102905A (zh) * 2014-07-16 2014-10-15 中电海康集团有限公司 Adaptive lane line detection method
CN105046235A (zh) * 2015-08-03 2015-11-11 百度在线网络技术(北京)有限公司 Lane line recognition modeling method and device, and recognition method and device
US20160012300A1 (en) * 2014-07-11 2016-01-14 Denso Corporation Lane boundary line recognition device
CN105260713A (zh) * 2015-10-09 2016-01-20 东方网力科技股份有限公司 Lane line detection method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2848935B1 (fr) * 2002-12-20 2005-04-29 Valeo Vision Method for detecting bends on a road and system for implementing same
CN102201167B (zh) * 2010-04-07 2013-03-06 宫宁生 Video-based automatic vehicle lane recognition method
KR101261409B1 (ko) * 2012-04-24 2013-05-10 이엔지정보기술 주식회사 System for recognizing road surface markings in images
CN102750825B (zh) * 2012-06-19 2014-07-23 银江股份有限公司 Urban road traffic state detection method based on cascaded fusion of neural network classifiers
CN105069415B (zh) * 2015-07-24 2018-09-11 深圳市佳信捷技术股份有限公司 Lane line detection method and device
CN105260699B (zh) * 2015-09-10 2018-06-26 百度在线网络技术(北京)有限公司 Lane line data processing method and device
CN105608429B (zh) * 2015-12-21 2019-05-14 重庆大学 Robust lane line detection method based on differential excitation
CN105718916A (zh) * 2016-01-27 2016-06-29 大连楼兰科技股份有限公司 Lane line detection method based on Hough transform


Also Published As

Publication number Publication date
CN106415602A (zh) 2017-02-15
CN106415602B (zh) 2019-12-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16913837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16913837

Country of ref document: EP

Kind code of ref document: A1