WO2019047643A1 - Control method and apparatus for unmanned vehicle (用于无人车的控制方法和装置) - Google Patents

Control method and apparatus for unmanned vehicle (用于无人车的控制方法和装置)

Info

Publication number
WO2019047643A1
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle avoidance
unmanned vehicle
parameter
learning model
sensors
Prior art date
Application number
PCT/CN2018/098630
Other languages
English (en)
French (fr)
Inventor
郑超
郁浩
闫泳杉
唐坤
张云飞
姜雨
Original Assignee
百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology (Beijing) Co., Ltd. (百度在线网络技术(北京)有限公司)
Publication of WO2019047643A1 publication Critical patent/WO2019047643A1/zh

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar

Definitions

  • The present application relates to the field of computer technology, specifically to the field of Internet technology, and in particular to a control method and apparatus for an unmanned vehicle.
  • An unmanned vehicle is a type of intelligent car, also known as a wheeled mobile robot. It relies mainly on an in-vehicle intelligent driving system centered on a computer system to achieve driverless operation.
  • While driving, the unmanned vehicle can detect road surface conditions through its sensors.
  • In the prior art, however, a single sensor is used for detection; the detection result is easily affected by the surrounding environment, and stability is poor.
  • The purpose of the present application is to propose an improved control method and apparatus for an unmanned vehicle to solve the technical problems mentioned in the Background section above.
  • In a first aspect, an embodiment of the present application provides a control method for an unmanned vehicle equipped with at least two sensors, the method comprising: acquiring data collected by the at least two sensors; inputting the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and acquiring the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  • In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  • In some embodiments, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
  • In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
  • In some embodiments, before acquiring the data collected by the at least two sensors, the method further comprises: acquiring data collected by the at least two sensors and acquiring the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and using the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model to train the model.
  • In a second aspect, an embodiment of the present application provides a control apparatus for an unmanned vehicle equipped with at least two sensors, the apparatus comprising: an acquisition unit configured to acquire data collected by the at least two sensors; an input unit configured to input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and a control unit configured to acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  • In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  • In some embodiments, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
  • In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
  • In some embodiments, the apparatus further includes: a parameter acquisition unit configured to acquire data collected by the at least two sensors and acquire the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and a training unit configured to use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model to train the model.
  • In a third aspect, an embodiment of the present application provides an unmanned vehicle, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the control method for an unmanned vehicle.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the control method for an unmanned vehicle.
  • In the control method and apparatus for an unmanned vehicle provided by the embodiments of the present application, the unmanned vehicle is equipped with at least two sensors. The method first acquires data collected by the at least two sensors. The acquired data is then input into a pre-trained obstacle avoidance deep learning model, which characterizes the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle. Finally, the obstacle avoidance parameters output by the model are acquired and used to control the unmanned vehicle. The embodiments thus obtain obstacle avoidance parameters through the obstacle avoidance deep learning model and control the driving of the unmanned vehicle, as illustrated by the sketch below.
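  • As an illustration only, the following minimal Python sketch shows the control loop just summarized. Every name in it (read methods, `ObstacleAvoidanceModel`-style `model`, the `vehicle` controller) is a hypothetical stand-in; the application discloses the flow, not a concrete API.

```python
# Minimal sketch of the described flow: sensor data in, obstacle
# avoidance parameters out, vehicle controlled accordingly.
# All class and method names are hypothetical placeholders.

def control_step(sensors, model, vehicle):
    # 1. Acquire data collected by at least two sensors.
    data = {name: sensor.read() for name, sensor in sensors.items()}

    # 2. Feed the data to the pre-trained obstacle avoidance deep
    #    learning model, which maps sensor data to avoidance parameters.
    params = model.predict(data)  # e.g. {"brake": 0.4, "steer": -2.0}

    # 3. Control the unmanned vehicle based on the returned parameters.
    vehicle.apply_brake(params["brake"])
    vehicle.apply_steering(params["steer"])
```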
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of one embodiment of a control method for an unmanned vehicle according to the present application;
  • FIG. 3 is a schematic diagram of an application scenario of a control method for an unmanned vehicle according to the present application;
  • FIG. 4 is a flowchart of yet another embodiment of a control method for an unmanned vehicle according to the present application;
  • FIG. 5 is a schematic structural diagram of one embodiment of a control apparatus for an unmanned vehicle according to the present application;
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an unmanned vehicle of an embodiment of the present application.
  • FIG. 1 shows an exemplary system architecture 100 to which embodiments of the control method or control apparatus for an unmanned vehicle of the present application can be applied.
  • The system architecture 100 may include an unmanned vehicle 101, a network 102, and a server 103.
  • The network 102 serves as the medium providing a communication link between the unmanned vehicle 101 and the server 103.
  • The network 102 may include various connection types, such as wired or wireless communication links or fiber-optic cables.
  • A user may use the unmanned vehicle 101 to interact with the server 103 via the network 102, to receive or send messages and the like.
  • Various communication client applications can be installed on the unmanned vehicle 101.
  • The unmanned vehicle 101 may be any of various electronic devices that support image acquisition and are capable of image processing; it may be an unmanned vehicle or the like.
  • The server 103 may be a server that provides various services.
  • The server 103 may perform processing such as analysis and feed the processing results back to the unmanned vehicle.
  • The image processing method for an unmanned vehicle provided by the embodiments of the present application is generally performed by the unmanned vehicle 101; accordingly, the image processing apparatus for the unmanned vehicle is generally disposed in the unmanned vehicle 101.
  • The numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
  • Referring to FIG. 2, a flow 200 of one embodiment of the control method for an unmanned vehicle includes the following steps:
  • Step 201: acquire data collected by at least two sensors.
  • In this embodiment, the unmanned vehicle is equipped with at least two sensors, and the unmanned vehicle on which the control method runs can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection.
  • Here, the number of each kind of sensor may be one, two, or more.
  • In some embodiments, the at least two sensors may include a camera, a lidar, and a millimeter-wave radar.
  • Specifically, the camera can collect image data or video stream data.
  • The lidar uses laser light for detection, and the returned data it collects are laser signals.
  • The millimeter-wave radar uses millimeter waves for detection, and the data it collects are millimeter-wave signals. One way such a synchronized frame of heterogeneous data might be packaged is sketched below.
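  • As a sketch of what the three data sources might look like in code, the container below groups one synchronized frame of readings. The field shapes are assumptions made for illustration; the application does not specify data layouts.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container for one synchronized frame of sensor data.
# Shapes are illustrative assumptions, not values from the application.
@dataclass
class SensorFrame:
    camera_image: np.ndarray   # e.g. (H, W, 3) RGB frame from the camera
    lidar_points: np.ndarray   # e.g. (N, 4) laser returns: x, y, z, intensity
    radar_targets: np.ndarray  # e.g. (M, 3) millimeter-wave returns:
                               # range, relative speed, azimuth
```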
  • Step 202: input the acquired data into the pre-trained obstacle avoidance deep learning model.
  • In this embodiment, the unmanned vehicle can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data.
  • The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle.
  • The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles.
  • Here, the obstacle avoidance parameters are data that the unmanned vehicle adopts immediately.
  • The obstacle avoidance deep learning model may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayesian Model (NBM).
  • The model described above may also be pre-trained based on certain classification functions (for example, the softmax function).
  • In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
  • A model trained end-to-end can take the data collected by the sensors as input and output the obstacle avoidance parameters the unmanned vehicle is to adopt.
  • The obstacle avoidance deep learning model is a deep neural network that can generate the vehicle's obstacle avoidance parameters directly from the collected data. Specifically, whatever kinds of data are used as the input and output during training, the corresponding output data can be obtained from that kind of input data when the model is applied.
  • In some embodiments, the obstacle avoidance deep learning model includes:
  • a feature extraction component, which extracts image features of the image collected by the camera, extracts first data features of the data collected by the lidar, and extracts second data features of the data collected by the millimeter-wave radar.
  • Feature extraction may be performed as follows: extract image features from the image collected by the camera, extract the first data features from the laser data collected by the lidar, and extract the second data features from the millimeter-wave data collected by the millimeter-wave radar. One possible realization of such a fusion model is sketched below.
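  • The sketch below shows one way such a per-modality feature extractor with a shared fusion head could be written in PyTorch. The branch architectures and layer sizes are our assumptions; the application states only that image features, first (lidar) data features, and second (millimeter-wave) data features are extracted and mapped to obstacle avoidance parameters.

```python
import torch
import torch.nn as nn

# Sketch of a possible end-to-end fusion network. Layer choices and
# sizes are assumptions; the application only requires per-modality
# feature extraction followed by a mapping to avoidance parameters.
class ObstacleAvoidanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Image branch: a small CNN over camera frames.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Lidar branch over a flattened vector of laser-return features
        # ("first data features"); 1024 inputs is an assumed size.
        self.lidar_branch = nn.Sequential(nn.Linear(1024, 64), nn.ReLU())
        # Radar branch over millimeter-wave features ("second data
        # features"); 128 inputs is likewise assumed.
        self.radar_branch = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
        # Fusion head: concatenated features -> [brake, steering].
        self.head = nn.Sequential(
            nn.Linear(32 + 64 + 32, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, image, lidar, radar):
        fused = torch.cat(
            [self.image_branch(image),
             self.lidar_branch(lidar),
             self.radar_branch(radar)], dim=1)
        return self.head(fused)  # (batch, 2): brake, steering parameters
```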
  • Step 203: acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  • In this embodiment, the unmanned vehicle acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, and controls the unmanned vehicle based on those parameters.
  • In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  • The braking parameter is the parameter adopted for braking; it may include the magnitude of the unmanned vehicle's deceleration and may also include the direction of the deceleration. In general, the direction of deceleration of an unmanned vehicle is opposite to the direction of travel.
  • The driving speed can be controlled through the braking parameter.
  • The steering parameter is a parameter for steering the unmanned vehicle and may be a steering angle, such as a steering wheel angle.
  • The driving direction can be controlled through the steering parameter.
  • The unmanned vehicle can perform an obstacle avoidance operation by limiting or adjusting either of the above two obstacle avoidance parameters, as the sketch below illustrates.
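  • The snippet below illustrates limiting and applying the two parameter types. The clamping limits and the controller interface are assumptions made for the sketch, not values disclosed in the application.

```python
# Hypothetical limits and controller interface, for illustration only.
MAX_DECEL_MPS2 = 8.0   # assumed physical braking limit, m/s^2
MAX_STEER_DEG = 30.0   # assumed steering wheel angle limit, degrees

def apply_avoidance(vehicle, brake_mps2=None, steer_deg=None):
    if brake_mps2 is not None:
        # Deceleration acts opposite to the direction of travel,
        # reducing driving speed.
        vehicle.set_deceleration(min(max(brake_mps2, 0.0), MAX_DECEL_MPS2))
    if steer_deg is not None:
        # The steering angle controls the driving direction.
        vehicle.set_steering_angle(
            min(max(steer_deg, -MAX_STEER_DEG), MAX_STEER_DEG))
```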
  • FIG. 3 is a schematic diagram of an application scenario of the control method for an unmanned vehicle according to this embodiment.
  • In this scenario, the unmanned vehicle 301 acquires data 302 collected by at least two sensors installed on the vehicle. The unmanned vehicle then inputs the acquired data into the pre-trained obstacle avoidance deep learning model, which is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle.
  • The obstacle avoidance parameters 303 of the unmanned vehicle output by the obstacle avoidance deep learning model are acquired, so as to control the unmanned vehicle 304 based on the obstacle avoidance parameters.
  • The method provided by the above embodiment of the present application obtains obstacle avoidance parameters through the obstacle avoidance deep learning model and thereby achieves control over driving.
  • Referring further to FIG. 4, a flow 400 of yet another embodiment of the control method for an unmanned vehicle includes the following steps:
  • Step 401: acquire data collected by at least two sensors, and acquire the current obstacle avoidance parameters of the unmanned vehicle.
  • In this embodiment, the model can be trained before it is applied.
  • The unmanned vehicle can acquire data collected by the at least two sensors and acquire the current obstacle avoidance parameters.
  • The obstacle avoidance parameters above are generated by the user's driving behavior, that is, by the braking and other actions the user takes while driving the unmanned vehicle.
  • The unmanned vehicle here may be a designated unmanned vehicle or any unmanned vehicle used for model training. The at least two sensors here are installed on this unmanned vehicle.
  • Step 402: use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the model.
  • In this embodiment, the unmanned vehicle uses the data acquired in step 401 and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model to train it; a sketch of such a training step follows.
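  • A minimal sketch of this training step, in the style of behavior cloning and reusing the hypothetical ObstacleAvoidanceNet sketched earlier, is given below. It assumes a data loader yielding logged sensor data together with the avoidance parameters the human driver actually produced; the optimizer, loss, and hyperparameters are our assumptions.

```python
import torch
import torch.nn as nn

# Sketch of end-to-end training: logged sensor data is the model input,
# and the driver-generated obstacle avoidance parameters are the target.
# Hyperparameters and the dataset layout are assumptions.
def train(model, loader, epochs=10, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regression onto [brake, steering]
    model.train()
    for _ in range(epochs):
        for image, lidar, radar, driver_params in loader:
            pred = model(image, lidar, radar)    # model's parameters
            loss = loss_fn(pred, driver_params)  # match the driver's
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```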
  • Step 403: acquire data collected by at least two sensors.
  • In this embodiment, the unmanned vehicle is equipped with at least two sensors, and the unmanned vehicle on which the control method runs can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection.
  • Here, the number of each kind of sensor may be one, two, or more.
  • In some embodiments, the at least two sensors may include a camera, a lidar, and a millimeter-wave radar.
  • Specifically, the camera can collect image data or video stream data.
  • The lidar uses laser light for detection, and the returned data it collects are laser signals.
  • The millimeter-wave radar uses millimeter waves for detection, and the data it collects are millimeter-wave signals.
  • Step 404: input the acquired data into the pre-trained obstacle avoidance deep learning model.
  • In this embodiment, the unmanned vehicle can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data.
  • The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle.
  • The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles.
  • Here, the obstacle avoidance parameters are data that the unmanned vehicle adopts immediately.
  • The obstacle avoidance deep learning model may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayesian Model (NBM).
  • The model described above may also be pre-trained based on certain classification functions (for example, the softmax function).
  • In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
  • A model trained end-to-end can take the data collected by the sensors as input and output the obstacle avoidance parameters the unmanned vehicle is to adopt.
  • The obstacle avoidance deep learning model is a deep neural network that can generate the vehicle's obstacle avoidance parameters directly from the collected data. Specifically, whatever kinds of data are used as the input and output during training, the corresponding output data can be obtained from that kind of input data when the model is applied.
  • Step 405: acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  • In this embodiment, the unmanned vehicle acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, and controls the unmanned vehicle based on those parameters.
  • In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  • The braking parameter is the parameter adopted for braking; it may include the magnitude of the unmanned vehicle's deceleration and may also include the direction of the deceleration. In general, the direction of deceleration of an unmanned vehicle is opposite to the direction of travel.
  • The steering parameter is a parameter for steering the unmanned vehicle and may be a steering angle, such as a steering wheel angle. The unmanned vehicle can perform an obstacle avoidance operation by limiting or adjusting either of the above two obstacle avoidance parameters.
  • By training the obstacle avoidance deep learning model end-to-end, this embodiment can accurately obtain the obstacle avoidance parameters when the model is applied.
  • As an implementation of the methods shown in the above figures, the present application provides an embodiment of a control apparatus for an unmanned vehicle; this apparatus embodiment corresponds to the method embodiment shown in FIG. 2.
  • The apparatus can be specifically applied to various electronic devices.
  • The control apparatus 500 for an unmanned vehicle of this embodiment includes an acquisition unit 501, an input unit 502, and a control unit 503.
  • The acquisition unit 501 is configured to acquire data collected by at least two sensors.
  • The input unit 502 is configured to input the acquired data into the pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle.
  • The control unit 503 is configured to acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  • The acquisition unit 501 can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection.
  • Here, the number of each kind of sensor may be one, two, or more.
  • The input unit 502 can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data.
  • The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle.
  • The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles.
  • Here, the obstacle avoidance parameters are data that the unmanned vehicle adopts immediately.
  • The control unit 503 acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, and controls the unmanned vehicle based on those parameters.
  • In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  • In some embodiments, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
  • In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
  • In some embodiments, the apparatus further includes: a parameter acquisition unit configured to acquire data collected by the at least two sensors and acquire the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and a training unit configured to use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model to train the model.
  • Referring to FIG. 6, a block diagram of a computer system 600 suitable for implementing an unmanned vehicle of an embodiment of the present application is shown.
  • The unmanned vehicle shown in FIG. 6 is merely an example and should not impose any limitation on the function or scope of use of the embodiments of the present application.
  • The computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in read-only memory (ROM) 602 or a program loaded from the storage portion 608 into random access memory (RAM) 603.
  • The RAM 603 also stores various programs and data required for the operation of the system 600.
  • The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet.
  • A drive 610 is also connected to the I/O interface 605 as needed.
  • A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
  • An embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method illustrated in the flowchart.
  • The computer program can be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611.
  • When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed.
  • The computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • A computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, fiber-optic cable, RF, or any suitable combination of the foregoing.
  • Each block of the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function.
  • In some alternative implementations, the functions noted in the blocks may occur in a different order than noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments of the present application may be implemented by software or by hardware.
  • The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, an input unit, and a control unit.
  • The names of these units do not, in some cases, limit the units themselves.
  • For example, the acquisition unit may also be described as "a unit that acquires data collected by at least two sensors".
  • The present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus.
  • The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire data collected by at least two sensors; input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A control method and apparatus for an unmanned vehicle (101). The control method comprises: acquiring data collected by at least two sensors (201); inputting the acquired data into a pre-trained obstacle avoidance deep learning model (202), wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle (101); and acquiring the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle (101) based on the obstacle avoidance parameters (203). Using at least two sensors improves detection stability and enables control over the driving of the unmanned vehicle (101).

Description

Control method and apparatus for unmanned vehicle
This patent application claims priority to Chinese Patent Application No. 201710791661.4, filed on September 5, 2017 by Baidu Online Network Technology (Beijing) Co., Ltd. (百度在线网络技术(北京)有限公司) under the title "Control Method and Apparatus for Unmanned Vehicle" (用于无人车的控制方法和装置), the entirety of which is incorporated into the present application by reference.
Technical Field
The present application relates to the field of computer technology, specifically to the field of Internet technology, and in particular to a control method and apparatus for an unmanned vehicle.
Background
An unmanned vehicle is a type of intelligent car, also known as a wheeled mobile robot; it relies mainly on an in-vehicle intelligent driving system centered on a computer system to achieve driverless operation.
While driving, an unmanned vehicle can detect road surface conditions through sensors. In the prior art, however, a single sensor is used for detection; the detection result is easily affected by the surrounding environment, and stability is poor.
Summary of the Invention
The purpose of the present application is to propose an improved control method and apparatus for an unmanned vehicle to solve the technical problems mentioned in the Background section above.
In a first aspect, an embodiment of the present application provides a control method for an unmanned vehicle, wherein the unmanned vehicle is equipped with at least two sensors, the method comprising: acquiring data collected by the at least two sensors; inputting the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and acquiring the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
In some embodiments, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
In some embodiments, before the acquiring of the data collected by the at least two sensors, the method further comprises: acquiring data collected by the at least two sensors and acquiring the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and using the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
In a second aspect, an embodiment of the present application provides a control apparatus for an unmanned vehicle, wherein the unmanned vehicle is equipped with at least two sensors, the apparatus comprising: an acquisition unit configured to acquire data collected by the at least two sensors; an input unit configured to input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and a control unit configured to acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In some embodiments, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
In some embodiments, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
In some embodiments, the obstacle avoidance deep learning model is trained in an end-to-end manner.
In some embodiments, the apparatus further includes: a parameter acquisition unit configured to acquire data collected by the at least two sensors and to acquire the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and a training unit configured to use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
In a third aspect, an embodiment of the present application provides an unmanned vehicle, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the control method for an unmanned vehicle.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the control method for an unmanned vehicle.
In the control method and apparatus for an unmanned vehicle provided by the embodiments of the present application, the unmanned vehicle is equipped with at least two sensors. The method first acquires data collected by the at least two sensors; the acquired data is then input into a pre-trained obstacle avoidance deep learning model, which is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; finally, the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model are acquired, so as to control the unmanned vehicle based on the obstacle avoidance parameters. The embodiments of the present application thus obtain obstacle avoidance parameters through the obstacle avoidance deep learning model and achieve control over the driving of the unmanned vehicle.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
FIG. 2 is a flowchart of one embodiment of a control method for an unmanned vehicle according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a control method for an unmanned vehicle according to the present application;
FIG. 4 is a flowchart of yet another embodiment of a control method for an unmanned vehicle according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of a control apparatus for an unmanned vehicle according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an unmanned vehicle of an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the control method or control apparatus for an unmanned vehicle of the present application can be applied.
As shown in FIG. 1, the system architecture 100 may include an unmanned vehicle 101, a network 102, and a server 103. The network 102 serves as the medium providing a communication link between the unmanned vehicle 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the unmanned vehicle 101 to interact with the server 103 via the network 102, to receive or send messages and the like. Various communication client applications may be installed on the unmanned vehicle 101.
The unmanned vehicle 101 may be any of various electronic devices that support image acquisition and are capable of image processing; it may be an unmanned vehicle or the like.
The server 103 may be a server that provides various services. The server 103 may perform processing such as analysis and feed the processing results back to the unmanned vehicle.
It should be noted that the image processing method for an unmanned vehicle provided by the embodiments of the present application is generally performed by the unmanned vehicle 101; accordingly, the image processing apparatus for the unmanned vehicle is generally disposed in the unmanned vehicle 101.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to FIG. 2, a flow 200 of one embodiment of a control method for an unmanned vehicle according to the present application is shown. The control method for an unmanned vehicle includes the following steps:
Step 201: acquire data collected by at least two sensors.
In this embodiment, the unmanned vehicle is equipped with at least two sensors, and the unmanned vehicle on which the control method runs can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection. Here, the number of each kind of sensor may be one, two, or more.
In some optional implementations of this embodiment, the at least two sensors may include a camera, a lidar, and a millimeter-wave radar.
Specifically, the camera can collect image data or video stream data. The lidar uses laser light for detection, and the returned data it collects are laser signals. The millimeter-wave radar uses millimeter waves for detection, and the returned data it collects are millimeter-wave signals.
Step 202: input the acquired data into the pre-trained obstacle avoidance deep learning model.
In this embodiment, the unmanned vehicle can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data. The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle. The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles; here, they are data that the unmanned vehicle adopts immediately.
The obstacle avoidance deep learning model may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayesian Model (NBM). In addition, the model described above may also be pre-trained based on certain classification functions (for example, the softmax function).
In some optional implementations of this embodiment, the obstacle avoidance deep learning model is trained in an end-to-end manner.
In this embodiment, a model trained end-to-end can take the data collected by the sensors as input and output the obstacle avoidance parameters the unmanned vehicle is to adopt.
The obstacle avoidance deep learning model is a deep neural network that can generate the vehicle's obstacle avoidance parameters directly from the collected data. Specifically, whatever kinds of data are used as the input and output during training of the model, the corresponding output data can be obtained from that kind of input data when the model is applied.
In some optional implementations of this embodiment, the obstacle avoidance deep learning model includes:
a feature extraction component, which extracts image features of the image collected by the camera, extracts first data features of the data collected by the lidar, and extracts second data features of the data collected by the millimeter-wave radar.
In this embodiment, feature extraction may be performed as follows: extract image features from the image collected by the camera, extract the first data features from the laser data collected by the lidar, and extract the second data features from the millimeter-wave data collected by the millimeter-wave radar.
It should be noted that "first" and "second" here do not imply any ranking; they serve only to distinguish the data features.
Step 203: acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In this embodiment, the unmanned vehicle acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In some optional implementations of this embodiment, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
In this embodiment, the braking parameter is the parameter adopted for braking; it may include the magnitude of the unmanned vehicle's deceleration and may also include the direction of the deceleration. Generally, the direction of deceleration of an unmanned vehicle is opposite to the direction of travel. The driving speed can be controlled through the braking parameter. The steering parameter is a parameter for steering the unmanned vehicle and may be a steering angle, such as a steering wheel angle. The driving direction can be controlled through the steering parameter. The unmanned vehicle can perform an obstacle avoidance operation by limiting or adjusting either of the above two obstacle avoidance parameters.
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the control method for an unmanned vehicle according to this embodiment. In the application scenario of FIG. 3, the unmanned vehicle 301 acquires data 302 collected by at least two sensors installed on the vehicle. The unmanned vehicle then inputs the acquired data into the pre-trained obstacle avoidance deep learning model, which is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle. The obstacle avoidance parameters 303 of the unmanned vehicle output by the obstacle avoidance deep learning model are acquired, so as to control the unmanned vehicle 304 based on the obstacle avoidance parameters.
The method provided by the above embodiment of the present application obtains obstacle avoidance parameters through the obstacle avoidance deep learning model and thereby achieves control over driving.
With further reference to FIG. 4, a flow 400 of yet another embodiment of the control method for an unmanned vehicle is shown. The flow 400 of the control method for an unmanned vehicle includes the following steps:
Step 401: acquire data collected by at least two sensors, and acquire the current obstacle avoidance parameters of the unmanned vehicle.
In this embodiment, the model can be trained before it is applied. The unmanned vehicle can acquire data collected by at least two sensors and acquire the current obstacle avoidance parameters. The obstacle avoidance parameters above are generated by the user's driving behavior, that is, by the braking and other actions the user takes while driving the unmanned vehicle. The unmanned vehicle here may be a designated unmanned vehicle or any unmanned vehicle used for model training. The at least two sensors here are installed on this unmanned vehicle.
Step 402: use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
In this embodiment, the unmanned vehicle uses the data acquired in step 401 and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, in order to train the model.
Step 403: acquire data collected by at least two sensors.
In this embodiment, the unmanned vehicle is equipped with at least two sensors, and the unmanned vehicle on which the control method runs can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection. Here, the number of each kind of sensor may be one, two, or more.
In some optional implementations of this embodiment, the at least two sensors may include a camera, a lidar, and a millimeter-wave radar.
Specifically, the camera can collect image data or video stream data. The lidar uses laser light for detection, and the returned data it collects are laser signals. The millimeter-wave radar uses millimeter waves for detection, and the returned data it collects are millimeter-wave signals.
Step 404: input the acquired data into the pre-trained obstacle avoidance deep learning model.
In this embodiment, the unmanned vehicle can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data. The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle. The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles; here, they are data that the unmanned vehicle adopts immediately.
The obstacle avoidance deep learning model may be obtained by training a classifier such as a Support Vector Machine (SVM) or a Naive Bayesian Model (NBM). In addition, the model described above may also be pre-trained based on certain classification functions (for example, the softmax function).
In some optional implementations of this embodiment, the obstacle avoidance deep learning model is trained in an end-to-end manner.
In this embodiment, a model trained end-to-end can take the data collected by the sensors as input and output the obstacle avoidance parameters the unmanned vehicle is to adopt.
The obstacle avoidance deep learning model is a deep neural network that can generate the vehicle's obstacle avoidance parameters directly from the collected data. Specifically, whatever kinds of data are used as the input and output during training of the model, the corresponding output data can be obtained from that kind of input data when the model is applied.
Step 405: acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In this embodiment, the unmanned vehicle acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In some optional implementations of this embodiment, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
In this embodiment, the braking parameter is the parameter adopted for braking; it may include the magnitude of the unmanned vehicle's deceleration and may also include the direction of the deceleration. Generally, the direction of deceleration of an unmanned vehicle is opposite to the direction of travel. The steering parameter is a parameter for steering the unmanned vehicle and may be a steering angle, such as a steering wheel angle. The unmanned vehicle can perform an obstacle avoidance operation by limiting or adjusting either of the above two obstacle avoidance parameters.
By training the obstacle avoidance deep learning model end-to-end, this embodiment can accurately obtain the obstacle avoidance parameters when the model is applied.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a control apparatus for an unmanned vehicle. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 5, the control apparatus 500 for an unmanned vehicle of this embodiment includes an acquisition unit 501, an input unit 502, and a control unit 503. The acquisition unit 501 is configured to acquire data collected by at least two sensors; the input unit 502 is configured to input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and the control unit 503 is configured to acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In this embodiment, the acquisition unit 501 can acquire the data collected by the at least two sensors locally or from other electronic devices through a wired or wireless connection. Here, the number of each kind of sensor may be one, two, or more.
In this embodiment, the input unit 502 can input the acquired data into the pre-trained obstacle avoidance deep learning model so that the model produces output based on the input data. The obstacle avoidance deep learning model is used to characterize the correspondence between the data collected by the sensors and the obstacle avoidance parameters of the unmanned vehicle. The obstacle avoidance parameters are the parameters involved in the unmanned vehicle avoiding obstacles; here, they are data that the unmanned vehicle adopts immediately.
In this embodiment, the control unit 503 acquires the obstacle avoidance parameters that the obstacle avoidance deep learning model outputs based on the input data, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
In some optional implementations of this embodiment, the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
In some optional implementations of this embodiment, the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
In some optional implementations of this embodiment, the obstacle avoidance deep learning model is trained in an end-to-end manner.
In some optional implementations of this embodiment, the apparatus further includes: a parameter acquisition unit configured to acquire data collected by the at least two sensors and to acquire the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and a training unit configured to use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
Referring now to FIG. 6, a schematic structural diagram of a computer system 600 suitable for implementing an unmanned vehicle of an embodiment of the present application is shown. The unmanned vehicle shown in FIG. 6 is merely an example and should not impose any limitation on the function or scope of use of the embodiments of the present application.
As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the foregoing.
The flowcharts and block diagrams in the drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in a different order than noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, an input unit, and a control unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires data collected by at least two sensors".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire data collected by at least two sensors; input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the present application.

Claims (12)

  1. A control method for an unmanned vehicle, characterized in that the unmanned vehicle is equipped with at least two sensors, the method comprising:
    acquiring data collected by the at least two sensors;
    inputting the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and
    acquiring the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  2. The control method for an unmanned vehicle according to claim 1, characterized in that the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  3. The control method for an unmanned vehicle according to claim 1, characterized in that the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
  4. The control method for an unmanned vehicle according to claim 1, characterized in that the obstacle avoidance deep learning model is trained in an end-to-end manner.
  5. The control method for an unmanned vehicle according to claim 1, characterized in that before the acquiring of the data collected by the at least two sensors, the method further comprises:
    acquiring data collected by the at least two sensors, and acquiring the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and
    using the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
  6. A control apparatus for an unmanned vehicle, characterized in that the unmanned vehicle is equipped with at least two sensors, the apparatus comprising:
    an acquisition unit configured to acquire data collected by the at least two sensors;
    an input unit configured to input the acquired data into a pre-trained obstacle avoidance deep learning model, wherein the obstacle avoidance deep learning model is used to characterize the correspondence between data collected by the sensors and obstacle avoidance parameters of the unmanned vehicle; and
    a control unit configured to acquire the obstacle avoidance parameters of the unmanned vehicle output by the obstacle avoidance deep learning model, so as to control the unmanned vehicle based on the obstacle avoidance parameters.
  7. The control apparatus for an unmanned vehicle according to claim 6, characterized in that the obstacle avoidance parameters include a braking parameter and/or a steering parameter.
  8. The control apparatus for an unmanned vehicle according to claim 6, characterized in that the at least two sensors include a camera, a lidar, and a millimeter-wave radar.
  9. The control apparatus for an unmanned vehicle according to claim 6, characterized in that the obstacle avoidance deep learning model is trained in an end-to-end manner.
  10. The control apparatus for an unmanned vehicle according to claim 6, characterized in that the apparatus further comprises:
    a parameter acquisition unit configured to acquire data collected by the at least two sensors and to acquire the current obstacle avoidance parameters of the unmanned vehicle, the obstacle avoidance parameters being generated by the user's driving behavior; and
    a training unit configured to use the acquired data and the current obstacle avoidance parameters as the input and output, respectively, of the obstacle avoidance deep learning model, so as to train the obstacle avoidance deep learning model.
  11. An unmanned vehicle, comprising:
    one or more processors; and
    a storage device for storing one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
  12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-5.
PCT/CN2018/098630 2017-09-05 2018-08-03 Control method and apparatus for unmanned vehicle WO2019047643A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710791661.4A CN107515607A (zh) 2017-09-05 2017-09-05 Control method and apparatus for unmanned vehicle
CN201710791661.4 2017-09-05

Publications (1)

Publication Number Publication Date
WO2019047643A1 true WO2019047643A1 (zh) 2019-03-14

Family

ID=60725124

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098630 WO2019047643A1 (zh) 2017-09-05 2018-08-03 用于无人车的控制方法和装置

Country Status (2)

Country Link
CN (1) CN107515607A (zh)
WO (1) WO2019047643A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107515607A (zh) * 2017-09-05 2017-12-26 百度在线网络技术(北京)有限公司 Control method and apparatus for unmanned vehicle
CN110298219A (zh) 2018-03-23 2019-10-01 广州汽车集团股份有限公司 Unmanned lane keeping method and apparatus, computer device, and storage medium
CN109141911B (zh) 2018-06-26 2019-11-26 百度在线网络技术(北京)有限公司 Method and apparatus for obtaining control quantities for unmanned vehicle performance testing
CN109324608B (zh) 2018-08-31 2022-11-08 阿波罗智能技术(北京)有限公司 Unmanned vehicle control method, apparatus, device, and storage medium
CN110967991B (zh) * 2018-09-30 2023-05-26 百度(美国)有限责任公司 Method and apparatus for determining vehicle control parameters, on-board controller, and unmanned vehicle
CN109693672B (zh) * 2018-12-28 2020-11-06 百度在线网络技术(北京)有限公司 Method and apparatus for controlling a driverless car
CN113705381B (зh) * 2021-08-11 2024-02-02 北京百度网讯科技有限公司 Method and apparatus for target detection in foggy weather, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009089369A1 (en) * 2008-01-08 2009-07-16 Raytheon Sarcos, Llc Point and go navigation system and method
CN105843229A (zh) * 2016-05-17 2016-08-10 中外合资沃得重工(中国)有限公司 Driverless intelligent vehicle and control method
CN106080590A (zh) * 2016-06-12 2016-11-09 百度在线网络技术(北京)有限公司 Vehicle control method and apparatus, and method and apparatus for obtaining a decision model
CN106292666A (zh) * 2016-08-29 2017-01-04 无锡卓信信息科技股份有限公司 Obstacle avoidance method and system for a driverless car based on ultrasonic distance detection
CN106515728A (зh) * 2016-12-22 2017-03-22 深圳市招科智控科技有限公司 Anti-collision and obstacle avoidance system and method for a driverless bus
CN206231471U (zh) * 2016-10-11 2017-06-09 深圳市招科智控科技有限公司 Driverless bus operating in a taxi mode
CN107515607A (zh) * 2017-09-05 2017-12-26 百度在线网络技术(北京)有限公司 Control method and apparatus for unmanned vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106394555A (zh) * 2016-08-29 2017-02-15 无锡卓信信息科技股份有限公司 Obstacle avoidance system and method for a driverless car based on a 3D camera
CN106292704A (zh) * 2016-09-07 2017-01-04 四川天辰智创科技有限公司 Method and apparatus for avoiding obstacles
CN106742717A (zh) * 2016-11-15 2017-05-31 江苏智石科技有限公司 Intelligent material-box transport vehicle based on a 3D camera
CN106873566B (zh) * 2017-03-14 2019-01-22 东北大学 Driverless logistics vehicle based on deep learning
CN106950964B (zh) * 2017-04-26 2020-03-24 北京理工大学 Unmanned electric Formula Student racing car and control method thereof
CN107065890B (zh) * 2017-06-02 2020-09-15 北京航空航天大学 Intelligent obstacle avoidance method and system for an unmanned vehicle


Also Published As

Publication number Publication date
CN107515607A (zh) 2017-12-26

Similar Documents

Publication Publication Date Title
WO2019047643A1 (zh) Control method and apparatus for unmanned vehicle
WO2019047646A1 (zh) Vehicle obstacle avoidance method and apparatus
WO2020107974A1 (zh) Obstacle avoidance method and apparatus for driverless vehicle
US11308391B2 (en) Offline combination of convolutional/deconvolutional and batch-norm layers of convolutional neural network models for autonomous driving vehicles
WO2019047641A1 (zh) Method and apparatus for estimating the attitude error of a vehicle-mounted camera
WO2019047651A1 (zh) Driving behavior prediction method and apparatus, and unmanned vehicle
EP3451230A1 (en) Method and apparatus for recognizing object
WO2019047649A1 (zh) Method and apparatus for determining the driving behavior of an unmanned vehicle
US20210132614A1 (en) Control method and apparatus for autonomous vehicle
WO2019047644A1 (zh) Method and apparatus for controlling a driverless vehicle
US20200167436A1 (en) Online self-driving car virtual test and development system
CN110654381B (zh) Method and apparatus for controlling a vehicle
CN109407679B (zh) Method and apparatus for controlling a driverless car
CN112001287A (zh) Method and apparatus for generating point cloud information of an obstacle, electronic device, and medium
WO2019047674A1 (zh) Image processing method and apparatus for a vehicle
CN110096051B (zh) Method and apparatus for generating vehicle control instructions
CN109606383B (zh) Method and apparatus for generating a model
CN115339453B (зh) Method, apparatus, device, and computer medium for generating vehicle lane-change decision information
CN109693672A (zh) Method and apparatus for controlling a driverless car
CN110456798B (zh) Method and apparatus for controlling vehicle driving
JP7196189B2 (ja) Method, apparatus, and control system for controlling a mobile robot
CN115817463A (зh) Vehicle obstacle avoidance method and apparatus, electronic device, and computer-readable medium
CN111382695A (зh) Method and apparatus for detecting boundary points of a target
CN112649011B (зh) Vehicle obstacle avoidance method, apparatus, device, and computer-readable medium
CN110654380A (зh) Method and apparatus for controlling a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18854236

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18854236

Country of ref document: EP

Kind code of ref document: A1