WO2020168660A1 - Method, device, computer equipment, and storage medium for adjusting a vehicle's driving direction - Google Patents


Info

Publication number
WO2020168660A1
WO2020168660A1 (PCT/CN2019/091843)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
angle value
neural network
convolutional neural
picture
Application number
PCT/CN2019/091843
Other languages
English (en)
French (fr)
Inventor
王义文
张文龙
王健宗
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020168660A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/02 Control of vehicle driving stability
    • B60W30/045 Improving turning performance
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18 Propelling the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 Traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Description

  • This application relates to the field of neural network technology, and in particular to a method, device, computer equipment, and storage medium for adjusting the driving direction of a vehicle.
  • With the development of artificial intelligence, autonomous driving has become one of the key directions of current scientific research. In the field of automobile driving in particular, autonomous driving technology can assist or even replace the driver in driving the car, greatly reducing the driver's burden, and it has been warmly welcomed by the market.
  • In view of this, the embodiments of the present application provide a method, device, computer equipment, and storage medium for adjusting the driving direction of a vehicle, to solve the problem that the turning angles produced by existing autonomous driving are inaccurate.
  • a method for adjusting the driving direction of a vehicle including:
  • inputting the road condition pictures sequentially, in their time order in the target video, into a pre-trained fast convolutional neural network to obtain the angle values sequentially output by the network, where an angle value refers to the angle by which the target vehicle needs to turn under the current road conditions;
  • sending the control instructions in turn to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  • a device for adjusting the driving direction of a vehicle includes:
  • the image acquisition module is used to acquire real-time images of the road conditions in front of the target vehicle through the camera to obtain the target video;
  • a video frame extraction module, configured to extract each video frame from the target video at equal intervals as a road condition picture;
  • a road condition picture input module, configured to sequentially input the road condition pictures, in their time order in the target video, into the pre-trained fast convolutional neural network to obtain the angle values sequentially output by the network, where an angle value refers to the angle by which the target vehicle needs to turn under the current road conditions;
  • an instruction conversion module, configured to convert the angle values into control instructions according to a preset instruction conversion rule;
  • the instruction sending module is configured to send the various control instructions to the central control system of the target vehicle in turn, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  • A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer-readable instructions, implements the steps of the above method for adjusting the driving direction of a vehicle.
  • One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for adjusting the driving direction of a vehicle.
  • FIG. 1 is a schematic diagram of an application environment of the method for adjusting the driving direction of a vehicle in an embodiment of the present application
  • FIG. 2 is a flowchart of a method for adjusting the driving direction of a vehicle in an embodiment of the present application
  • FIG. 3 is a schematic flowchart of pre-training a fast convolutional neural network in an application scenario in the method for adjusting the driving direction of a vehicle in an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of step 205 of the method for adjusting the driving direction of a vehicle in an application scenario in an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a fast convolutional neural network in an application scenario in an embodiment of this application;
  • FIG. 6 is a schematic flowchart of the automatic collection and generation of training samples in an application scenario in the method for adjusting the driving direction of a vehicle in an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of the device for adjusting the driving direction of a vehicle in an application scenario in an embodiment of the present application
  • FIG. 8 is a schematic structural diagram of the device for adjusting the driving direction of a vehicle in another application scenario in an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a sample picture input module in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a computer device in an embodiment of the present application.
  • the method for adjusting the driving direction of a vehicle can be applied to the application environment as shown in Fig. 1, in which the terminal device communicates with the server through the network.
  • the terminal device can be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the terminal device can be a device loaded with a vehicle central control system.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a method for adjusting the driving direction of a vehicle is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • A camera may be installed on the target vehicle in advance, for example near the front of the vehicle and pointed straight ahead, so that the server can collect images of the road conditions in front of the target vehicle in real time through the camera to obtain the target video.
  • Each video frame in the target video is an image of the road conditions in front of the target vehicle, and these images contain road condition information. The server therefore extracts video frames from the target video at regular intervals to obtain the road condition pictures that are provided to the fast convolutional neural network for recognition.
  • The extraction interval can be determined according to actual usage; for example, it can be set to 0.5 milliseconds, that is, one video frame is extracted every 0.5 milliseconds.
  • Preferably, the interval at which the server extracts video frames can be determined according to the current speed of the target vehicle. The interval is negatively related to the speed: the faster the vehicle travels, the shorter the interval.
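That negative relation can be sketched as follows. The application gives no formula linking speed to the extraction interval, so the inverse-proportional rule and every constant below (`base_interval_ms`, `reference_speed_kmh`, `min_interval_ms`) are illustrative assumptions, not part of the disclosed method.

```python
# Illustrative sketch only: the application states the extraction interval is
# negatively related to the current speed but gives no formula, so this
# inverse-proportional rule and all constants are assumptions.
def frame_interval_ms(speed_kmh: float,
                      base_interval_ms: float = 500.0,
                      reference_speed_kmh: float = 30.0,
                      min_interval_ms: float = 50.0) -> float:
    """Return the video-frame extraction interval for the current speed.

    The faster the target vehicle travels, the shorter the interval, so
    road conditions are sampled more densely at high speed.
    """
    if speed_kmh <= 0:
        return base_interval_ms  # stationary vehicle: use the base interval
    interval = base_interval_ms * reference_speed_kmh / speed_kmh
    return max(min_interval_ms, min(base_interval_ms, interval))
```

For instance, doubling the speed from 30 km/h to 60 km/h halves the interval, while the floor keeps the extraction rate bounded at very high speeds.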
  • the angle value refers to the angle required for the target vehicle to turn when facing current road conditions
  • the server can pre-train the fast convolutional neural network.
  • The fast convolutional neural network can recognize an input road condition picture and output the corresponding angle value according to the road condition information it contains, namely the angle by which the target vehicle needs to turn under the current road conditions. In automatic driving, the target vehicle should respond promptly and accurately to the actual road conditions, and should be controlled in the order in which those conditions are encountered. Therefore, the server inputs the road condition pictures sequentially, in their time order in the target video, into the pre-trained fast convolutional neural network and obtains the angle values it outputs in turn.
  • The fast convolutional neural network is described in detail below. As shown in FIG. 3, it is pre-trained through the following steps:
  • For each sample picture, input the sample picture into the fast convolutional neural network to obtain the training angle value output by the network and corresponding to that sample picture;
  • the sample video obtained by collecting images of the road conditions in front of the test vehicle can be pre-collected.
  • Specifically, cameras can be allocated to multiple test vehicles, and while these test vehicles are driving, the cameras record the road conditions ahead, forming multiple sample videos.
  • During driving, the driver controls the vehicle's travel according to the road conditions ahead.
  • The control log of the driver's operation of the test vehicle can be recorded through a device preset on the test vehicle, including information such as acceleration and deceleration, forward and reverse, and turning angle. Both the sample video and the control log are stamped with the system time, so the two can be correlated through the system time.
  • For example, if the system time of a sample video on a test vehicle is 9:00-10:00 on February 2, 2018, and the system time of a certain control log is also 9:00-10:00 on February 2, 2018, then this sample video corresponds to this control log.
  • The foregoing step 202 is the same as the foregoing step 102: the server extracts each video frame from the sample video at equal intervals as a sample picture, and details are not repeated here.
  • Each of these sample pictures contains different road condition information, and the driver may issue different control instructions when driving the test vehicle.
  • The server needs to associate each sample picture with the control instructions in the control log, so it extracts from the control log the control instruction whose time corresponds to each sample picture. For example, if the system time of a sample picture is 9:00 on February 2, 2018, and the system time of a certain control instruction in the control log of the same test vehicle is also 9:00 on February 2, 2018, then that sample picture corresponds to that control instruction.
  • After extracting the control instructions, the server also needs to convert them into sample angle values according to a preset instruction conversion rule.
  • The instruction conversion rule records the correspondence between control instructions and angle values, so the server can convert a control instruction into a sample angle value. For example, if the control instruction is "control the vehicle to turn right by 30 degrees", the server converts it into a sample angle value of "+30 degrees"; if the control instruction is "control the vehicle to turn left by 20 degrees", the server converts it into a sample angle value of "-20 degrees".
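A minimal sketch of this conversion, assuming control instructions phrased exactly as in the example above ("control the vehicle to turn right by 30 degrees"); the real instruction format in a vehicle's control log would differ:

```python
import re

def instruction_to_angle(instruction: str) -> float:
    """Convert a textual control instruction into a signed sample angle value.

    Right turns map to positive degrees and left turns to negative degrees,
    matching the sign convention described in this application.
    """
    match = re.search(r"turn (left|right) by (\d+(?:\.\d+)?) degrees", instruction)
    if match is None:
        raise ValueError(f"unrecognized control instruction: {instruction!r}")
    degrees = float(match.group(2))
    return degrees if match.group(1) == "right" else -degrees
```

For instance, `instruction_to_angle("control the vehicle to turn right by 30 degrees")` yields `30.0`.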
  • During training, the server inputs each sample picture into the fast convolutional neural network to obtain the training angle value output by the network and corresponding to that sample picture. It should be noted that before feeding a sample picture into the network, the server can first convert it into a data matrix and then input the matrix into the network, because a digitized sample picture is more amenable to recognition and training by the fast convolutional neural network.
  • Since recognition must keep pace with driving, high processing efficiency is required of the server; therefore, when sample pictures or road condition pictures are fed into the fast convolutional neural network, the faster the network computes, the better.
  • the existing convolutional neural network is modified to obtain the fast convolutional neural network.
  • Compared with an existing convolutional neural network, the computation steps in the convolutional layer are only slightly different, yet the amount of network computation is greatly reduced, thereby improving the operating efficiency of the fast convolutional neural network.
  • the following describes the calculation process of the convolutional layer after putting the sample picture into the fast convolutional neural network.
  • In this embodiment, the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and step 205 may include:
  • convolving each sample vector with the preset number of two-dimensional convolution kernels to obtain the first-layer convolution output on each convolution channel, where a sample vector refers to the vector obtained by vectorizing a sample picture;
  • For step 301, please refer to FIG. 5.
  • Suppose the sample vector is a 5*5*2 matrix; it is split into two 5*5 single-channel feature maps (I1, I2). The convolutional layer of this fast convolutional neural network has a total of two 3*3 two-dimensional convolution kernels (K1, K2); I1 is convolved with K1 to obtain the first-layer convolution output F1, and I2 is convolved with K2 to obtain the first-layer convolution output F2.
  • For step 302, assume there are 3 convolution channels and each convolution channel is provided with a 1*1 convolution kernel, namely P1, P2, and P3. After the first-layer convolution outputs F1 and F2 are obtained in step 301, F1 and F2 are convolved with P1, P2, and P3 respectively to obtain the second-layer convolution outputs O1, O2, and O3.
  • The server can then feed the second-layer convolution outputs O1, O2, and O3 into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the network and corresponding to each sample picture.
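The two-step convolution of this example (the 5*5*2 sample, depthwise kernels K1/K2, and 1*1 kernels P1-P3) can be sketched in NumPy as follows; the random kernel values and the use of valid padding are assumptions for illustration:

```python
import numpy as np

def conv2d_single(channel: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution of one feature map with one kernel."""
    kh, kw = kernel.shape
    oh, ow = channel.shape[0] - kh + 1, channel.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(channel[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
sample = rng.standard_normal((5, 5, 2))    # the 5*5*2 sample "vector"
I1, I2 = sample[:, :, 0], sample[:, :, 1]  # two 5*5 single-channel feature maps
K1, K2 = rng.standard_normal((2, 3, 3))    # two 3*3 two-dimensional kernels

# Step 301: each input channel is convolved with its own 3*3 kernel,
# giving the first-layer convolution outputs F1 and F2.
F1, F2 = conv2d_single(I1, K1), conv2d_single(I2, K2)

# Step 302: three 1*1 kernels P1, P2, P3, each holding one weight per
# channel, combine F1 and F2 into the second-layer outputs O1, O2, O3.
P = rng.standard_normal((3, 2))            # rows correspond to P1, P2, P3
O1, O2, O3 = (P[c, 0] * F1 + P[c, 1] * F2 for c in range(3))
```

The second-layer outputs would then be flattened and passed to the fully connected layer; this split of per-channel convolution followed by 1*1 channel mixing is what keeps the computation low.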
  • It can be understood that the fast convolutional neural network provided by this embodiment requires less computation and runs more efficiently than an existing convolutional neural network, as analyzed below:
  • Assume the output feature map is of size L*L, with N input channels, M output channels, and C*C two-dimensional convolution kernels. The computation amount of the first step (the two-dimensional convolution on each channel) is L*L*N*C*C, and the computation amount of the second step (the 1*1 convolution) is L*L*M*N*1*1; an ordinary convolutional layer of the same shape would instead require L*L*M*N*C*C.
  • the fast convolutional neural network greatly reduces the amount of calculation compared with the existing convolutional neural network.
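Plugging the toy sizes of the example above into these formulas shows the reduction concretely (the standard-convolution cost L*L*M*N*C*C used for comparison is the usual multiplication count for an ordinary convolutional layer of the same shape):

```python
# Worked comparison of the computation amounts quoted above, using the toy
# sizes from the 5*5*2 example (these concrete sizes are illustrative):
# L*L output positions, N input channels, M output channels, C*C kernels.
L, N, M, C = 3, 2, 3, 3          # 3*3 outputs, 2 channels in, 3 out, 3*3 kernels

step1 = L * L * N * C * C        # two-dimensional convolution per channel
step2 = L * L * M * N * 1 * 1    # 1*1 convolution across channels
fast_total = step1 + step2

standard = L * L * M * N * C * C  # ordinary convolution layer of the same shape

print(fast_total, standard)       # the fast network needs far fewer multiplications
```

Even at these tiny sizes the fast layer needs 216 multiplications against 486 for the ordinary layer, and the gap widens as M and C grow.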
  • In training, the parameters of the fast convolutional neural network are adjusted so that the training angle value output by the network approaches the sample angle value corresponding to the sample picture, that is, so that the error between the two is minimized.
  • For example, if the sample angle value corresponding to a sample picture is -10 degrees, the server can adjust the parameters of the fast convolutional neural network so that the training angle value gradually moves closer to -10 degrees.
  • After the adjustment, the server can determine whether the error between the training angle value corresponding to each sample picture and the sample angle value meets a preset condition. If it does, the parameters of the fast convolutional neural network have been adjusted in place, and it can be determined that the network has been trained; if not, the network needs to continue training.
  • The server can set the preset condition according to actual usage, as described in detail below.
  • Specifically, whether the training of the fast convolutional neural network is complete can be determined by either of the following two methods.
  • Method one includes the following steps 401-402:
  • The server can set the first error value according to actual usage, for example 3%. When the error between the training angle value corresponding to each sample picture and the sample angle value is less than 3%, the results obtained by the fast convolutional neural network on these sample pictures are close to the real sample angle values and the error is within an acceptable range, so the fast convolutional neural network can be considered trained.
  • Method two includes the following steps 403-404:
  • The server can set the second error value according to actual usage, and a sample picture for which the error between the training angle value and the sample angle value is less than the second error value can be called a qualified sample picture.
  • When the proportion of qualified sample pictures exceeds a preset proportion threshold, for example 98%, the results obtained by the fast convolutional neural network on these sample pictures as a whole are close to the sample angle values and the error is within an acceptable range, so the fast convolutional neural network can likewise be considered trained.
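The two completion checks can be sketched as follows; treating the error as an absolute difference in degrees, and the default thresholds, are illustrative assumptions (the application leaves both to actual usage):

```python
# Sketch of the two training-completion checks described above. The error
# metric (absolute difference in degrees) and thresholds are assumptions.
def trained_method_one(train_angles, sample_angles, first_error=3.0):
    """Method one: every per-picture error must be below the first error value."""
    return all(abs(t - s) < first_error
               for t, s in zip(train_angles, sample_angles))

def trained_method_two(train_angles, sample_angles,
                       second_error=3.0, ratio_threshold=0.98):
    """Method two: the proportion of qualified sample pictures (error below
    the second error value) must exceed the proportion threshold."""
    qualified = sum(abs(t - s) < second_error
                    for t, s in zip(train_angles, sample_angles))
    return qualified / len(train_angles) > ratio_threshold
```

Method two is the more tolerant of the pair: a few outlier pictures do not block completion as long as 98% of the samples are within the error bound.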
  • For the collection of training samples, a camera is installed on a vehicle driven by a professional driver, and the driver's control angles together with the corresponding real-time video images are collected during driving, thereby automatically generating a large number of training samples. Therefore, in this embodiment, as shown in FIG. 6, before step 201 the method may further include:
  • A camera can be installed on each test vehicle in advance, and its installation position can be adjusted according to the actual situation of the test vehicle, as long as the camera can capture the road conditions in front of the test vehicle while driving.
  • the camera can be installed on the left or right side of the rearview mirror of the test vehicle, or on the top of the central control system, so that the shooting angle of the camera is aimed directly ahead.
  • the driver can drive the test vehicle.
  • the driver should preferably drive through road sections with different road conditions during the process of driving the test vehicle.
  • the server can collect images of the road conditions in front of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain each sample video;
  • In addition to obtaining each sample video, the server also needs to obtain the driver's responses to the road conditions ahead in the sample video, that is, the control instructions for the test vehicle. Therefore, the server can send an extraction request to the central control system of the test vehicle by communicating with the test vehicle, so that the central control system extracts the control log of the test vehicle and provides it to the server.
  • The control log includes control instructions generated when the driver drives the test vehicle and used to control the turning of the test vehicle. For example, if a driver drives test vehicle A for one hour, generating a sample video S, the server obtains the sample video S and requests the central control system of test vehicle A to extract the control log C for this hour.
  • the server can establish the correspondence between each sample video and the control log according to the system time recorded on the sample video and the control log.
  • For example, suppose the server obtained the sample video S, whose system time is 19:00-20:00 on February 1, 2018, and the system time of the control log C is also 19:00-20:00 on February 1, 2018. The system times of the two are the same, so the correspondence between the sample video S and the control log C can be established.
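A sketch of establishing this correspondence by system time; representing each video and log as an id mapped to a (start, end) time span is an assumed record format, not one the application specifies:

```python
from datetime import datetime

# Illustrative sketch: pair each sample video with the control log whose
# system time span is identical, as in the S/C example above.
def match_videos_to_logs(videos, logs):
    """Return a dict mapping video ids to the log ids covering the same span.

    `videos` and `logs` map an id to a (start, end) tuple of datetimes.
    """
    pairs = {}
    for vid, span in videos.items():
        for lid, log_span in logs.items():
            if span == log_span:  # identical system time => correspondence
                pairs[vid] = lid
                break
    return pairs

videos = {"S": (datetime(2018, 2, 1, 19), datetime(2018, 2, 1, 20))}
logs = {"C": (datetime(2018, 2, 1, 19), datetime(2018, 2, 1, 20)),
        "D": (datetime(2018, 2, 2, 9), datetime(2018, 2, 2, 10))}
print(match_videos_to_logs(videos, logs))  # {'S': 'C'}
```

A production version would tolerate small clock offsets rather than require exact equality; exact matching keeps the sketch close to the example in the text.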
  • the server can pre-set an instruction conversion rule, which is the same as the instruction conversion rule described in step 204 above.
  • The instruction conversion rule records the correspondence between control instructions and angle values, so the server can convert each angle value into a control instruction according to the rule. For example, if an angle value is "+30 degrees", the server converts it into the control instruction "control the vehicle to turn right 30 degrees"; if an angle value is "-20 degrees", the server converts it into the control instruction "control the vehicle to turn left 20 degrees".
  • By default in the instruction conversion rule, a positive angle value means the vehicle is controlled to turn right, and a negative angle value means the vehicle is controlled to turn left.
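The conversion from angle value to control instruction described above can be sketched the same way; the exact instruction wording, and returning a "go straight" instruction for an angle of zero, are assumptions:

```python
# Sketch of converting a signed angle value into a control instruction:
# positive => turn right, negative => turn left, per the default rule above.
def angle_to_instruction(angle: float) -> str:
    if angle > 0:
        return f"control the vehicle to turn right {angle:g} degrees"
    if angle < 0:
        return f"control the vehicle to turn left {-angle:g} degrees"
    return "keep the vehicle going straight"  # zero-angle case is an assumption
```

Together with the earlier rule this makes the angle/instruction mapping round-trippable, which is what lets the same conversion rule serve both training-sample generation and real-time control.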
  • the target vehicle should respond promptly and accurately according to actual road conditions, and control the vehicle in the order of actual road conditions encountered.
  • the server should sequentially send each control instruction to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instruction.
  • It can be seen that this application uses a pre-trained fast convolutional neural network to recognize the road conditions in front of the target vehicle and output angle values in time, and then converts the angle values into control instructions to control the driving direction of the target vehicle. This makes it possible to control the turning of the target vehicle accurately and improves the response speed of turning control in a self-driving car.
  • a device for adjusting the traveling direction of a vehicle corresponds to the method for adjusting the traveling direction of the vehicle in the foregoing embodiment.
  • the device for adjusting the driving direction of the vehicle includes an image acquisition module 601, a video frame extraction module 602, a road condition picture input module 603, an instruction conversion module 604, and an instruction sending module 605.
  • the detailed description of each functional module is as follows:
  • the image acquisition module 601 is used to acquire real-time images of the road conditions in front of the target vehicle through the camera to obtain the target video;
  • the video frame extraction module 602 is configured to extract each video frame as each road condition picture from the target video at equal intervals;
  • The road condition picture input module 603 is configured to sequentially input the road condition pictures, in their time order in the target video, into the pre-trained fast convolutional neural network to obtain the angle values sequentially output by the network, where an angle value refers to the angle by which the target vehicle needs to turn under the current road conditions;
  • the command conversion module 604 is configured to convert the respective angle values into respective control commands according to preset command conversion rules
  • the instruction sending module 605 is configured to sequentially send the various control instructions to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  • the fast convolutional neural network can be pre-trained through the following modules:
  • the sample video acquisition module 606 is configured to acquire a sample video obtained by collecting images of road conditions in front of a test vehicle, and a control log for the test vehicle corresponding to the sample video;
  • the sample picture extraction module 607 is configured to extract each video frame from the sample video at an equal interval as each sample picture
  • the control instruction extraction module 608 is configured to extract each control instruction corresponding to each sample picture in time from the control log;
  • the sample angle value conversion module 609 is configured to convert each control instruction into each sample angle value according to a preset instruction conversion rule
  • The sample picture input module 610 is configured to input each sample picture into the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network and corresponding to each sample picture;
  • The network parameter adjustment module 611 is configured to adjust the parameters of the fast convolutional neural network so as to minimize the error between the training angle value corresponding to each sample picture and the sample angle value;
  • the training completion determining module 612 is configured to determine that the fast convolutional neural network has been trained if the error between the training angle value corresponding to each sample picture and the sample angle value meets a preset condition.
  • the convolution layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and the sample picture input module 610 may include:
  • The first convolution unit 6101 is configured to convolve each sample vector with the preset number of two-dimensional convolution kernels to obtain the first-layer convolution output on each convolution channel, where a sample vector refers to the vector obtained by vectorizing a sample picture;
  • the second convolution unit 6102 is configured to convolve each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain a second-layer convolution output;
  • The training angle value output unit 6103 is configured to input the second-layer convolution output into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network and corresponding to each sample picture.
  • the device for adjusting the driving direction of the vehicle may further include:
  • the first judgment module is configured to judge whether the error between the training angle value corresponding to each sample picture and the sample angle value is less than a preset first error value
  • the first determining module is configured to determine that the error between the training angle value corresponding to each sample picture and the sample angle value meets a preset condition if the judgment result of the first judgment module is yes;
  • The second judgment module is used to judge whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset proportion threshold, where a qualified sample picture refers to a sample picture for which the error between the training angle value and the sample angle value is less than a preset second error value;
  • the second determination module is configured to determine that the error between the training angle value corresponding to each sample picture and the sample angle value meets a preset condition if the judgment result of the second judgment module is yes.
  • the device for adjusting the driving direction of the vehicle may further include:
  • the camera acquisition module is used to collect images of the road conditions in front of the test vehicle in real time through a camera pre-installed on the test vehicle during the driving of the test vehicle to obtain each sample video;
  • The control log extraction module is used to request the central control system of the test vehicle to extract the control log of the test vehicle, where the control log includes control instructions generated when the driver drives the test vehicle and used to control the turning of the test vehicle;
  • the relationship establishment module is used to establish the corresponding relationship between each sample video and the control log according to the system time recorded on the sample video and the control log.
  • Each module in the device for adjusting the driving direction of a vehicle can be implemented in whole or in part by software, hardware, and a combination thereof.
  • The above modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 10.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store the data involved in the method of adjusting the driving direction of the vehicle.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions, when executed by the processor, implement a method for adjusting the driving direction of the vehicle.
  • a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions, the steps of the method for adjusting the driving direction of the vehicle in the foregoing embodiments are implemented, for example steps 101 to 105 shown in FIG. 2.
  • alternatively, when the processor executes the computer-readable instructions, the functions of the modules/units of the device for adjusting the driving direction of the vehicle in the foregoing embodiments are realized, for example the functions of modules 601 to 605 shown in FIG. 7. To avoid repetition, details are not repeated here.
  • one or more non-volatile readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the method for adjusting the driving direction of the vehicle in the foregoing method embodiments.
  • alternatively, when the computer-readable instructions are executed by one or more processors, the one or more processors realize the functions of the modules/units of the device for adjusting the driving direction of the vehicle in the foregoing device embodiment. To avoid repetition, details are not repeated here.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a method, an apparatus, a computer device, and a storage medium for adjusting the driving direction of a vehicle, applied in the field of neural network technology and intended to solve the problem of inaccurate turning angles in existing autonomous driving. The method provided by the present application includes: capturing images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video; extracting video frames from the target video at equal intervals to obtain road-condition pictures; inputting the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain the angle values output by the fast convolutional neural network in sequence, where an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions; converting each angle value into a control instruction according to a preset instruction conversion rule; and sending the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.

Description

Method, apparatus, computer device, and storage medium for adjusting the driving direction of a vehicle
This application is based on, and claims priority from, Chinese invention patent application No. 201910124097.X, filed on February 19, 2019 and entitled "Method, apparatus, computer device, and storage medium for adjusting the driving direction of a vehicle".
Technical field
The present application relates to the field of neural network technology, and in particular to a method, an apparatus, a computer device, and a storage medium for adjusting the driving direction of a vehicle.
Background
With the rapid development of intelligent technologies, autonomous driving has become one of the key directions of current research. In the field of automobile driving in particular, autonomous driving technology can assist or even replace the driver, greatly reducing the driver's burden, and has been warmly received by the market.
However, the inventors realized that current autonomous driving technology is still immature; in particular, when a vehicle turns in complex road conditions, the turning angle is often inaccurate or wrong. Finding an autonomous driving method that can accurately control vehicle turning has therefore become an urgent problem for those skilled in the art.
Summary
Embodiments of the present application provide a method, an apparatus, a computer device, and a storage medium for adjusting the driving direction of a vehicle, so as to solve the problem of inaccurate turning angles in existing autonomous driving.
A method for adjusting the driving direction of a vehicle includes:
capturing images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
extracting video frames from the target video at equal intervals as road-condition pictures;
inputting the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain the angle values output by the fast convolutional neural network in sequence, where an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
converting each angle value into a control instruction according to a preset instruction conversion rule;
sending the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
An apparatus for adjusting the driving direction of a vehicle includes:
an image acquisition module, configured to capture images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
a video frame extraction module, configured to extract video frames from the target video at equal intervals as road-condition pictures;
a road-condition picture input module, configured to input the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain the angle values output by the fast convolutional neural network in sequence, where an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
an instruction conversion module, configured to convert each angle value into a control instruction according to a preset instruction conversion rule;
an instruction sending module, configured to send the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the steps of the above method for adjusting the driving direction of a vehicle when executing the computer-readable instructions.
One or more non-volatile readable storage media storing computer-readable instructions are provided, where the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for adjusting the driving direction of a vehicle.
Details of one or more embodiments of the present application are set forth in the drawings and the description below; other features and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of the method for adjusting the driving direction of a vehicle according to an embodiment of the present application;
FIG. 2 is a flowchart of the method for adjusting the driving direction of a vehicle according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of pre-training the fast convolutional neural network in one application scenario of the method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of step 205 of the method in one application scenario according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of the fast convolutional neural network in one application scenario according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of automatically collecting and generating training samples in one application scenario of the method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of the apparatus for adjusting the driving direction of a vehicle in one application scenario according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of the apparatus in another application scenario according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of the sample picture input module according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The method for adjusting the driving direction of a vehicle provided by the present application can be applied in the application environment of FIG. 1, in which a terminal device communicates with a server through a network. The terminal device may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device, for example a device carrying the vehicle's central control system. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In an embodiment, as shown in FIG. 2, a method for adjusting the driving direction of a vehicle is provided. Taking the method applied to the server in FIG. 1 as an example, it includes the following steps:
101. Capture images of the road conditions ahead of the target vehicle in real time through a camera to obtain a target video.
In this embodiment, a camera may be installed on the target vehicle in advance, for example near the front of the vehicle and facing forward, so that the server can capture images of the road conditions ahead of the target vehicle in real time through the camera and obtain the target video.
102. Extract video frames from the target video at equal intervals as road-condition pictures.
It can be understood that each video frame in the target video is an image of the road conditions ahead of the target vehicle and contains road-condition information. The server therefore needs to extract video frames from the target video at equal intervals to obtain the road-condition pictures to be fed to the fast convolutional neural network for recognition.
It should be noted that the interval at which the server extracts video frames can be determined according to actual use, for example 0.5 milliseconds, i.e. one video frame every 0.5 milliseconds. Preferably, since the road conditions ahead change faster the faster the target vehicle drives, and more slowly the slower it drives, the extraction interval may be decided according to the current speed of the target vehicle so that the extracted pictures reflect the road ahead in a timely manner: the interval is negatively correlated with the current speed, and the faster the target vehicle, the shorter the interval.
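As an illustration only, the negative correlation between the frame-extraction interval and the vehicle's current speed described above could be sketched as follows; the base interval, reference speed, and clamping bounds here are assumed example values, not values fixed by this application:

```python
def frame_interval_ms(speed_kmh: float,
                      base_ms: float = 0.5,
                      ref_speed_kmh: float = 60.0,
                      min_ms: float = 0.1,
                      max_ms: float = 5.0) -> float:
    """Return a video-frame extraction interval in milliseconds.

    The interval is negatively correlated with the current speed:
    the faster the target vehicle moves, the shorter the interval.
    """
    if speed_kmh <= 0:
        return max_ms  # stationary: sample at the slowest rate
    interval = base_ms * ref_speed_kmh / speed_kmh
    return max(min_ms, min(max_ms, interval))
```

With these assumed values, a vehicle at the reference speed is sampled every 0.5 ms, and a vehicle twice as fast is sampled twice as often, subject to the clamping bounds.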
103. Input the road-condition pictures, in their chronological order within the target video, into the pre-trained fast convolutional neural network to obtain the angle values output by the fast convolutional neural network in sequence, where an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions.
In this embodiment, the server may train the fast convolutional neural network in advance. The network recognizes an input road-condition picture and outputs a corresponding angle value according to the road-condition information it contains, the angle value being the angle by which the target vehicle needs to turn in the face of the current road conditions. In autonomous driving, the target vehicle should react to the actual road conditions promptly and accurately, and should be controlled in the order in which the road conditions are actually encountered; the server therefore inputs the road-condition pictures into the pre-trained fast convolutional neural network in their chronological order within the target video and obtains the angle values output in sequence.
For ease of understanding, the fast convolutional neural network is described in detail below. Further, as shown in FIG. 3, the fast convolutional neural network is pre-trained through the following steps:
201. Obtain a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video.
202. Extract video frames from the sample video at equal intervals as sample pictures.
203. Extract from the control log the control instructions that correspond in time to the sample pictures.
204. Convert the control instructions into sample angle values according to a preset instruction conversion rule.
205. For each sample picture, input the sample picture into the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for that sample picture.
206. Taking the output training angle value as the adjustment target, adjust the parameters of the fast convolutional neural network to minimize the error between the obtained training angle value and the sample angle value corresponding to each sample picture.
207. If the errors between the training angle values and the sample angle values of the sample pictures satisfy a preset condition, determine that the fast convolutional neural network has been trained.
Regarding step 201, it can be understood that sample videos of the road conditions ahead of test vehicles can be collected in advance, for example by equipping multiple test vehicles with cameras; as these vehicles are driven day to day, the cameras record the road ahead during driving, forming multiple sample videos. Meanwhile, as the sample videos are formed, the driver controls the vehicle according to the road conditions ahead, and a preset device on the test vehicle records a control log of the driver's operations, including acceleration and deceleration, moving forward and backward, turning angles, and so on. Both the sample videos and the control log carry system times, so the two can be matched by system time. For example, if a sample video on a test vehicle has a system time of 9:00-10:00 on February 2, 2018, and a segment of the control log also has a system time of 9:00-10:00 on February 2, 2018, that sample video corresponds to that segment of the control log.
Step 202 is analogous to step 102: the server extracts video frames from the sample video at equal intervals as sample pictures, which is not repeated here.
Regarding step 203, it can be understood that after the sample pictures are extracted, each contains different road-condition information, and the driver may have taken different control actions when driving the test vehicle. To train the fast convolutional neural network, the server needs to associate each sample picture with a control instruction in the control log, so it extracts from the control log the control instructions that correspond in time to the sample pictures. For example, if a sample picture has a system time of 9:00 on February 2, 2018, and a control instruction in the same test vehicle's control log also has a system time of 9:00 on February 2, 2018, that sample picture corresponds to that control instruction.
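The time-based matching of sample pictures to log entries described above might be sketched as follows; the tolerance parameter and the log's data layout are assumptions for illustration, not structures specified by this application:

```python
from datetime import datetime, timedelta

def match_instruction(sample_time, log, tolerance=timedelta(seconds=0)):
    """Return the control instruction whose log timestamp is closest
    to sample_time, or None if no entry falls within the tolerance.

    log: list of (datetime, instruction) pairs from the control log.
    With the default zero tolerance, only exact system-time matches
    count, as in the example in the description.
    """
    best = None
    for ts, instr in log:
        delta = abs(ts - sample_time)
        if delta <= tolerance and (best is None or delta < best[0]):
            best = (delta, instr)
    return None if best is None else best[1]
```

A nonzero tolerance would let slightly misaligned clocks still produce a pairing, which may or may not be desirable in practice.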
Regarding step 204, after extracting the control instructions, the server further needs to convert them into sample angle values according to the preset instruction conversion rule, which records the correspondence between control instructions and angle values. For example, the instruction "turn the vehicle right by 30 degrees" converts to the sample angle value "+30 degrees", while "turn the vehicle left by 20 degrees" converts to "-20 degrees".
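A minimal sketch of such a conversion rule is given below; the instruction strings and their format are hypothetical, since the application does not fix a concrete instruction syntax. The sign convention (right positive, left negative) follows the examples in the description:

```python
def instruction_to_angle(instruction: str) -> float:
    """Convert a turn control instruction to a signed sample angle
    value: right turns positive, left turns negative."""
    # Hypothetical instruction format: "turn right 30" / "turn left 20".
    degrees = float(instruction.split()[-1])
    return degrees if "right" in instruction else -degrees

def angle_to_instruction(angle: float) -> str:
    """Inverse conversion, as used at inference time in step 104."""
    side = "right" if angle >= 0 else "left"
    return f"turn {side} {abs(angle):g}"
```

The same rule table serves both directions: instruction to angle during training, angle to instruction during driving.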
Regarding step 205, training the fast convolutional neural network does not need to follow the order of the sample pictures; it suffices to feed the pictures in for training individually. For each sample picture, the server inputs it into the fast convolutional neural network and obtains the training angle value output for it. It should be noted that before feeding a sample picture into the network, the server may first convert it into a data matrix and input that matrix, since digitized sample pictures are more amenable to recognition and training by the network.
In the autonomous-driving scenario, high processing efficiency is required of the server, so the faster the fast convolutional neural network computes on a sample picture or road-condition picture, the better. To this end, this embodiment modifies an existing convolutional neural network to obtain the fast convolutional neural network. Compared with existing convolutional neural networks, its computation in the convolutional layer differs slightly but greatly reduces the amount of network computation, improving the network's efficiency. The computation of the convolutional layer after a sample picture is fed into the network is described below.
Still further, as shown in FIG. 4, the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and step 205 may include:
301. Convolve each sample vector with each of the preset number of two-dimensional convolution kernels to obtain the first-layer convolution outputs on the convolution channels, where each sample vector is the vector obtained by vectorizing the corresponding sample picture.
302. Convolve each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain the second-layer convolution outputs.
303. Feed the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for each sample picture.
Regarding step 301, referring to FIG. 5, suppose the sample vector is a 5*5*2 matrix, split for convolution into two 5*5 one-dimensional feature maps (I1, I2), and that the convolutional layer has two 3*3 two-dimensional convolution kernels (K1, K2). Then I1 convolved with K1 gives the first-layer convolution output F1, and I2 convolved with K2 gives the first-layer convolution output F2.
Regarding step 302, suppose there are three convolution channels, each provided with one 1*1 convolution kernel, namely P1, P2, and P3. After the first-layer convolution outputs F1 and F2 are obtained in step 301, F1 and F2 are each convolved with P1, P2, and P3, yielding the second-layer convolution outputs O1, O2, and O3.
Regarding step 303, after obtaining the second-layer convolution outputs O1, O2, and O3, the server feeds them into the fully connected layer of the fast convolutional neural network and obtains the training angle value output for each sample picture.
From the convolutional-layer computation above it can be seen that, compared with existing convolutional neural networks, the fast convolutional neural network provided by this embodiment requires less computation and runs faster, as argued below:
In the convolutional layer of an existing convolutional neural network, given an N-channel L*L input computed with M C*C N-channel convolution kernels and M output channels, the amount of computation is L*L*N*C*C*M.
In the convolutional layer of the fast convolutional neural network of this embodiment, given an N-channel L*L input, N two-dimensional C*C kernels are applied channel by channel, and the results are then combined with N-channel 1*1 convolution kernels whose number equals the output channel count M of the existing network. The first step costs L*L*N*C*C operations and the second step L*L*M*N*1*1 operations.
Thus the ratio of the fast network's convolutional-layer computation to that of an existing convolutional neural network is:
(L*L*N*C*C + L*L*M*N*1*1) / (L*L*N*C*C*M) = 1/M + 1/(C*C)
The fast convolutional neural network therefore greatly reduces the amount of computation compared with existing convolutional neural networks; in the example of steps 301-303, M=3, C=3, and 1/M + 1/(C*C) = 0.44.
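The two-stage convolution and the operation-count ratio above can be illustrated with a small numpy sketch. This is an illustrative reimplementation under 'valid' padding, not the application's actual network code; the function names are hypothetical:

```python
import numpy as np

def depthwise_then_pointwise(x, dw_kernels, pw_weights):
    """Two-stage convolution described in steps 301-302.

    x:          input of shape (N, L, L), N channels.
    dw_kernels: N two-dimensional C*C kernels, shape (N, C, C).
    pw_weights: 1*1 kernels mixing channels, shape (M, N).
    Returns M output channels of shape (M, L-C+1, L-C+1).
    """
    n, l, _ = x.shape
    _, c, _ = dw_kernels.shape
    out = l - c + 1  # spatial size of a 'valid' convolution
    first = np.empty((n, out, out))
    for ch in range(n):  # step 301: channel-wise 2-D convolution
        for i in range(out):
            for j in range(out):
                first[ch, i, j] = np.sum(x[ch, i:i+c, j:j+c] * dw_kernels[ch])
    # step 302: 1*1 convolution, a weighted sum over channels per position
    return np.tensordot(pw_weights, first, axes=([1], [0]))

def flops_ratio(c: int, m: int) -> float:
    """Ratio of the fast layer's cost to a standard conv layer's cost,
    i.e. 1/M + 1/(C*C) from the derivation above."""
    return 1.0 / m + 1.0 / (c * c)
```

With C=3 and M=3 as in the example, `flops_ratio` returns about 0.44, matching the figure in the text.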
Regarding step 206, it can be understood that during training the parameters of the fast convolutional neural network are adjusted so that the training angle value it outputs approaches the sample angle value of the corresponding sample picture, i.e. the error is minimized. Suppose the sample angle value of the current sample picture is -10 degrees while the network outputs a training angle value of -15 degrees; the server can then adjust the network's parameters so that the training angle value gradually converges toward -10 degrees.
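The parameter adjustment in step 206 can be sketched in miniature with a single weight trained by gradient descent on the squared error; this is a deliberately simplified stand-in (one parameter, a linear model, assumed learning rate), since the application does not specify the optimizer it uses:

```python
def adjust_parameter(samples, steps=200, lr=0.01):
    """Minimal sketch of step 206: adjust one parameter w so that the
    predicted angle w * x approaches each sample angle value y.
    A real fast CNN adjusts all layer weights the same way, driven by
    the same error-minimization signal.

    samples: list of (x, y) pairs, a feature value and a sample angle.
    """
    w = 0.0
    for _ in range(steps):
        # gradient of the mean squared error 0.5 * (w*x - y)^2 w.r.t. w
        grad = sum((w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w
```

Each update nudges the parameter in the direction that shrinks the gap between training angle value and sample angle value, which is exactly the convergence behavior described for the -15 to -10 degree example.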
Regarding step 207, after steps 205 and 206 have fed all sample pictures into the fast convolutional neural network, the server verifies whether training is complete by judging whether the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition. If they do, the parameters of the network are in place and the network can be determined to be trained; otherwise, the network needs further training.
It should be noted that the server can set the preset condition according to actual use, as described in detail below.
Still further, before step 207, the method may determine whether the fast convolutional neural network has finished training through method one or method two below.
Method one includes the following steps 401-402:
401. Judge whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value.
402. If the errors between the training angle value and the sample angle value of every sample picture are all smaller than the preset first error value, determine that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
Regarding method one, it can be understood that the server can set the first error value according to actual use, for example 3%. When the error between the training angle value and the sample angle value of every sample picture is below 3%, the results the fast convolutional neural network obtains on these sample pictures differ little from the true sample angle values, the error is within an acceptable range, and the network can be considered trained.
Method two includes the following steps 403-404:
403. Judge whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value.
404. If the proportion of qualified sample pictures among all the sample pictures exceeds the preset ratio threshold, determine that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
Regarding method two, it can be understood that the server can set the second error value according to actual use; a sample picture whose error between training angle value and sample angle value is below this second error value is called a qualified sample picture. If the proportion of qualified sample pictures among all sample pictures exceeds the preset ratio threshold, for example 98%, then on the whole the network's results on these sample pictures differ little from the true sample angle values, the error is within an acceptable range, and the network can likewise be considered trained.
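The two alternative stopping conditions can be sketched together; the threshold values below are the examples mentioned in the text (3% for the first error value, 98% for the ratio threshold) plus an assumed second error value, all of which are configurable rather than fixed by this application:

```python
def training_converged(errors, first_err=0.03,
                       second_err=0.05, ratio_threshold=0.98):
    """Check the preset condition via method one or method two.

    errors: per-sample absolute errors between the training angle
    values and the sample angle values, expressed as fractions.
    """
    # Method one (steps 401-402): every sample's error is below
    # the first error value.
    if all(e < first_err for e in errors):
        return True
    # Method two (steps 403-404): the share of qualified samples
    # exceeds the ratio threshold.
    qualified = sum(1 for e in errors if e < second_err)
    return qualified / len(errors) > ratio_threshold
```

Method two tolerates a few outlier samples that method one would reject, which is why the two criteria are offered as alternatives.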
Still further, regarding the collection of training samples: cameras are installed on vehicles driven by professional drivers, and the control angles during driving are collected together with the corresponding real-time video images, automatically generating a large number of training samples. In this embodiment, as shown in FIG. 6, before step 201 the method may therefore further include:
501. While the test vehicle is driving, capture images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain the sample videos.
502. Request the central control system of the test vehicle to extract the control log of the test vehicle, the control log including the control instructions, generated while the driver drives the test vehicle, that are used to control the turning of the test vehicle.
503. Establish the correspondence between the sample videos and the control log according to the system times recorded on them.
Regarding step 501, in this embodiment a camera may be installed on each test vehicle in advance; its position can be adjusted to the vehicle's actual situation, as long as the camera can capture the road ahead while the vehicle drives. Typically the camera is installed to the left or right of the interior rear-view mirror, or above the central control system, with its shooting angle aimed straight ahead.
After a camera is installed on a test vehicle, a driver drives the vehicle; for sample diversity, the driver should preferably drive through road segments with different road conditions. While the test vehicle drives, the server captures images of the road ahead in real time through the pre-installed camera and obtains the sample videos.
Regarding step 502, besides obtaining the sample videos, the server also needs the driver's responses to the road conditions shown in them, i.e. the control instructions issued to the test vehicle. The server can therefore connect to the test vehicle and send an extraction request to its central control system, which extracts the test vehicle's control log and provides it to the server. The control log includes the control instructions, generated while the driver drives the test vehicle, used to control the turning of the test vehicle. For example, a driver drives test vehicle A for one hour, producing sample video S; the server obtains sample video S and requests the central control system of test vehicle A to extract control log C for that hour.
Regarding step 503, it can be understood that after obtaining the sample videos and the control log, the server can establish the correspondence between them according to the system times recorded on them. Continuing the example above, the server obtains sample video S with a system time of 19:00-20:00 on February 1, 2018; control log C has a system time of 19:00-20:00 on February 1, 2018; the system times match, so the correspondence between sample video S and control log C can be established.
104. Convert each angle value into a control instruction according to the preset instruction conversion rule.
It can be understood that the server can preset the instruction conversion rule, which is the same as that in step 204 and records the correspondence between control instructions and angle values; the server converts each angle value into a control instruction accordingly. For example, the angle value "+30 degrees" converts to the instruction "turn the vehicle right by 30 degrees", and "-20 degrees" converts to "turn the vehicle left by 20 degrees". By default, the rule treats a positive angle value as a right turn and a negative angle value as a left turn.
105. Send the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
It can be understood that, as stated in step 103, in autonomous driving the target vehicle should react to the actual road conditions promptly and accurately, and should be controlled in the order in which the road conditions are encountered. After obtaining the control instructions, the server therefore sends them in sequence to the central control system of the target vehicle, so that it adjusts the vehicle's driving direction according to the control instructions.
In this embodiment of the present application, images of the road conditions ahead of the target vehicle are first captured in real time through a camera to obtain a target video; video frames are then extracted from the target video at equal intervals as road-condition pictures; next, the pictures are input, in their chronological order within the target video, into the pre-trained fast convolutional neural network to obtain the angle values output in sequence, an angle value being the angle by which the target vehicle needs to turn in the face of the current road conditions; the angle values are then converted into control instructions according to the preset instruction conversion rule; finally, the control instructions are sent in sequence to the central control system of the target vehicle, which adjusts the vehicle's driving direction accordingly. The pre-trained fast convolutional neural network thus recognizes the road conditions ahead of the target vehicle and outputs angle values in time, and the angle values are converted into control instructions that control the vehicle's driving direction, enabling accurate turn control and improving the response speed of turn control in autonomous driving.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
In an embodiment, an apparatus for adjusting the driving direction of a vehicle is provided, corresponding one-to-one to the method in the above embodiments. As shown in FIG. 7, the apparatus includes an image acquisition module 601, a video frame extraction module 602, a road-condition picture input module 603, an instruction conversion module 604, and an instruction sending module 605, detailed as follows:
the image acquisition module 601, configured to capture images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
the video frame extraction module 602, configured to extract video frames from the target video at equal intervals as road-condition pictures;
the road-condition picture input module 603, configured to input the road-condition pictures, in their chronological order within the target video, into the pre-trained fast convolutional neural network to obtain the angle values output in sequence, an angle value being the angle by which the target vehicle needs to turn in the face of the current road conditions;
the instruction conversion module 604, configured to convert each angle value into a control instruction according to a preset instruction conversion rule;
the instruction sending module 605, configured to send the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
As shown in FIG. 8, further, the fast convolutional neural network may be pre-trained through the following modules:
a sample video acquisition module 606, configured to obtain a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video;
a sample picture extraction module 607, configured to extract video frames from the sample video at equal intervals as sample pictures;
a control instruction extraction module 608, configured to extract from the control log the control instructions corresponding in time to the sample pictures;
a sample angle value conversion module 609, configured to convert the control instructions into sample angle values according to a preset instruction conversion rule;
a sample picture input module 610, configured to input each sample picture into the fast convolutional neural network and obtain the training angle value output for it;
a network parameter adjustment module 611, configured to adjust the parameters of the fast convolutional neural network, taking the output training angle value as the adjustment target, to minimize the error between the obtained training angle value and the sample angle value of each sample picture;
a training completion determination module 612, configured to determine that the fast convolutional neural network has been trained if the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
As shown in FIG. 9, further, the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and the sample picture input module 610 may include:
a first convolution unit 6101, configured to convolve each sample vector with each of the preset number of two-dimensional convolution kernels to obtain the first-layer convolution outputs on the convolution channels, each sample vector being the vector obtained by vectorizing the corresponding sample picture;
a second convolution unit 6102, configured to convolve each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain the second-layer convolution outputs;
a training angle value output unit 6103, configured to feed the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output for each sample picture.
Further, the apparatus for adjusting the driving direction of a vehicle may also include:
a first judgment module, configured to judge whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value;
a first determination module, configured to determine, if the judgment result of the first judgment module is yes, that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition;
a second judgment module, configured to judge whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value;
a second determination module, configured to determine, if the judgment result of the second judgment module is yes, that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
Further, the apparatus for adjusting the driving direction of a vehicle may also include:
a camera acquisition module, configured to capture, while the test vehicle is driving, images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain the sample videos;
a control log extraction module, configured to request the central control system of the test vehicle to extract the control log of the test vehicle, the control log including the control instructions, generated while the driver drives the test vehicle, used to control the turning of the test vehicle;
a relationship establishment module, configured to establish the correspondence between the sample videos and the control log according to the system times recorded on them.
For specific limitations on the apparatus for adjusting the driving direction of a vehicle, refer to the limitations on the method above, which are not repeated here. Each module in the apparatus can be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them to perform the operations corresponding to the modules.
In an embodiment, a computer device is provided; the computer device may be a server whose internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device stores the data involved in the method for adjusting the driving direction of a vehicle. The network interface of the computer device communicates with external terminals through a network connection. The computer-readable instructions, when executed by the processor, implement a method for adjusting the driving direction of a vehicle.
In an embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the steps of the method for adjusting the driving direction of a vehicle in the above embodiments are implemented, for example steps 101 to 105 shown in FIG. 2; alternatively, the functions of the modules/units of the apparatus in the above embodiments are implemented, for example the functions of modules 601 to 605 shown in FIG. 7. To avoid repetition, details are not repeated here.
In an embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided. The computer-readable instructions, when executed by one or more processors, cause the one or more processors to implement the steps of the method for adjusting the driving direction of a vehicle in the above method embodiments, or to realize the functions of the modules/units of the apparatus in the above apparatus embodiments. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art can understand that all or part of the processes of the methods in the above embodiments can be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided by the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the above division into functional units and modules is illustrated as an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, i.e. the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of their technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included in the protection scope of the present application.

Claims (20)

  1. A method for adjusting the driving direction of a vehicle, comprising:
    capturing images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
    extracting video frames from the target video at equal intervals as road-condition pictures;
    inputting the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain angle values output by the fast convolutional neural network in sequence, wherein an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
    converting each angle value into a control instruction according to a preset instruction conversion rule;
    sending the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  2. The method for adjusting the driving direction of a vehicle according to claim 1, wherein the fast convolutional neural network is pre-trained through the following steps:
    obtaining a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video;
    extracting video frames from the sample video at equal intervals as sample pictures;
    extracting from the control log control instructions corresponding in time to the sample pictures;
    converting the control instructions into sample angle values according to a preset instruction conversion rule;
    for each sample picture, inputting the sample picture into the fast convolutional neural network to obtain a training angle value output by the fast convolutional neural network for the sample picture;
    taking the output training angle value as an adjustment target, adjusting parameters of the fast convolutional neural network to minimize an error between the obtained training angle value and the sample angle value corresponding to each sample picture;
    if the errors between the training angle values and the sample angle values of the sample pictures satisfy a preset condition, determining that the fast convolutional neural network has been trained.
  3. The method for adjusting the driving direction of a vehicle according to claim 2, wherein the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and said inputting each sample picture into the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for the sample picture comprises:
    convolving each sample vector with each of the preset number of two-dimensional convolution kernels to obtain first-layer convolution outputs on the convolution channels, each sample vector being the vector obtained by vectorizing the corresponding sample picture;
    convolving each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain second-layer convolution outputs;
    feeding the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for each sample picture.
  4. The method for adjusting the driving direction of a vehicle according to claim 2, further comprising, before determining that the fast convolutional neural network has been trained:
    judging whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value;
    if the errors between the training angle value and the sample angle value of every sample picture are all smaller than the preset first error value, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition;
    judging whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value;
    if the proportion of qualified sample pictures among all the sample pictures exceeds the preset ratio threshold, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
  5. The method for adjusting the driving direction of a vehicle according to any one of claims 2 to 4, further comprising, before obtaining the sample video captured of the road conditions ahead of the test vehicle and the control log of the test vehicle corresponding to the sample video:
    while the test vehicle is driving, capturing images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain sample videos;
    requesting the central control system of the test vehicle to extract the control log of the test vehicle, the control log comprising control instructions, generated while a driver drives the test vehicle, used to control the turning of the test vehicle;
    establishing the correspondence between the sample videos and the control log according to the system times recorded on them.
  6. An apparatus for adjusting the driving direction of a vehicle, comprising:
    an image acquisition module, configured to capture images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
    a video frame extraction module, configured to extract video frames from the target video at equal intervals as road-condition pictures;
    a road-condition picture input module, configured to input the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain angle values output by the fast convolutional neural network in sequence, wherein an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
    an instruction conversion module, configured to convert each angle value into a control instruction according to a preset instruction conversion rule;
    an instruction sending module, configured to send the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  7. The apparatus for adjusting the driving direction of a vehicle according to claim 6, wherein the fast convolutional neural network is pre-trained through the following modules:
    a sample video acquisition module, configured to obtain a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video;
    a sample picture extraction module, configured to extract video frames from the sample video at equal intervals as sample pictures;
    a control instruction extraction module, configured to extract from the control log control instructions corresponding in time to the sample pictures;
    a sample angle value conversion module, configured to convert the control instructions into sample angle values according to a preset instruction conversion rule;
    a sample picture input module, configured to input each sample picture into the fast convolutional neural network to obtain a training angle value output by the fast convolutional neural network for the sample picture;
    a network parameter adjustment module, configured to adjust parameters of the fast convolutional neural network, taking the output training angle value as an adjustment target, to minimize an error between the obtained training angle value and the sample angle value corresponding to each sample picture;
    a training completion determination module, configured to determine that the fast convolutional neural network has been trained if the errors between the training angle values and the sample angle values of the sample pictures satisfy a preset condition.
  8. The apparatus for adjusting the driving direction of a vehicle according to claim 7, wherein the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and the sample picture input module comprises:
    a first convolution unit, configured to convolve each sample vector with each of the preset number of two-dimensional convolution kernels to obtain first-layer convolution outputs on the convolution channels, each sample vector being the vector obtained by vectorizing the corresponding sample picture;
    a second convolution unit, configured to convolve each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain second-layer convolution outputs;
    a training angle value output unit, configured to feed the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for each sample picture.
  9. The apparatus for adjusting the driving direction of a vehicle according to claim 7, further comprising:
    a first judgment module, configured to judge whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value;
    a first determination module, configured to determine, if the judgment result of the first judgment module is yes, that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition;
    a second judgment module, configured to judge whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value;
    a second determination module, configured to determine, if the judgment result of the second judgment module is yes, that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
  10. The apparatus for adjusting the driving direction of a vehicle according to any one of claims 7 to 9, further comprising:
    a camera acquisition module, configured to capture, while the test vehicle is driving, images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain sample videos;
    a control log extraction module, configured to request the central control system of the test vehicle to extract the control log of the test vehicle, the control log comprising control instructions, generated while a driver drives the test vehicle, used to control the turning of the test vehicle;
    a relationship establishment module, configured to establish the correspondence between the sample videos and the control log according to the system times recorded on them.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
    capturing images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
    extracting video frames from the target video at equal intervals as road-condition pictures;
    inputting the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain angle values output by the fast convolutional neural network in sequence, wherein an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
    converting each angle value into a control instruction according to a preset instruction conversion rule;
    sending the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  12. The computer device according to claim 11, wherein the fast convolutional neural network is pre-trained through the following steps:
    obtaining a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video;
    extracting video frames from the sample video at equal intervals as sample pictures;
    extracting from the control log control instructions corresponding in time to the sample pictures;
    converting the control instructions into sample angle values according to a preset instruction conversion rule;
    for each sample picture, inputting the sample picture into the fast convolutional neural network to obtain a training angle value output by the fast convolutional neural network for the sample picture;
    taking the output training angle value as an adjustment target, adjusting parameters of the fast convolutional neural network to minimize an error between the obtained training angle value and the sample angle value corresponding to each sample picture;
    if the errors between the training angle values and the sample angle values of the sample pictures satisfy a preset condition, determining that the fast convolutional neural network has been trained.
  13. The computer device according to claim 12, wherein the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and said inputting each sample picture into the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for the sample picture comprises:
    convolving each sample vector with each of the preset number of two-dimensional convolution kernels to obtain first-layer convolution outputs on the convolution channels, each sample vector being the vector obtained by vectorizing the corresponding sample picture;
    convolving each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain second-layer convolution outputs;
    feeding the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for each sample picture.
  14. The computer device according to claim 12, wherein, before determining that the fast convolutional neural network has been trained, the processor further implements the following steps when executing the computer-readable instructions:
    judging whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value;
    if the errors between the training angle value and the sample angle value of every sample picture are all smaller than the preset first error value, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition;
    judging whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value;
    if the proportion of qualified sample pictures among all the sample pictures exceeds the preset ratio threshold, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
  15. The computer device according to any one of claims 12 to 14, wherein, before obtaining the sample video captured of the road conditions ahead of the test vehicle and the control log of the test vehicle corresponding to the sample video, the processor further implements the following steps when executing the computer-readable instructions:
    while the test vehicle is driving, capturing images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain sample videos;
    requesting the central control system of the test vehicle to extract the control log of the test vehicle, the control log comprising control instructions, generated while a driver drives the test vehicle, used to control the turning of the test vehicle;
    establishing the correspondence between the sample videos and the control log according to the system times recorded on them.
  16. One or more non-volatile readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    capturing images of the road conditions ahead of a target vehicle in real time through a camera to obtain a target video;
    extracting video frames from the target video at equal intervals as road-condition pictures;
    inputting the road-condition pictures, in their chronological order within the target video, into a pre-trained fast convolutional neural network to obtain angle values output by the fast convolutional neural network in sequence, wherein an angle value is the angle by which the target vehicle needs to turn in the face of the current road conditions;
    converting each angle value into a control instruction according to a preset instruction conversion rule;
    sending the control instructions in sequence to the central control system of the target vehicle, so that the central control system of the target vehicle adjusts the driving direction of the target vehicle according to the control instructions.
  17. The non-volatile readable storage media according to claim 16, wherein the fast convolutional neural network is pre-trained through the following steps:
    obtaining a sample video captured of the road conditions ahead of a test vehicle, and a control log of the test vehicle corresponding to the sample video;
    extracting video frames from the sample video at equal intervals as sample pictures;
    extracting from the control log control instructions corresponding in time to the sample pictures;
    converting the control instructions into sample angle values according to a preset instruction conversion rule;
    for each sample picture, inputting the sample picture into the fast convolutional neural network to obtain a training angle value output by the fast convolutional neural network for the sample picture;
    taking the output training angle value as an adjustment target, adjusting parameters of the fast convolutional neural network to minimize an error between the obtained training angle value and the sample angle value corresponding to each sample picture;
    if the errors between the training angle values and the sample angle values of the sample pictures satisfy a preset condition, determining that the fast convolutional neural network has been trained.
  18. The non-volatile readable storage media according to claim 17, wherein the convolutional layer of the fast convolutional neural network is provided with a preset number of two-dimensional convolution kernels and 1*1 convolution kernels, and said inputting each sample picture into the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for the sample picture comprises:
    convolving each sample vector with each of the preset number of two-dimensional convolution kernels to obtain first-layer convolution outputs on the convolution channels, each sample vector being the vector obtained by vectorizing the corresponding sample picture;
    convolving each first-layer convolution output with a 1*1 convolution kernel on each convolution channel to obtain second-layer convolution outputs;
    feeding the second-layer convolution outputs into the fully connected layer of the fast convolutional neural network to obtain the training angle value output by the fast convolutional neural network for each sample picture.
  19. The non-volatile readable storage media according to claim 17, wherein, before determining that the fast convolutional neural network has been trained, the computer-readable instructions, when executed by one or more processors, further cause the one or more processors to perform the following steps:
    judging whether the errors between the training angle value and the sample angle value of every sample picture are all smaller than a preset first error value;
    if the errors between the training angle value and the sample angle value of every sample picture are all smaller than the preset first error value, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition;
    judging whether the proportion of qualified sample pictures among all the sample pictures exceeds a preset ratio threshold, a qualified sample picture being one whose error between the training angle value and the sample angle value is smaller than a preset second error value;
    if the proportion of qualified sample pictures among all the sample pictures exceeds the preset ratio threshold, determining that the errors between the training angle values and the sample angle values of the sample pictures satisfy the preset condition.
  20. The non-volatile readable storage media according to any one of claims 17 to 19, wherein, before obtaining the sample video captured of the road conditions ahead of the test vehicle and the control log of the test vehicle corresponding to the sample video, the computer-readable instructions, when executed by one or more processors, further cause the one or more processors to perform the following steps:
    while the test vehicle is driving, capturing images of the road conditions ahead of the test vehicle in real time through a camera pre-installed on the test vehicle to obtain sample videos;
    requesting the central control system of the test vehicle to extract the control log of the test vehicle, the control log comprising control instructions, generated while a driver drives the test vehicle, used to control the turning of the test vehicle;
    establishing the correspondence between the sample videos and the control log according to the system times recorded on them.
PCT/CN2019/091843 2019-02-19 2019-06-19 Method, apparatus, computer device and storage medium for adjusting vehicle driving direction WO2020168660A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910124097.XA CN109934119B (zh) 2019-02-19 2019-02-19 Method, apparatus, computer device and storage medium for adjusting vehicle driving direction
CN201910124097.X 2019-02-19

Publications (1)

Publication Number Publication Date
WO2020168660A1 true WO2020168660A1 (zh) 2020-08-27

Family

ID=66985757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091843 WO2020168660A1 (zh) 2019-02-19 2019-06-19 Method, apparatus, computer device and storage medium for adjusting vehicle driving direction

Country Status (2)

Country Link
CN (1) CN109934119B (zh)
WO (1) WO2020168660A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364695A (zh) * 2020-10-13 2021-02-12 Hangzhou City Big Data Operation Co., Ltd. Behavior prediction method and apparatus, computer device, and storage medium
CN112766307A (zh) * 2020-12-25 2021-05-07 Beijing Megvii Technology Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
CN112785466A (zh) * 2020-12-31 2021-05-11 iFLYTEK Co., Ltd. AI enabling method and apparatus for hardware, storage medium, and device
CN113537002A (zh) * 2021-07-02 2021-10-22 Anyang Institute of Technology Driving environment evaluation method and apparatus based on a dual-mode neural network model
CN114639037A (zh) * 2022-03-03 2022-06-17 Qingdao Hisense Network Technology Co., Ltd. Method for determining vehicle saturation in an expressway service area, and electronic device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347043B (zh) * 2019-07-15 2023-03-10 Wuhan Tianyu Information Industry Co., Ltd. Intelligent driving control method and apparatus
CN113963307A (zh) * 2020-07-02 2022-01-21 Shanghai Jilian Network Technology Co., Ltd. Method and apparatus for recognizing content on a target and collecting video, storage medium, and computer device
CN114018275A (zh) * 2020-07-15 2022-02-08 Guangzhou Automobile Group Co., Ltd. Driving control method and system for a vehicle at an intersection, and computer-readable storage medium
CN113095266B (zh) * 2021-04-19 2024-05-10 Beijing Jingwei Hirain Technologies Co., Ltd. Angle recognition method, apparatus, and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108803604A (zh) * 2018-06-06 2018-11-13 Shenzhen Yicheng Autonomous Driving Technology Co., Ltd. Vehicle automatic driving method and apparatus, and computer-readable storage medium
CN109165562A (zh) * 2018-07-27 2019-01-08 Shenzhen SenseTime Technology Co., Ltd. Neural network training method, lateral control method, apparatus, device, and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL1031867C1 (nl) * 2005-07-08 2007-01-09 Everhardus Fransiscu Weijdeven Method for determining vehicle data.
CN109204308B (zh) * 2017-07-03 2020-04-07 SAIC Motor Corporation Limited Method for determining a lane keeping algorithm, and lane keeping control method and system
CN107633220A (zh) * 2017-09-13 2018-01-26 Jilin University Method for recognizing targets ahead of a vehicle based on a convolutional neural network
WO2019127271A1 (zh) * 2017-12-28 2019-07-04 Shenzhen Streamax Technology Co., Ltd. Alarm method and apparatus for physical conflict behavior, storage medium, and server
CN108491827B (zh) * 2018-04-13 2020-04-10 Tencent Technology (Shenzhen) Co., Ltd. Vehicle detection method and apparatus, and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108803604A (zh) * 2018-06-06 2018-11-13 Shenzhen Yicheng Autonomous Driving Technology Co., Ltd. Vehicle automatic driving method and apparatus, and computer-readable storage medium
CN109165562A (zh) * 2018-07-27 2019-01-08 Shenzhen SenseTime Technology Co., Ltd. Neural network training method, lateral control method, apparatus, device, and medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364695A (zh) * 2020-10-13 2021-02-12 Hangzhou City Big Data Operation Co., Ltd. Behavior prediction method and apparatus, computer device, and storage medium
CN112766307A (zh) * 2020-12-25 2021-05-07 Beijing Megvii Technology Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
CN112785466A (zh) * 2020-12-31 2021-05-11 iFLYTEK Co., Ltd. AI enabling method and apparatus for hardware, storage medium, and device
CN113537002A (zh) * 2021-07-02 2021-10-22 Anyang Institute of Technology Driving environment evaluation method and apparatus based on a dual-mode neural network model
CN113537002B (zh) * 2021-07-02 2023-01-24 Anyang Institute of Technology Driving environment evaluation method and apparatus based on a dual-mode neural network model
CN114639037A (zh) * 2022-03-03 2022-06-17 Qingdao Hisense Network Technology Co., Ltd. Method for determining vehicle saturation in an expressway service area, and electronic device
CN114639037B (zh) * 2022-03-03 2024-04-09 Qingdao Hisense Network Technology Co., Ltd. Method for determining vehicle saturation in an expressway service area, and electronic device

Also Published As

Publication number Publication date
CN109934119B (zh) 2023-10-31
CN109934119A (zh) 2019-06-25

Similar Documents

Publication Publication Date Title
WO2020168660A1 (zh) Method, apparatus, computer device and storage medium for adjusting vehicle driving direction
WO2021196873A1 (zh) License plate character recognition method and apparatus, electronic device, and storage medium
EP3830716B1 (en) Storage edge controller with a metadata computational engine
US9786036B2 (en) Reducing image resolution in deep convolutional networks
WO2021016873A1 (zh) Cascaded neural network-based attention detection method, computer apparatus, and computer-readable storage medium
WO2021063341A1 (zh) Image enhancement method and apparatus
US20180284574A1 (en) Method and device for camera rapid automatic focusing
US11912203B2 (en) Virtual mirror with automatic zoom based on vehicle sensors
US20220046161A1 (en) Image acquisition method and apparatus, device, and storage medium
CN107613262B (zh) Visual information processing system and method
US11120275B2 (en) Visual perception method, apparatus, device, and medium based on an autonomous vehicle
WO2021175006A1 (zh) Vehicle image detection method and apparatus, computer device, and storage medium
US20210342593A1 (en) Method and apparatus for detecting target in video, computing device, and storage medium
WO2021047587A1 (zh) Gesture recognition method, electronic device, computer-readable storage medium, and chip
JP2023523745A (ja) Computer vision-based character string recognition method, apparatus, device, and medium
WO2013128822A1 (ja) Analysis processing system
US20200010016A1 (en) Lateral image processing apparatus and method of mirrorless car
US20180109717A1 (en) Method and apparatus for enabling precise focusing
CN113901871A (zh) Driver dangerous action recognition method, apparatus, and device
CN112109729A (zh) Human-computer interaction method, apparatus, and system for an in-vehicle system
WO2022183321A1 (zh) Image detection method and apparatus, and electronic device
Cheng et al. Edge-assisted lightweight region-of-interest extraction and transmission for vehicle perception
WO2021129712A1 (zh) Vehicle verification method and system
EP3817374A1 (en) Method, apparatus and system for adjusting field of view of observation, and storage medium and mobile apparatus
WO2022253085A1 (zh) Server and data processing method executed by server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916274

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 12.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19916274

Country of ref document: EP

Kind code of ref document: A1