CN117774992A - Driving intention recognition method, device and system - Google Patents


Info

Publication number
CN117774992A
Authority
CN
China
Prior art keywords
driving
intention
data
recognition model
driver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311563786.3A
Other languages
Chinese (zh)
Inventor
Name not to be published, at the applicant's request (请求不公布姓名)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kingfar International Inc
Original Assignee
Kingfar International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kingfar International Inc filed Critical Kingfar International Inc
Priority to CN202311563786.3A priority Critical patent/CN117774992A/en
Publication of CN117774992A publication Critical patent/CN117774992A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides a driving intention recognition method, device and system, which recognize the driving intention of a driver through data on the person, the vehicle, the road and the environment, wherein the person data is driver data, the vehicle data is vehicle data, the road data is road condition information, and the environment data is weather information. The method comprises the following steps: acquiring environment information of the position of a vehicle, the environment information comprising road condition information and weather information; obtaining a corresponding driving intention recognition model according to the environment information; obtaining first vehicle data and first driver data of the vehicle according to the driving intention recognition model; and inputting the first vehicle data and the first driver data into the driving intention recognition model to obtain the driving intention of the driver. According to the embodiment of the application, the driving intention of the driver can be accurately identified.

Description

Driving intention recognition method, device and system
Technical Field
The application relates to the technical field of intelligent driving, in particular to a driving intention identification method, device and system.
Background
In the intelligent driving field, the driving task of the driver can be assisted by an intelligent driving assistance system. The intelligent driving assistance system can infer and identify the driving intention of the driver so as to more comprehensively understand the driving intention of the driver and better assist the driving task of the driver. However, how to recognize the driving intention of the driver more accurately is a problem to be solved.
Disclosure of Invention
The application provides a driving intention recognition method, device and system, which can more accurately recognize the driving intention of a driver.
In a first aspect, an embodiment of the present application provides a driving intention recognition method, which recognizes the driving intention of a driver through data on the person, the vehicle, the road, and the environment, wherein the person data is driver data, the vehicle data is vehicle data, the road data is road condition information, and the environment data is weather information. The method includes: acquiring environment information of the position of a vehicle, the environment information including road condition information and weather information; acquiring a corresponding driving intention recognition model according to the environment information; acquiring first vehicle data and first driver data of the vehicle according to the driving intention recognition model; and inputting the first vehicle data and the first driver data into the driving intention recognition model, taking the driving intention output by the driving intention recognition model as the driving intention of the driver. In this method, in addition to comprehensively considering the influence of the first vehicle data and the first driver data on the driving intention of the driver, the influence of environment information such as road condition information and weather information is also considered, so that the factors considered when recognizing the driving intention are more comprehensive and the accuracy of driving intention recognition is improved.
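The four steps above can be sketched as follows. This is a minimal illustrative sketch only: the registry keys, the `required_inputs`/`predict` fields, the sensor names, and the stub decision rule are assumptions, not part of the patent text.

```python
# Hedged sketch of the first-aspect flow: select an environment-specific
# model, gather the inputs it declares, and take its output as the intent.

def recognize_driving_intention(env_info, model_registry, sensors):
    """Select a model by environment info, fetch its inputs, infer intent."""
    # Steps 1-2: pick the model trained for this weather/road-condition pair.
    key = (env_info["weather"], env_info["road_condition"])
    model = model_registry[key]
    # Step 3: fetch only the first vehicle/driver data this model needs.
    features = {name: sensors[name]() for name in model["required_inputs"]}
    # Step 4: the model's output is taken as the driver's driving intention.
    return model["predict"](features)

# Toy registry with a stub "model" for sunny-highway conditions.
registry = {
    ("sunny", "highway"): {
        "required_inputs": ["speed", "steering_angle", "gaze_x"],
        "predict": lambda f: "lane_change" if f["steering_angle"] > 5 else "keep_lane",
    }
}
sensors = {"speed": lambda: 100.0, "steering_angle": lambda: 8.0, "gaze_x": lambda: 0.3}
intent = recognize_driving_intention(
    {"weather": "sunny", "road_condition": "highway"}, registry, sensors
)
print(intent)  # lane_change
```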
In one possible implementation, the driving intent of the driver includes: the driving behavior intention, the driving intention recognition model includes: a driving behavior intention recognition model;
inputting the first vehicle data and the first driver data into a driving intention recognition model, taking the driving intention output by the driving intention recognition model as the driving intention of the driver, comprising: the first vehicle data and the first driver data are input into a driving behavior intention recognition model, and the driving behavior intention output by the driving behavior intention recognition model is used as the driving behavior intention of the driver.
In one possible implementation, the driving behavior intent includes at least one of: lane change intention, overtaking intention, parking intention, U-turn intention, acceleration intention, deceleration intention and turning intention.
In one possible implementation, the driving intent of the driver includes: the driving style intention, the driving intention recognition model includes: a driving style intention recognition model;
inputting the first vehicle data and the first driver data into a driving intention recognition model, taking the driving intention output by the driving intention recognition model as the driving intention of the driver, comprising: the first vehicle data and the first driver data are input into a driving style intention recognition model, and the driving style intention output by the driving style intention recognition model is used as the driving style intention of the driver.
In one possible implementation, the driving style intent includes at least one of: conservative, aggressive, and steady.
In one possible implementation, the environmental information may include: weather information and/or road condition information.
In one possible implementation, when the environmental information includes weather information, acquiring the weather information of the location where the vehicle is located includes: acquiring a weather image of the position of the vehicle; and inputting the weather image into a preset weather information identification model, and taking the weather information output by the weather information identification model as the weather information of the position of the vehicle.
In one possible implementation, when the environmental information includes road condition information, acquiring the road condition information of the location where the vehicle is located includes: acquiring a road condition image of the position of a vehicle; inputting the road condition image into a preset road condition information identification model, and taking the road condition information output by the road condition information identification model as the road condition information of the position of the vehicle.
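The two implementations above share the same data flow: capture an image at the vehicle's location, feed it to a preset recognition model, and take the model's output as the environment information. A real system would use a CNN such as the VGG network referenced in Fig. 3B; the sketch below substitutes a trivial brightness rule as a stand-in classifier so the flow is runnable. The thresholds, labels, and pixel values are all illustrative assumptions.

```python
# Hedged sketch of the "image -> recognition model -> label" step, with a
# brightness heuristic standing in for a trained weather recognition model.

def classify_weather(image):
    """image: 2-D list of grayscale pixel values in [0, 255]."""
    pixels = [p for row in image for p in row]
    mean_brightness = sum(pixels) / len(pixels)
    # Assumed rule: bright scenes -> sunny, dark scenes -> overcast.
    return "sunny" if mean_brightness > 128 else "overcast"

bright_frame = [[200, 210], [190, 205]]  # toy 2x2 "weather image"
print(classify_weather(bright_frame))  # sunny
```

A road condition information recognition model would follow the same pattern, mapping a road condition image to a label such as "highway" or "urban".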
In one possible implementation, the driving intent of the driver includes: driving behavior intention; the method further comprises the steps of: acquiring early warning conditions of driving behavior intention; and when the early warning condition is met, early warning is carried out aiming at the driving behavior intention.
In one possible implementation, obtaining the early warning condition of the driving behavior intention includes: and acquiring the early warning condition of the driving behavior intention from the early warning condition corresponding to the environmental information.
In one possible implementation, the driving intention of the driver further includes: driving style intent;
the method for acquiring the early warning condition of the driving behavior intention comprises the following steps: and acquiring the early warning condition of the driving behavior intention from the early warning condition corresponding to the environmental information and/or the driving style intention.
In one possible implementation, the driving behavior intent includes a first driving behavior intent, and the pre-warning condition of the first driving behavior intent includes: a number of times threshold within a first time period;
When the early warning condition is met, performing an early warning for the driving behavior intention includes: counting the number of times the first driving behavior is executed within a first time window that ends at the moment the driver's driving intention is recognized, the first driving behavior being the driving behavior corresponding to the first driving behavior intention; and, when that count is not smaller than the number threshold, performing an early warning for the first driving behavior intention.
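The count-threshold rule can be sketched as below. The window length (60 s), threshold (3), and event timestamps are illustrative assumptions.

```python
# Hedged sketch of the count-threshold warning rule: warn when the first
# driving behavior occurred at least `count_threshold` times in the window
# of length `window_s` ending at the moment the intention is recognized.

def should_warn(event_times, intent_time, window_s, count_threshold):
    """Count behavior executions in the window ending at intent_time."""
    recent = [t for t in event_times if 0 <= intent_time - t <= window_s]
    return len(recent) >= count_threshold

# Three lane changes in the last 60 s; an assumed threshold of 3 triggers.
lane_change_times = [5.0, 30.0, 55.0, 120.0]  # seconds since trip start
print(should_warn(lane_change_times, intent_time=60.0, window_s=60.0,
                  count_threshold=3))  # True
```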
The first driving behavior intention may be, for example, a lane change intention or an overtaking intention.
In one possible implementation, the driving behavior intent includes a second driving behavior intent, and the pre-warning condition of the second driving behavior intent includes: a vehicle speed threshold;
when the early warning condition is met, early warning is carried out aiming at the driving behavior intention, and the method comprises the following steps: acquiring the speed of a vehicle; and when the vehicle speed is not less than the vehicle speed threshold value, early warning is carried out aiming at the second driving behavior intention.
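A minimal sketch of the speed-threshold rule follows. The per-intention thresholds (in km/h) are assumed values for illustration only, not figures from the patent.

```python
# Hedged sketch: warn when the current vehicle speed is not less than the
# threshold configured for the recognized second driving behavior intention.

SPEED_THRESHOLDS = {"turning": 40.0, "acceleration": 110.0, "u_turn": 20.0}

def speed_warning(intent, vehicle_speed):
    """Return True when a warning should be raised for this intent."""
    threshold = SPEED_THRESHOLDS.get(intent)
    return threshold is not None and vehicle_speed >= threshold

print(speed_warning("turning", 55.0))  # True: 55 km/h >= 40 km/h
print(speed_warning("turning", 30.0))  # False: below the assumed threshold
```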
The second driving behavior intention may be, for example, a U-turn intention, an acceleration intention, or a turning intention.
In one possible implementation, the driving intent of the driver includes: the driving style intent, the method further comprising: and when the driving style intention is the preset driving style intention, early warning is carried out aiming at the driving style intention.
In one possible implementation, the training method of the driving behavior intention recognition model includes: acquiring multiple sets of segment raw data of each driving behavior intention under the environment information corresponding to the driving behavior intention recognition model; generating sample data of each driving behavior intention from the multiple sets of segment raw data of that driving behavior intention; and training an initial driving behavior intention recognition model with the sample data of each driving behavior intention to obtain the driving behavior intention recognition model.
In one possible implementation, generating sample data of each driving behavior intention from the sets of segment raw data of each driving behavior intention includes:
for a set of segment raw data of a driving behavior intention, extracting first target data from the segment raw data; the first target data includes: second vehicle data and second driver data;
performing correlation analysis on the first target data to obtain second target data;
and generating a sample of driving behavior intention according to the second target data.
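One plausible reading of the correlation-analysis step is redundancy reduction: drop one member of every highly correlated feature pair so that the second target data carries less duplicated information. The 0.95 cutoff, the feature names, and the interpretation itself are assumptions made for this sketch.

```python
# Hedged sketch: Pearson correlation between candidate features, keeping
# only features not highly correlated with an already-kept feature.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(features, cutoff=0.95):
    """Greedily keep features whose correlation with kept ones is < cutoff."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < cutoff for k in kept):
            kept.append(name)
    return kept

raw = {
    "speed":     [10, 20, 30, 40],
    "speed_mph": [6.2, 12.4, 18.6, 24.9],  # near-duplicate of speed, dropped
    "steering":  [1, -2, 3, -1],
}
print(select_features(raw))  # ['speed', 'steering']
```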
In one possible implementation, the training method of the driving style intention recognition model includes: acquiring multiple sets of segment raw data under the environment information corresponding to the driving style intention recognition model; determining the driving style intention of each set of segment raw data; generating sample data from each set of segment raw data and its driving style intention; and training an initial driving style intention recognition model with the sample data to obtain the driving style intention recognition model.
In one possible implementation, acquiring multiple sets of segment raw data under the environment information corresponding to the driving style intention recognition model includes: acquiring raw data under the environment information corresponding to the driving style intention recognition model; and extracting the segment raw data from the raw data using a window whose length is a preset second duration.
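The windowed extraction can be sketched as below. The window length, sampling rate, optional overlap, and sample values are illustrative assumptions; the patent text only specifies that the window length equals the preset second duration.

```python
# Hedged sketch: slice a raw time series into fixed-length windows
# (the "second duration"), optionally overlapping via a smaller step.

def extract_segments(raw, window_len, step=None):
    """Return fixed-length slices of raw; step < window_len gives overlap."""
    step = step or window_len
    return [raw[i:i + window_len]
            for i in range(0, len(raw) - window_len + 1, step)]

speeds = [50, 52, 55, 54, 53, 51]          # one reading per second (toy data)
print(extract_segments(speeds, window_len=2))          # non-overlapping
print(extract_segments(speeds, window_len=2, step=1))  # sliding by one sample
```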
In one possible implementation, determining the driving style intent of each set of segment raw data includes: for each set of segment raw data, calculating a scale score of the segment raw data according to a preset multi-dimensional driver style scale; calculating an expert score of the segment raw data; calculating a composite score of the segment raw data from the scale score and the expert score; and taking the driving style intention corresponding to the scoring interval to which the composite score belongs as the driving style intention of the segment raw data.
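The scoring pipeline above can be sketched as follows. The equal 0.5/0.5 weighting of scale and expert scores and the interval boundaries are assumptions for illustration; the patent does not specify them.

```python
# Hedged sketch: combine the scale score and expert score into a composite
# score, then map the score interval to a driving style intention label.

def composite_score(scale_score, expert_score, w_scale=0.5):
    """Weighted combination; w_scale=0.5 is an assumed equal weighting."""
    return w_scale * scale_score + (1.0 - w_scale) * expert_score

def style_from_score(score):
    """Assumed intervals: <40 conservative, 40-70 steady, >=70 aggressive."""
    if score < 40:
        return "conservative"
    if score < 70:
        return "steady"
    return "aggressive"

score = composite_score(scale_score=80, expert_score=70)
print(score, style_from_score(score))  # 75.0 aggressive
```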
In a second aspect, an embodiment of the present application provides a driving intention recognition apparatus, including:
the acquisition unit is used for acquiring environment information of the position of the vehicle, and the environment information comprises: road condition information and weather information; acquiring a corresponding driving intention recognition model according to the environmental information, and acquiring first vehicle data and first driver data of the vehicle according to the driving intention recognition model;
and the processing unit is used for inputting the first vehicle data and the first driver data into the driving intention recognition model to obtain the driving intention of the driver.
In a third aspect, an embodiment of the present application provides an electronic device, including:
A processor, a memory; wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a fourth aspect, embodiments of the present application provide a driving system, including: the system comprises a data source acquisition device, a vehicle-mounted device and an electronic device, wherein the electronic device is used for executing the method of any one of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to perform the method of any of the first aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a driving system to which the driving intention recognition method according to the embodiment of the present application is applicable;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3A is a schematic flow chart of a driving intention recognition method according to an embodiment of the present application;
fig. 3B is a schematic structural diagram of a VGG network;
fig. 3C is a flowchart illustrating an example of a driving intention recognition method according to an embodiment of the present application;
fig. 4 is another flow chart of the driving intention recognition method provided in the embodiment of the present application;
fig. 5 is a schematic diagram of an early warning implementation manner of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a training method of a driving behavior intention recognition model according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a training method of a driving style intention recognition model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a driving intention recognition device according to an embodiment of the present application.
Detailed Description
The terminology used in the description section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
In the intelligent driving field, the driving task of the driver can be assisted by an intelligent driving assistance system. The intelligent driving assistance system can infer and identify the driving intention of the driver so as to more comprehensively understand the driving intention of the driver and better assist the driving task of the driver. However, how to recognize the driving intention of the driver more accurately is a problem to be solved.
In one embodiment, the driving intention of the driver may be recognized based on the behavior data of the driver while driving the vehicle. However, behavior data is not the only factor that influences the driving intention: the driver's own condition, the road environment, the weather environment, and other factors may also influence it. Moreover, studies have shown that, in intelligent driving scenarios, the driver's own factors are the main cause of accidents, with serious accidents caused by driver factors accounting for 87.5% of the total. Among accidents caused by driver factors, those caused by improper driver operation account for a large proportion; a driver failing to perform the driving behavior appropriate to the current road conditions can directly cause a traffic accident.
Therefore, the embodiments of the application provide a driving intention recognition method and device in which different driving intention recognition models are trained for different weather information and/or road condition information, and the models are trained using driver factors and/or vehicle data. The influence of these factors on the driving intention is thus taken into account during recognition, which improves the accuracy of driving intention recognition compared with recognizing the driving intention based only on the behavior data of the driver driving the vehicle.
Furthermore, the early warning process can be carried out on the driver according to the identified driving intention, so that the improper operation of the driver is reduced, and the driving safety is improved.
Hereinafter, an architecture of a system to which the driving intention recognition method of the embodiment of the present application may be applied is exemplarily described with reference to fig. 1. As shown in fig. 1, a schematic structural diagram of a driving system to which the driving intention recognition method according to the embodiment of the present application is applied includes: camera 110, eye tracker 120, electroencephalograph 130, radar 140, in-vehicle device 150, electronic device 160, and the like. Wherein,
the camera 110 is used to capture still images or video. The number of cameras 110 may be one or more. The camera 110 is used for capturing behavior images of a driver, weather images of a vehicle location, road condition images of the vehicle location, and the like.
The position of the camera in the vehicle, which captures an image of the driver's behavior, may specifically be located around the driver, and the still image or video captured by the camera 110 when the driver sits in the driver's seat includes the driver's behavior image. For example, if it is desired to detect the facial motion of the driver, one camera 110 may be disposed in front of the driver to be aimed at the driver's face, so that the still image or video captured by the camera 110 includes the driver's facial image when the driver is seated in the driving position, to detect the facial motion of the driver.
It should be noted that, in the embodiment of the present application, the behavior of the driver may include a driving behavior performed by the driver for performing vehicle control, and may also include a behavior of the driver that is not related to vehicle control during driving.
The camera for photographing the weather image may be provided at any position of the vehicle where the weather image can be photographed, for example, the front of the vehicle, the roof of the vehicle, the rear of the vehicle, or the like.
The cameras for photographing road condition images may be provided at the front and rear of the vehicle to photograph road condition images in front and rear of the vehicle, or may be provided around the vehicle to photograph road condition images around the vehicle.
An object is imaged through the lens onto the photosensitive element of the camera 110. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and transmits the electrical signal to the in-vehicle apparatus 150, which converts it into a digital image signal.
In some embodiments, an image signal processor (image signal processor, ISP) may be disposed in the camera 110, and the electrical signal may be converted into a digital image signal by the ISP in the camera 110, so that the camera 110 may directly transmit the digital image signal converted by the ISP to the vehicle-mounted device 150, and at this time, the vehicle-mounted device 150 does not need to perform conversion from the electrical signal to the digital image signal.
The eye tracker 120 is used to detect eye movement data of the driver. The eye tracker 120 may be a head-mounted eye tracker or a non-head-mounted (for example, remote/telemetric) eye tracker, as long as it can detect the driver's eye movement data. The eye tracker 120 may transmit the eye movement data to the in-vehicle device 150.
The electroencephalograph 130 is used for detecting an electroencephalogram signal of a driver. In some embodiments, the electroencephalograph 130 can communicate the detected electroencephalogram signals to the in-vehicle device 150, and the in-vehicle device 150 converts the electroencephalogram signals to electroencephalogram data. In other embodiments, the electroencephalograph 130 can convert the electroencephalogram signals into electroencephalogram data and then transmit the electroencephalogram data to the in-vehicle apparatus 150.
In some embodiments, the road condition information of the vehicle may be detected not by the road condition image captured by the camera, but by using a radar, or the road condition information of the vehicle may be detected by a combination of the image detection and the radar detection, where the system shown in fig. 1 may further include a radar 140. The radar 140 may transmit radar signals to the vehicle-mounted device 150, and the vehicle-mounted device 150 generates road condition information of the vehicle according to the radar signals.
In some embodiments, the devices that provide basic data for the vehicle-mounted device 150 and the electronic device 160, such as the camera 110, the eye tracker 120, the electroencephalograph 130, and the radar 140, may be collectively referred to as a data source acquisition device of the driving system.
The in-vehicle apparatus 150 is a control apparatus of the vehicle for realizing control of the vehicle based on a user operation. The in-vehicle apparatus 150 may also perform detection of driver behavior data from the behavior image. In some embodiments, the in-vehicle device may also be referred to as a smart cockpit.
The in-vehicle device 150 may transmit driver data such as eye movement data, brain electrical data, behavior data, and the like of the driver, and vehicle data to the electronic device 160.
The electronic device 160 in this embodiment may be a car networking terminal, a computer such as a notebook computer or a tablet computer (PAD), a handheld communication device such as a mobile phone, or a handheld computing device.
The electronic device 160 may obtain eye movement data, brain electrical data, behavior data, and vehicle data of the driver from the in-vehicle device 150, and execute the driving intention recognition method of the embodiment of the present application.
In other embodiments provided herein, the vehicle-mounted device 150 may be used only as a relay device between the electronic device 160 and the devices such as the camera 110, the eye tracker 120, and/or the electroencephalograph 130, and the electronic device 160 may detect the eye movement data, the electroencephalogram data, and the behavior data of the driver.
In other embodiments provided herein, the driving intent recognition method of the embodiments herein may also be performed directly by the in-vehicle apparatus 150.
In other embodiments provided herein, the camera 110, the eye tracker 120, and/or the electroencephalograph 130 may also be directly connected to the electronic device 160, and then the electronic device 160 may directly obtain the eye movement data, the electroencephalogram data, and/or the behavior data of the driver from the camera 110, the eye tracker 120, and/or the electroencephalograph 130.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 200 includes: processor 210, memory 220, display 230, speaker 240, etc. The electronic device 160 described above may be implemented by the electronic device 200 shown in fig. 2.
Optionally, to provide richer functionality, the electronic device may further include one or more of the following: an antenna, a mobile communication module, a wireless communication module, an audio module, a receiver, a microphone, an earphone interface, a charging management module, a power management module, a battery, and the like; the embodiments of the present application are not limited in this respect.
Processor 210 may include one or more processing units such as, for example: processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an ISP, a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The electrical signals transmitted by the camera may be converted into digital image signals by the ISP. The ISP may output the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache. It may hold instructions or data that the processor 210 has just used or will reuse; if the processor 210 needs an instruction or datum again, it can be fetched directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 210, and thereby improves system efficiency.
Memory 220 may be used to store computer executable program code that includes instructions. The memory 220 may include a stored program area and a stored data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 200 (e.g., audio data, etc.), and so on. In addition, the memory 220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 210 performs various functional applications of the electronic device 200 and data processing by executing instructions stored in the memory 220 and/or instructions stored in a memory provided in the processor.
In the embodiment shown in fig. 2, the memory 220 is provided in the electronic device 200 as an example, and in other embodiments provided in the embodiments of the present application, the memory 220 may not be provided in the electronic device 200, and in this case, the memory 220 may be connected to the electronic device 200 through an interface provided by the electronic device 200, and may further be connected to the processor 210 in the electronic device 200.
The display 230 is used to display images, videos, and the like. In some embodiments, the electronic device 200 may include 1 or more display screens 230.
The speaker 240 is used for converting an audio electrical signal into a sound signal. The electronic device 200 may play sound signals such as music through the speaker 240.
Hereinafter, the driving intention recognition method of the embodiment of the present application will be described in detail in connection with the above-described system architecture and the structure of the electronic device.
The driving intention recognition method provided by the embodiment of the present application may be executed by the above-described electronic device 160 or the in-vehicle device 150. In the following description, the electronic device 160 is taken as an example to execute the driving intention recognition method provided in the embodiment of the present application, and it is understood that the electronic device in the following embodiment may be replaced by an in-vehicle device.
The driving intention recognition method provided by the embodiment of the application can recognize the driving intention of the driver. Alternatively, the driving intention of the driver in the embodiment of the present application may include: driving behavior intent and/or driving style intent.
The driving behavior intention is a prediction of a driving behavior of the driver, where a driving behavior is a control action performed by the driver on the vehicle, for example: lane changing, overtaking, parking, U-turning, accelerating, decelerating, turning, and so on. Driving behavior intentions correspond to driving behaviors and may include: lane change intention, overtaking intention, parking intention, U-turn intention, acceleration intention, deceleration intention, and/or turning intention.
The driving style intention is a prediction of the driving style of the driver, which characterizes the behavior and attitude exhibited by the driver during driving. Driving styles may be divided according to different conditions and standards; in this embodiment, the driving styles include, for example: conservative, aggressive, and robust, and accordingly the driving style intentions may also include: conservative, aggressive, and robust.
When the driving intention recognition is performed, the driving intention recognition can be realized through a driving intention recognition model obtained through pre-training. When the driving intention includes a driving behavior intention, the driving intention recognition model may include: and the driving behavior intention recognition model is used for recognizing the driving behavior intention of the driver. When the driving intention includes a driving style intention, the driving intention recognition model may include: a driving style intention recognition model for recognizing a driving style intention of the driver.
In the embodiment of the application, the corresponding driving behavior intention recognition model and/or driving style intention recognition model are preset in the electronic equipment according to different environmental information. The above-mentioned environmental information may include: weather information and/or road condition information.
Optionally, the weather information may be consistent with the weather divisions used in the meteorological field, such as weather forecasts, or may be divided independently, which is not limited in the embodiment of the present application. In some embodiments, the weather information may be referred to as a weather type. It should be noted that the more finely the weather information is divided, the more accurately the driving intention recognition method of the embodiment of the present application recognizes the driving behavior intention. For example:
in one example, the weather information is consistent with the divisions used in the meteorological field, such as weather forecasts, and may include, but is not limited to: sunny, cloudy, light rain, moderate rain, heavy rain, light snow, moderate snow, heavy snow, and the like.
In another example, the weather information may be divided according to a preset rule and may include: sunny, foggy, rainy, snowy, etc.
Alternatively, the dividing conditions and standards of the road condition information may include, but are not limited to: the type of road where the vehicle is located, the traffic flow in the line-of-sight area of the vehicle, the people flow in the line-of-sight area of the vehicle, the traffic flow distribution in the line-of-sight area of the vehicle, and/or the people flow distribution in the line-of-sight area of the vehicle. The line-of-sight area of the vehicle may be, for example, the area within 200 m in front of and behind the vehicle. In some embodiments, the road condition information may be referred to as a road condition type. It should be noted that the more finely the road condition information is divided, the more accurately the driving intention recognition method of the embodiment of the present application recognizes the driving behavior intention.
The road types may include: urban roads, provincial roads, county roads, highways, rural roads, etc., and may be further subdivided; the embodiments of the present application are not limited in this respect.
The traffic flow refers to the number of vehicles passing through a certain road section per unit time; in the embodiment of the present application, the total number of vehicles in the line-of-sight area of the vehicle may be used as the traffic flow.
The people flow refers to the number of people passing through a certain road section per unit time; in the embodiment of the present application, the number of people in the line-of-sight area of the vehicle may be used as the people flow.
The traffic flow distribution refers to the distribution positions of vehicles other than the host vehicle, for example, whether a vehicle is located around the host vehicle and whether it is far or near.
The people flow distribution refers to the distribution positions of people, for example, whether people are located in front of the host vehicle, whether the people in front of the host vehicle are far from or near to it, whether people are randomly distributed near a crosswalk at an intersection, and the like.
In one embodiment, if divided by traffic flow and people flow, the road condition information may include: road condition not complex, road condition slightly complex, road condition moderately complex, and road condition extremely complex. Wherein,
the road condition is not complex: no other vehicles and no pedestrians are present;
the road condition is slightly complex: low traffic flow (e.g., the total number of vehicles in the line-of-sight area is greater than 0 and less than or equal to 20) and low people flow (e.g., the total number of pedestrians in the line-of-sight area is greater than 0 and less than or equal to 20);
the road condition is moderately complex: medium traffic flow (e.g., the total number of vehicles in the line-of-sight area is greater than 20 and less than or equal to 60) and medium people flow (e.g., the total number of pedestrians in the line-of-sight area is greater than 20 and less than or equal to 60);
the road condition is extremely complex: high traffic flow (e.g., the total number of vehicles in the line-of-sight area is greater than 60) and high people flow (e.g., the total number of pedestrians in the line-of-sight area is greater than 60).
In the electronic device, corresponding identifiers can be set for different road condition information; for example, the identifier for road condition not complex is 0, for road condition slightly complex is 1, for road condition moderately complex is 2, and for road condition extremely complex is 3.
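One plausible reading of the complexity grades and identifiers above can be sketched as follows. The grading-by-worst-dimension rule and all names are illustrative assumptions, not taken from the application:

```python
def classify_road_condition(vehicle_count: int, pedestrian_count: int) -> int:
    """Map line-of-sight vehicle and pedestrian counts to the identifiers
    described above: 0 = not complex, 1 = slightly complex,
    2 = moderately complex, 3 = extremely complex.

    Assumption: when the two counts fall into different grades, the
    higher (more complex) grade wins.
    """
    worst = max(vehicle_count, pedestrian_count)
    if worst == 0:
        return 0  # no other vehicles and no pedestrians
    if worst <= 20:
        return 1  # low traffic flow / low people flow
    if worst <= 60:
        return 2  # medium traffic flow / medium people flow
    return 3      # high traffic flow / high people flow
```

Under this sketch, 30 vehicles and 10 pedestrians in the line-of-sight area would be graded as moderately complex (identifier 2).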
Fig. 3A is a schematic flow chart of a driving intention recognition method according to an embodiment of the present application, and an electronic device is taken as an example to execute the method. As shown in fig. 3A, the method may include:
step 301: and acquiring the environment information of the position of the vehicle.
Alternatively, the environmental information may include: weather information and/or road condition information.
The weather information dividing method in this step may refer to the foregoing description, which is not repeated here.
In one embodiment, if the division of the weather information is consistent with the division of the weather information in the meteorological field, the electronic device may acquire a weather forecast of the location of the vehicle through the network, and determine the weather information of the location of the vehicle according to the weather forecast.
In another embodiment, a weather information identification model may be preset in the electronic device, and the input of the weather information identification model may be a weather image, and the output may be weather information. In the step, the electronic equipment can acquire a real-time weather image of the position of the vehicle, and input the weather image into a weather information identification model to obtain the weather information of the position of the vehicle. Alternatively, the electronic device may use a camera of the electronic device to capture a weather image, or obtain, through the vehicle-mounted device, a weather image of the location of the vehicle from the camera in the system shown in fig. 1, or directly obtain, from the camera in the system shown in fig. 1, a weather image of the location of the vehicle.
The training method of the weather information identification model is exemplified as follows: pre-constructing an initial weather information identification model, wherein the initial weather information identification model can be realized through a neural network; and acquiring images of different weather information as samples, and training the initial weather information identification model to obtain the weather information identification model. Specific implementations of training the initial weather information identification model using the samples may refer to related model training techniques, and embodiments of the present application are not limited.
Alternatively, the initial weather information identification model may be implemented by a deep convolutional neural network model, such as the visual geometry group network (visual geometry group net, VGGnet) model. As shown in fig. 3B, the VGG network model may be provided with 19 layers, uniformly using 3x3 filters with a stride of 1 and 2x2 max pooling (MaxPooling) with a stride of 2. The output feature size of a convolution is calculated as: out_size = (in_size - F_size + 2P)/S + 1, where out_size represents the size of the output feature, in_size represents the size of the input feature, F_size represents the filter size, P represents the padding, and S represents the stride. This structure and layer count make the model more compact and deeper than earlier models.
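The convolution output-size formula can be checked with a few lines of code. The helper name is ours, and the 224-pixel input is simply a typical VGG input size used for illustration:

```python
def conv_out_size(in_size: int, f_size: int, padding: int, stride: int) -> int:
    # out_size = (in_size - F_size + 2P) / S + 1
    return (in_size - f_size + 2 * padding) // stride + 1

# A VGG-style 3x3 convolution with stride 1 and padding 1 preserves
# the spatial size, while 2x2 max pooling with stride 2 halves it:
print(conv_out_size(224, 3, 1, 1))  # 224
print(conv_out_size(224, 2, 0, 2))  # 112
```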
The road condition information dividing method in this step may refer to the foregoing description, which is not repeated here.
In one embodiment, a first road condition information identification model may be preset in the electronic device, where an input of the first road condition information identification model may be a road condition image of a location where the vehicle is located, and an output may be a probability of various road condition information. In the step, the electronic equipment can acquire a real-time road condition image of the position of the vehicle, and input the road condition image into the first road condition information identification model to obtain the road condition information of the position of the vehicle. Alternatively, the electronic device may obtain the road condition image of the position of the vehicle from the camera in the system shown in fig. 1 through the vehicle-mounted device, or directly obtain the road condition image of the position of the vehicle from the camera in the system shown in fig. 1. The real-time road condition image may be road condition images in front of and behind the vehicle, or road condition images around the vehicle, which is not limited in the embodiment of the present application.
The training method of the first road condition information recognition model is exemplarily described as follows: constructing a first initial road condition information recognition model, wherein the first initial road condition information recognition model can be realized through a target detection network, such as a YOLOv7 network; road condition images of different road condition information are obtained as samples, and the first initial road condition information recognition model is trained to obtain the first road condition information recognition model. Specific implementation of training the first initial road condition information recognition model by using the sample can refer to a relevant model training technology, and the embodiment of the application is not limited.
In another embodiment, a second road condition information recognition model may be preset in the electronic device, where the input of the second road condition information recognition model may be a radar signal at the location of the vehicle, and the output may be the probabilities of various road condition information. The electronic device can acquire a real-time radar signal of the position of the vehicle and input the radar signal into the second road condition information recognition model to obtain the road condition information of the position of the vehicle. Alternatively, the electronic device may obtain the radar signal of the position of the vehicle from the radar in the system shown in fig. 1 through the vehicle-mounted device, or directly obtain the radar signal from the radar in the system shown in fig. 1.
The training method of the second road condition information recognition model is exemplified as follows: constructing a second initial road condition information recognition model, wherein the second initial road condition information recognition model can be realized through a target detection network, such as a YOLOv7 network; and acquiring radar signals of different road condition information as samples, and training the second initial road condition information recognition model to obtain a second road condition information recognition model. Specific implementation of training the second initial road condition information recognition model by using the sample can refer to a related model training technology, and the embodiment of the application is not limited.
It can be understood that if the driving intention recognition model preset in the electronic device corresponds to the weather information, only the weather information of the position where the vehicle is located may be obtained in this step; if the driving intention recognition model preset in the electronic equipment corresponds to the road condition information, only the road condition information of the position of the vehicle can be obtained in the step; if the driving intention recognition model preset in the electronic equipment corresponds to the weather information and the road condition information, the weather information and the road condition information of the position of the vehicle can be obtained in the step.
Step 302: and acquiring a corresponding driving intention recognition model according to the environment information.
In the embodiment of the application, the corresponding driving intention recognition model can be preset for different environment information, so that the corresponding driving intention recognition model can be obtained according to the environment information of the position of the vehicle in the step.
In one embodiment, if the driving behavior intention recognition model and/or the driving style intention recognition model corresponding to different weather information are preset in the electronic device, the corresponding driving behavior intention recognition model and/or driving style intention recognition model may be obtained according to the weather information in this step.
For example, assume that the weather information includes sunny, foggy, rainy, and snowy, and the driving intention recognition model includes a driving behavior intention recognition model and a driving style intention recognition model. Then a driving behavior intention recognition model and a driving style intention recognition model corresponding to sunny weather, a driving behavior intention recognition model and a driving style intention recognition model corresponding to foggy weather, a driving behavior intention recognition model and a driving style intention recognition model corresponding to rainy weather, and a driving behavior intention recognition model and a driving style intention recognition model corresponding to snowy weather may be preset in the electronic device. If the weather information of the position of the vehicle acquired in step 301 is sunny, the driving behavior intention recognition model and the driving style intention recognition model corresponding to sunny weather may be acquired according to the weather information "sunny" in this step.
In another embodiment, if driving behavior intention recognition models and/or driving style intention recognition models corresponding to different road condition information are preset in the electronic device, the corresponding driving behavior intention recognition model and/or driving style intention recognition model may be obtained according to the road condition information in this step. For a specific example, refer to the weather information example above; the main difference is that the weather information is replaced by the road condition information.
In still another embodiment, if driving behavior intention recognition models and/or driving style intention recognition models corresponding to different combinations of weather information and road condition information are preset in the electronic device, the corresponding driving behavior intention recognition model and/or driving style intention recognition model may be obtained according to the weather information and the road condition information in this step. For a specific example, refer to the weather information example above; the main difference is that the weather information is replaced by a combination of the weather information and the road condition information.
Taking the weather information including a sunny day, a foggy day, a rainy day, and a snowy day, the road condition information including a non-complex road condition, a slightly complex road condition, a moderately complex road condition, and a extremely complex road condition as an example, a plurality of different combinations composed of the weather information and the road condition information can be obtained by combining the weather information and the road condition information in pairs, for example (sunny day, non-complex road condition), (sunny day, slightly complex road condition), and the like, and a corresponding driving behavior intention recognition model and/or driving style intention recognition model can be preset for each combination in the electronic device.
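The per-combination model lookup of step 302 can be sketched as a simple registry keyed by (weather information, road condition identifier). The model identifiers and the registry layout here are hypothetical placeholders for trained networks:

```python
from typing import Dict, Tuple

# (driving behavior intent model id, driving style intent model id)
ModelPair = Tuple[str, str]

# Hypothetical registry: one model pair preset per combination.
MODEL_REGISTRY: Dict[Tuple[str, int], ModelPair] = {
    ("sunny", 0): ("behavior_sunny_simple", "style_sunny_simple"),
    ("sunny", 1): ("behavior_sunny_slight", "style_sunny_slight"),
    ("rainy", 0): ("behavior_rainy_simple", "style_rainy_simple"),
    # ... one entry per (weather, road condition) combination
}

def select_models(weather: str, road_condition: int) -> ModelPair:
    """Return the preset model pair for the current environment."""
    try:
        return MODEL_REGISTRY[(weather, road_condition)]
    except KeyError:
        raise KeyError(f"no models preset for {(weather, road_condition)!r}")
```

A real system would map the identifiers to loaded networks; the lookup structure is the point here.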
Step 303: vehicle data and driver data of the vehicle are acquired according to the driving intention recognition model.
The driver data refers to data related to the driver that is generated during driving, and may include, for example, behavior data, eye movement data, electrocardiographic data, and/or electroencephalogram data of the driver.
The behavior data of the driver is used for recording the behavior of the driver during driving. The behavior of the driver may include driving behavior performed by the driver for vehicle control, and may also include behavior performed during driving that is unrelated to vehicle control; the embodiment of the present application is not limited. For example, driving behavior performed for vehicle control may include: shifting gears, turning the steering wheel, accelerating, braking, etc.; behavior unrelated to vehicle control may include: making a phone call, facial movements, etc.
The driver's eye movement data may include: eye movement coordinate data, pupil diameter data, blink event data, etc. of the driver during driving.
The brain electrical data of the driver may include: time-sequence electroencephalogram signals or frequency-domain electroencephalogram signals obtained by converting the time-sequence electroencephalogram signals.
The vehicle data is used for recording related data generated in the using process of the vehicle, and can specifically comprise related data generated in the driving process of the vehicle, such as steering wheel angle information, vehicle speed information, direction light information, accelerator information, brake information, clutch information, gear information, lane change frequency, overtaking frequency and/or the like.
In this embodiment of the present application, the input parameters of the different driving behavior intention recognition models preset in the electronic device are independent of each other; that is, they may be the same or different. For example, if a driving behavior intention recognition model 1 corresponding to sunny weather and a driving behavior intention recognition model 2 corresponding to foggy weather are preset in the electronic device, the input parameters of the driving behavior intention recognition model 1 may be parameters 1 to 5, while the input parameters of the driving behavior intention recognition model 2 may be parameters 1 to 5, parameters 1 to 4, parameters 1 to 6, parameters 6 to 10, etc. Similarly, the input parameters of the different driving style intention recognition models preset in the electronic device may be independent of each other. Based on this, the vehicle data and the driver data that need to be acquired in this step may be determined from the input parameters required by the driving behavior intention recognition model and/or the driving style intention recognition model acquired in step 302.
Step 304: and inputting the acquired vehicle data and the driver data into a driving intention recognition model to obtain the driving intention of the driver.
Optionally, when the driving intention of the driver includes the driving behavior intention and the driving intention recognition model includes the driving behavior intention recognition model, this step may include: inputting the acquired vehicle data and/or driver data into the driving behavior intention recognition model to obtain the driving behavior intention of the driver.
Optionally, when the driving intention of the driver includes the driving style intention and the driving intention recognition model includes the driving style intention recognition model, this step may include: inputting the acquired vehicle data and/or driver data into the driving style intention recognition model to obtain the driving style intention of the driver.
In this step, the implementation of the driving behavior intention recognition model and the driving style intention recognition model and the training method may refer to the subsequent embodiments, which are not described herein.
The implementation of fig. 3A is illustrated by fig. 3C, where the weather information includes sunny, rainy, foggy, and snowy, and the road condition information includes road conditions that are not complex, slightly complex, moderately complex, and extremely complex. As shown in fig. 3C, a weather information identification model implemented through a VGG network may be preset in the electronic device, through which the weather information of the position of the vehicle may be identified; a road condition information identification model implemented through YOLOv7 may also be preset, through which the road condition information of the position of the vehicle may be identified. Based on the identified weather information and road condition information, the corresponding driving behavior intention recognition model and driving style intention recognition model can be determined. The electronic device collects driver data, such as behavior data, eye movement data, and electroencephalogram data, together with vehicle data, and inputs them into the driving behavior intention recognition model and the driving style intention recognition model respectively, so that the driving behavior intention and the driving style intention are obtained and the driving intention is recognized.
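The staged pipeline of fig. 3C can be sketched end to end with stand-in functions for each stage. Everything below is an illustrative skeleton with assumed names, not the application's implementation:

```python
def identify_weather(weather_image) -> str:
    # Stand-in for the VGG-based weather information identification model;
    # real code would run inference on the image.
    return "sunny"

def identify_road_condition(road_image) -> int:
    # Stand-in for the YOLOv7-based road condition identification model.
    return 0  # identifier 0: road condition not complex

def select_models(weather: str, road_condition: int):
    # Stand-in for the per-environment model lookup of step 302; a real
    # system would return trained networks for this combination.
    behavior_model = lambda driver, vehicle: "lane change intention"
    style_model = lambda driver, vehicle: "conservative"
    return behavior_model, style_model

def recognize_driving_intention(weather_image, road_image,
                                driver_data, vehicle_data):
    """Steps 301-304: environment -> models -> intents."""
    weather = identify_weather(weather_image)
    road_condition = identify_road_condition(road_image)
    behavior_model, style_model = select_models(weather, road_condition)
    return (behavior_model(driver_data, vehicle_data),
            style_model(driver_data, vehicle_data))
```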
In the method shown in fig. 3A, a corresponding driving intention recognition model is obtained according to the environmental information of the vehicle, such as weather information and/or road condition information, and the first vehicle data and/or first driver data of the vehicle are input into the driving intention recognition model to obtain the driving intention of the driver. Because the model matches the current environment, the driving intention of the driver can be identified accurately.
Fig. 4 is another flow chart of the driving intention recognition method according to the embodiment of the present application. With respect to the method described in fig. 3A, the following step 305 may be further included after step 304.
Step 305: and carrying out early warning processing according to the driving intention of the driver.
In one embodiment, when the driving intention of the driver includes the driving behavior intention, the electronic device may preset early warning conditions for various driving behavior intentions under different environmental information. Correspondingly, in this step, the electronic device may obtain, according to the driving behavior intention of the driver obtained in step 304, the early warning condition corresponding to that driving behavior intention from the early warning conditions corresponding to the environmental information obtained in step 301, and perform early warning for the driving behavior intention when the early warning condition is satisfied.
For example, assuming that the environmental information includes weather information and road condition information, the electronic device may preset early warning conditions for various driving behavior intentions under different combinations of the weather information and the road condition information; for example, early warning conditions for different driving behavior intentions under (sunny, road condition not complex), early warning conditions under (sunny, road condition slightly complex), etc. If the weather information and the road condition information acquired in step 301 are (sunny, road condition not complex) and the driving behavior intention identified in step 304 is the "lane change intention", the electronic device in this step may acquire the early warning condition for the "lane change intention" from the early warning conditions under (sunny, road condition not complex), and perform early warning for the "lane change intention" when the early warning condition is satisfied.
Optionally, the worse the weather and road conditions, the stricter the early warning conditions for the various driving behavior intentions, and vice versa.
For example:
the early warning condition for a driving behavior intention such as the overtaking intention or the lane change intention may be a threshold on the number of occurrences within a preset duration. For example, under (sunny, road condition not complex) the early warning condition of the overtaking intention may be 6 times within 1 minute, under (sunny, road condition slightly complex) it may be 4 times within 1 minute, under (sunny, road condition moderately complex) it may be 2 times within 1 minute, and under (sunny, road condition extremely complex) it may be 0 times within 1 minute. At this time, if the driving behavior intention identified in step 304 is the "overtaking intention" and the weather information and road condition information acquired in step 301 are (sunny, road condition not complex), the early warning condition for the "overtaking intention" obtained in this step is 6 times within 1 minute; if the electronic device determines that the number of overtakes of the vehicle within the 1 minute before the driving behavior intention was identified in step 304 has reached or exceeded 6, early warning may be performed for the "overtaking intention".
The early warning condition corresponding to a driving behavior intention such as turning around, accelerating, decelerating, or turning may be a real-time speed threshold of the vehicle. For example, under (sunny, road condition not complex) the early warning condition of the acceleration intention may be a real-time vehicle speed threshold of 110 km/h, and under (sunny, road condition slightly complex) it may be 90 km/h. At this time, if the driving behavior intention identified in step 304 is the "acceleration intention" and the weather information and road condition information acquired in step 301 are (sunny, road condition not complex), the early warning condition for the "acceleration intention" obtained in this step is a real-time speed threshold of 110 km/h; if the electronic device determines that the real-time speed of the vehicle reaches or exceeds 110 km/h, early warning may be performed for the "acceleration intention".
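Both kinds of early warning condition above — an occurrence-count threshold within a time window and a real-time speed threshold — can be sketched as follows. The class and function names, and the use of a sliding window, are our assumptions:

```python
from collections import deque

class OccurrenceWarner:
    """Warn when a behavior intent (e.g. overtaking) has occurred at least
    `threshold` times within the last `window_s` seconds."""

    def __init__(self, threshold: int, window_s: float = 60.0):
        self.threshold = threshold
        self.window_s = window_s
        self.events = deque()  # timestamps of recognized intents

    def record(self, now: float) -> bool:
        """Record one recognized intent; return True if warning fires."""
        self.events.append(now)
        # Drop events that have fallen out of the sliding window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) >= self.threshold

def speed_warning(speed_kmh: float, threshold_kmh: float) -> bool:
    """Speed-threshold warning for acceleration/turning-type intents."""
    return speed_kmh >= threshold_kmh
```

With a threshold of 6 under (sunny, road condition not complex), the sixth overtaking intention recognized inside one minute would trigger the warning.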
In some embodiments, the early warning conditions of different driving behavior intentions may be independent of each other, without necessarily being linked. For example, under the same weather information and road condition information, the early warning conditions of different driving behavior intentions may be the same or different.
Optionally, the early warning in this step may be implemented by: displaying a text and/or graphic prompt on the screen of the electronic device, playing a voice prompt through the electronic device, and the like.
For example, if the predicted driving behavior intention is a lane change intention and the electronic device determines that the pre-warning condition corresponding to the lane change intention is satisfied, as shown in fig. 5, the electronic device may display a graphic prompt on the screen and may play a preset alert sound or a prompt sound such as "you are in a frequent lane change state, please pay attention to driving safety", etc.
The implementation of this step when the driving intention of the driver includes the driving behavior intention and the pre-warning conditions of various driving behavior intentions under different weather information preset in the electronic device may refer to this embodiment, which is not described here in detail.
The implementation of this step when the driving intention of the driver includes the driving behavior intention and the pre-warning conditions of various driving behavior intentions under different road condition information preset in the electronic device may refer to this embodiment, which is not described herein.
In another embodiment, the driving intention of the driver includes a driving behavior intention and a driving style intention, and early warning conditions of various driving behavior intentions under different combinations of weather information, road condition information and driving style intention can be preset in the electronic device.
For example, assume that the electronic device presets early warning conditions for different driving behavior intentions under (sunny, road condition not complex, conservative), under (sunny, road condition slightly complex, conservative), etc. If the weather information and road condition information acquired in step 301 are (sunny, road condition not complex), and the driving behavior intention acquired in step 304 is the "lane change intention" with the driving style intention "conservative", the electronic device in this step may acquire the early warning condition for the "lane change intention" from the early warning conditions under (sunny, road condition not complex, conservative), and perform early warning for the "lane change intention" when the early warning condition is satisfied.
It can be understood that the worse the weather and road conditions, and the more aggressive the driving style intention, the stricter the early warning conditions for the driving behavior intentions, and vice versa. For example, assume that the driving style intentions include: conservative, robust, and aggressive; under the same weather information and road condition information, the more aggressive the driving style intention, the stricter the early warning conditions of the driving behavior intentions. For example, the early warning condition of a driving behavior intention such as the overtaking intention or the lane change intention may be a threshold on the number of occurrences within a preset duration: under (sunny, road condition not complex, conservative) the early warning condition of the overtaking intention may be 7 times within 1 minute, under (sunny, road condition not complex, robust) it may be 5 times within 1 minute, under (sunny, road condition not complex, aggressive) it may be 3 times within 1 minute, and so on.
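One way to hold such style-dependent early warning conditions is a table keyed by (weather, road condition identifier, driving style intention). The layout and the concrete numbers below are illustrative assumptions, chosen only so that the more aggressive the style, the stricter (smaller) the threshold, as the rule above requires:

```python
# Hypothetical per-style threshold table for the overtaking intention
# (maximum occurrences per minute before a warning fires).
OVERTAKE_THRESHOLDS = {
    ("sunny", 0, "conservative"): 7,
    ("sunny", 0, "robust"): 5,
    ("sunny", 0, "aggressive"): 3,
    # ... one entry per (weather, road condition, style) combination
}

def overtake_threshold(weather: str, road_condition: int, style: str) -> int:
    """Look up the occurrence threshold for the current environment and style."""
    return OVERTAKE_THRESHOLDS[(weather, road_condition, style)]
```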
In the third embodiment, the driving intention of the driver includes a driving style intention, and the electronic device may issue early warnings for one or more preset driving style intentions.
For example, still taking the driving style intentions of the conservative, robust and aggressive types as an example, the electronic device may be preset to warn against the aggressive driving style intention. Correspondingly, if the driving style intention of the driver identified in step 304 is the aggressive type, the electronic device in this step determines that it matches the preset "aggressive" driving style and issues an early warning for the aggressive driving style intention.
It can be understood that the electronic device may be preset to warn against two or more driving styles, and the warning modes for different driving styles may be the same or different.
In the method shown in fig. 4, early warning can further be performed according to the driving intention of the driver. Because the driving intention is identified more accurately, the early warning based on it is correspondingly more accurate, which can reduce improper operations by the driver during driving and improve driving safety.
The following exemplifies a training method of the driving behavior intention recognition model and the driving style intention recognition model.
Fig. 6 is a schematic flow chart of a training method of a driving behavior intention recognition model according to an embodiment of the present application, and as shown in fig. 6, the training method may include:
step 601: and acquiring a plurality of groups of fragment original data of each driving behavior intention under the environment information corresponding to the driving behavior intention recognition model.
Optionally, the environmental information may include weather information and/or road condition information.
Because the driving behavior intention recognition model corresponds to particular environment information, this step obtains the segment raw data of the various driving behavior intentions under the environment information corresponding to that model. For example, to train the driving behavior intention recognition model corresponding to (sunny day, road condition not complex), this step may obtain segment raw data of the various driving behavior intentions under sunny, non-complex road conditions; to train the model corresponding to a sunny day alone, segment raw data of the various driving behavior intentions under sunny conditions may be obtained.
Each driving behavior intention in this step is a driving behavior intention that the driving behavior intention recognition model can recognize. For example, if the model obtained by training is to recognize the lane change intention and the overtaking intention, then multiple sets of segment raw data of the lane change intention and multiple sets of segment raw data of the overtaking intention need to be acquired in this step.
Optionally, a set of segment raw data of a driving behavior intention refers to raw data of a certain duration extracted from the raw data of the driving behavior corresponding to that intention. For example, if 30 minutes of raw data of the "lane change" driving behavior are collected, 5 minutes of raw data may be extracted as one set of segment raw data of the "lane change intention" described above. It can be understood that, for each driving behavior intention, multiple segments of a certain duration can be extracted, that is, multiple sets of segment raw data, thereby generating multiple samples for training the driving behavior intention recognition model and improving its accuracy on each driving behavior intention.
The raw data may include driver data and/or vehicle data, and correspondingly the segment raw data may include driver data and/or vehicle data.
Step 602: and filtering the segment original data of each driving behavior intention to obtain the segment original data of each driving behavior intention after filtering.
This step is optional, and the filtering may be implemented by any suitable filtering technique for the segment raw data, which is not limited in this embodiment of the present application.
Through the filtering of this step, the segment raw data of each driving behavior intention becomes more accurate, which improves the accuracy of the sample data of each driving behavior intention and, in turn, the recognition accuracy of the trained driving behavior intention recognition model.
Step 603: and generating sample data of each driving behavior intention according to the segment original data of each driving behavior intention after filtering.
In this step, when generating the sample data of each driving behavior intention, for a set of filtered segment raw data of one driving behavior intention, first target data may be extracted from the set of segment raw data; correlation analysis is then performed on the first target data to obtain second target data; and a sample of that driving behavior intention is generated from the second target data. Performing this processing on each set of filtered segment raw data of each driving behavior intention yields a sample corresponding to each set, and thus the sample data of each driving behavior intention.
It is understood that the segment raw data may include parameters that the sample does not need. For example, suppose the segment raw data includes parameters 1 to 5 but only parameters 1 to 4 are needed to generate the sample; parameters 1 to 4 are then the first target data. Extracting the first target data from the segment raw data yields the data needed for sample generation while removing the data not needed for subsequent processing, reducing the amount of data to be processed. It will be appreciated that if all of the data in the segment raw data is used to generate the sample, the step of extracting the first target data may be skipped.
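The extraction of the first target data just described can be sketched as follows; the parameter names are hypothetical:

```python
def extract_first_target_data(segment, needed_params):
    # Keep only the parameters needed for sample generation, discarding the
    # rest of the segment raw data (e.g. keep parameters 1-4 of 1-5 above).
    return {name: value for name, value in segment.items() if name in needed_params}
```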
Optionally, the correlation analysis may be performed on the first target data between different parameter values of the same parameter. Specifically, the Pearson correlation coefficient between two parameter values may be calculated; when it is greater than or equal to a preset coefficient threshold (for example, 0.3), one of the two parameter values may be retained in the first target data and the other discarded. Through correlation analysis, redundant data in the first target data can be reduced, reducing its data volume.
The Pearson correlation coefficient is calculated as follows:

r(x, y) = cov(x, y) / (√var(x) · √var(y))

where r(x, y) is the Pearson correlation coefficient of the parameter value x and the parameter value y, cov(x, y) is the covariance of x and y, var(x) is the variance of x, and var(y) is the variance of y.
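The correlation analysis above can be sketched as follows; the 0.3 threshold comes from the example in the text, while the function names and the greedy keep/discard order are assumptions:

```python
import math

def pearson(x, y):
    # r = cov(x, y) / (sqrt(var(x)) * sqrt(var(y))), matching the formula
    # above; population covariance and variance are used.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var_x = sum((a - mx) ** 2 for a in x) / n
    var_y = sum((b - my) ** 2 for b in y) / n
    return cov / (math.sqrt(var_x) * math.sqrt(var_y))

def drop_correlated(features, threshold=0.3):
    # Keep a feature series only if it is not strongly correlated
    # (|r| >= threshold) with any already-kept series.
    kept = {}
    for name, values in features.items():
        if all(abs(pearson(values, kv)) < threshold for kv in kept.values()):
            kept[name] = values
    return kept
```

Note that `pearson` is undefined for a constant series (zero variance); a production version would guard against that case.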
Step 604: training an initial driving behavior intention recognition model by using sample data of each driving behavior intention to obtain the driving behavior intention recognition model.
Alternatively, the initial driving behavior intention recognition model may be implemented using a decision tree model. The Gini coefficient of the decision tree model is calculated as follows:

Gini(D) = 1 − Σ(|Ck| / |D|)^2;

where Gini(D) is the Gini coefficient of the sample set D, |Ck| is the number of samples of the k-th class in D, and |D| is the total number of samples in D. The smaller the Gini coefficient, the better the classification effect obtained on the samples used. The sample set here may be the sample data of each driving behavior intention described above.
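A minimal sketch of the Gini calculation above, where `labels` stands for the class labels of the samples in set D:

```python
def gini(labels):
    # Gini(D) = 1 - sum over classes k of (|Ck| / |D|)^2, as in the
    # formula above; labels is the list of class labels in sample set D.
    n = len(labels)
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((ck / n) ** 2 for ck in counts.values())
```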
The specific implementation of training the initial driving behavior intention recognition model by using the sample data of each driving behavior intention may be implemented by a related model training method, which is not limited in the embodiment of the present application.
Through the training, the driving behavior intention recognition model corresponding to the environmental information such as weather information and/or road condition information can be obtained.
In order to further test the recognition effect of the training-derived driving behavior intention recognition model, as shown in fig. 6, the method may further include:
step 605: and testing the driving behavior intention recognition model.
The method for testing the driving behavior intention recognition model can be implemented by using a related model testing method, and the embodiment of the application is not limited.
Through this testing, a driving behavior intention recognition model with higher recognition accuracy can be obtained. The model takes driver data and vehicle data as input, depending on the data used in training, and outputs a probability for each driving behavior intention. In practical applications, the driving behavior intention with the highest probability may be determined as the result of the driving behavior intention recognition model.
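The highest-probability selection described above can be sketched as follows; representing the model output as a dictionary is an assumption:

```python
def pick_intention(probabilities):
    # The model outputs one probability per driving behavior intention;
    # the intention with the highest probability is taken as the result.
    return max(probabilities, key=probabilities.get)
```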
Fig. 7 is a schematic flow chart of a training method of a driving style intention recognition model according to an embodiment of the present application, and as shown in fig. 7, the training method may include:
step 701: and acquiring a plurality of groups of fragment original data under the environment information corresponding to the driving style intention recognition model.
In one embodiment, multiple sets of segment raw data may be extracted from the raw data under the environment information corresponding to the driving style intention recognition model by sliding a window of a preset duration. For example, if the driving style intention recognition model corresponds to (sunny day, road condition not complex), 30 minutes of raw data under sunny, non-complex road conditions may be collected; in this step, with a step length of 1 minute and a window length of 5 minutes, the segment raw data of minutes 0 to 5, minutes 1 to 6, minutes 2 to 7, and so on can be extracted, yielding multiple sets of segment raw data under (sunny day, road condition not complex).
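The window-based extraction above can be sketched as follows, assuming the raw data is a list of per-minute records; with a 1-minute step and a 5-minute window, a 30-minute recording yields the overlapping segments 0 to 5, 1 to 6, and so on:

```python
def sliding_windows(raw, step, window):
    # Split raw data (e.g. a list of per-minute records) into overlapping
    # segments of length `window`, advancing by `step` records each time.
    return [raw[start:start + window]
            for start in range(0, len(raw) - window + 1, step)]
```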
Alternatively, the fragment raw data may include: vehicle data such as driving speed, lane change frequency, overtaking frequency and the like of the vehicle. Optionally, other vehicle data, such as throttle information, brake information, clutch information, gear information, etc., may also be included in the segment raw data. Optionally, the fragment raw data may further include driver data.
Step 702: the driving style of the raw data of each group of segments is calculated.
Optionally, the step may include:
calculating the scale score of the original data of each group of fragments according to a preset multidimensional driver style scale;
calculating expert evaluation scores (hereinafter referred to as expert scores) of the original data of each group of fragments;
and determining the driving style of the original data of each group of fragments according to the scale score and the expert score of the original data of each group of fragments.
Alternatively, the driver style scale and expert score may be 10-level scores, respectively.
Optionally, determining the driving style of each set of segment raw data according to its scale score and expert score may specifically include: taking a weighted average of the scale score and the expert score of each set of segment raw data to obtain its comprehensive score; and, according to preset score intervals corresponding to the driving style intentions, determining the driving style intention corresponding to the interval containing the comprehensive score as the driving style intention of that set of segment raw data. For example, assume the preset score interval [0, 3) corresponds to the conservative driving style intention, [3, 7] to the robust type, and (7, 10] to the aggressive type; if the comprehensive score of a set of segment raw data is 2 points, its driving style intention is the conservative type.
The weighted average of the scale score and the expert score of a set of segment raw data is calculated as follows:

x̄ = w_i · x_i + w_j · x_j

where x̄ is the comprehensive score of the segment raw data, x_i is its scale score with weight w_i, and x_j is its expert score with weight w_j (the weights summing to 1 for a weighted average).
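Combining the weighted average with the score intervals from the example above, the style determination can be sketched as follows; the equal default weights are an assumption:

```python
def driving_style(scale_score, expert_score, w_scale=0.5, w_expert=0.5):
    # Comprehensive score = w_i * x_i + w_j * x_j (weights summing to 1),
    # then mapped to a style via the example intervals:
    # [0, 3) conservative, [3, 7] robust, (7, 10] aggressive.
    score = w_scale * scale_score + w_expert * expert_score
    if score < 3:
        return "conservative"
    if score <= 7:
        return "robust"
    return "aggressive"
```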
Step 703: and generating sample data according to the plurality of groups of fragment original data and the driving style thereof.
Alternatively, one sample may be generated from each set of fragment raw data and its driving style intention, thereby obtaining sample data composed of a plurality of samples.
Step 704: training the initial driving style intention recognition model by using the sample data to obtain a driving style intention recognition model.
Alternatively, the initial driving style intention recognition model may be implemented using a decision tree model. The Gini coefficient formula of the decision tree model is described in step 604 and is not repeated here.
The specific implementation of training the initial driving style intention recognition model by using the sample data may be implemented by a related model training method, which is not limited in the embodiment of the present application.
Through the training, a driving style intention recognition model corresponding to the environment information can be obtained.
To further test the recognition effect of the training-derived driving style intention recognition model, as shown in fig. 7, the method may further include:
step 705: and testing the driving style intention recognition model.
The method for testing the driving style intention recognition model can be implemented by using a related model testing method, and the embodiment of the application is not limited.
Through this testing, a driving style intention recognition model with higher recognition accuracy can be obtained. The model takes vehicle data, or vehicle data and driver data, as input, depending on the data used in training, and outputs a probability for each driving style intention. In practical applications, the driving style intention with the highest probability may be determined as the driving style intention identified by the driving style intention recognition model.
Fig. 8 is a schematic structural diagram of a driving intention recognition device according to an embodiment of the present application, and as shown in fig. 8, the device 800 may include:
the acquiring module 810 is configured to acquire environmental information of a location where a vehicle is located, acquire a corresponding driving intention recognition model according to the environmental information, and acquire first vehicle data and first driver data of the vehicle according to the driving intention recognition model;
The recognition module 820 is configured to input the first vehicle data and the first driver data into the driving intention recognition model to obtain a driving intention of the driver.
The apparatus provided in the embodiment shown in fig. 8 may be used to implement the technical solution of the method embodiment of the present application, and the implementation principle and technical effects may be further referred to in the related description of the method embodiment.
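The cooperation of the two modules can be sketched end to end as follows; the model keying by (weather, road condition) and the data-reading callbacks are assumptions for illustration:

```python
def recognize_driving_intention(env_info, models, read_vehicle_data, read_driver_data):
    # Select the recognition model corresponding to the environment
    # information, acquire the vehicle and driver data it needs, and run it.
    model = models[(env_info["weather"], env_info["road_condition"])]
    return model(read_vehicle_data(), read_driver_data())
```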
It should be understood that the above division of the apparatus shown in fig. 8 into modules is merely a division by logical function; in actual implementation they may be fully or partially integrated into one physical entity or be physically separate. These modules may all be implemented in the form of software invoked by a processing element, or all in hardware, or some in software invoked by a processing element and some in hardware. For example, the recognition module may be a separately established processing element, or may be integrated into a chip of the electronic device; the other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
The embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the processor is used for realizing the method provided by the embodiment of the application.
The embodiment of the application also provides a vehicle-mounted system, which comprises the vehicle-mounted equipment and the electronic equipment.
The present embodiments also provide a computer-readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to perform the method provided by the embodiments of the present application.
The present embodiments also provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method provided by the embodiments of the present application.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, A and/or B may indicate that A exists alone, that A and B both exist, or that B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" and similar expressions refer to any combination of those items, including any combination of single items or plural items. For example, at least one of a, b and c may represent: a; b; c; a and b; a and c; b and c; or a, b and c, where a, b and c may each be singular or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided herein, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely specific embodiments of the present application, and any changes or substitutions that may be easily contemplated by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A driving intention recognition method, characterized in that a driving intention of a driver is recognized through data information of the person, the vehicle, the road and the environment, wherein the data information of the person is driver data, the data information of the vehicle is vehicle data, the data information of the road is road condition information, and the data information of the environment is weather information, the method comprising:
acquiring environment information of a position of a vehicle; the environment information includes: road condition information and weather information;
acquiring a corresponding driving intention recognition model according to the environment information;
acquiring first vehicle data and first driver data of the vehicle according to the driving intention recognition model;
the first vehicle data and the first driver data are input into the driving intention recognition model, and the driving intention output by the driving intention recognition model is taken as the driving intention of the driver.
2. The method of claim 1, wherein the driver's driving intent comprises: driving behavior intention, the driving intention recognition model comprising: a driving behavior intention recognition model;
the inputting the first vehicle data and the first driver data into the driving intention recognition model, taking the driving intention output by the driving intention recognition model as the driving intention of the driver, includes:
the first vehicle data and the first driver data are input into the driving behavior intention recognition model, and the driving behavior intention output by the driving behavior intention recognition model is used as the driving behavior intention of the driver.
3. The method according to claim 1 or 2, characterized in that the driving intention of the driver comprises: driving style intent, the driving intent recognition model comprising: a driving style intention recognition model;
the inputting the first vehicle data and the first driver data into the driving intention recognition model, taking the driving intention output by the driving intention recognition model as the driving intention of the driver, includes:
the first vehicle data and the first driver data are input into the driving style intention recognition model, and the driving style intention output by the driving style intention recognition model is used as the driving style intention of the driver.
4. The method according to claim 1 or 2, characterized in that the driving intention of the driver comprises: driving behavior intention;
the method further comprises the steps of:
acquiring early warning conditions of the driving behavior intention;
and when the early warning condition is met, early warning is carried out aiming at the driving behavior intention.
5. The method of claim 4, wherein the driving behavior intent comprises a first driving behavior intent, and wherein the pre-warning condition of the first driving behavior intent comprises: a number of times threshold within a first time period;
when the early warning condition is met, early warning is carried out aiming at the driving behavior intention, and the method comprises the following steps:
calculating the number of times of executing the first driving behavior in a first duration with the driving intention of the driver as the ending time; the first driving behavior is a driving behavior corresponding to the first driving behavior intention;
and when the number of times of executing the first driving behavior in the first duration is not smaller than the number threshold, early warning is conducted on the first driving behavior intention.
6. The method of claim 4, wherein the driving behavior intent comprises a second driving behavior intent, and wherein the pre-warning condition of the second driving behavior intent comprises: a vehicle speed threshold;
When the early warning condition is met, early warning is carried out aiming at the driving behavior intention, and the method comprises the following steps:
acquiring the speed of the vehicle;
and when the vehicle speed is not smaller than the vehicle speed threshold value, early warning is conducted aiming at the second driving behavior intention.
7. The method according to claim 2, wherein the training method of the driving behavior intention recognition model includes:
acquiring a plurality of groups of fragment original data of each driving behavior intention under the environment information corresponding to the driving behavior intention recognition model;
generating sample data of each driving behavior intention according to the plurality of groups of fragment original data of each driving behavior intention;
training an initial driving behavior intention recognition model by using the sample data of each driving behavior intention to obtain the driving behavior intention recognition model.
8. The method of claim 7, wherein generating the sample data of each driving behavior intent from the plurality of sets of segment raw data of each driving behavior intent comprises:
extracting first target data from a set of segment raw data of a driving behavior intention; the first target data includes: second vehicle data and second driver data;
Performing correlation analysis on the first target data to obtain second target data;
and generating a sample of the driving behavior intention according to the second target data.
9. A method according to claim 3, wherein the training method of the driving style intention recognition model comprises:
acquiring a plurality of groups of fragment original data under the environment information corresponding to the driving style intention recognition model;
determining the driving style intention of the original data of each group of fragments;
generating sample data according to the original data of each group of fragments and the driving style intention;
training an initial driving style intention recognition model by using the sample data to obtain the driving style intention recognition model.
10. The method according to claim 9, wherein the obtaining the plurality of sets of fragment raw data under the environment information corresponding to the driving style intention recognition model includes:
acquiring original data of the driving style intention recognition model under corresponding environment information;
and extracting the fragment original data from the original data by using a window whose length is a preset second duration.
11. The method of claim 9, wherein said determining the driving style intent of each set of segment raw data comprises:
For each group of fragment original data, calculating the scale scores of the fragment original data according to a preset multi-dimensional driver style scale;
calculating expert scores of the fragment raw data;
calculating the comprehensive score of the fragment original data according to the scale score and expert score of the fragment original data;
and taking the driving style intention corresponding to the scoring interval to which the comprehensive score of the segment original data belongs as the driving style intention of the segment original data.
12. A driving intention recognition device, characterized by comprising:
the system comprises an acquisition unit, a control unit and a control unit, wherein the acquisition unit is used for acquiring environment information of a position where a vehicle is located, and the environment information comprises: road condition information and weather information; acquiring a corresponding driving intention recognition model according to the environment information, and acquiring first vehicle data and first driver data of the vehicle according to the driving intention recognition model;
and the processing unit is used for inputting the first vehicle data and the first driver data into the driving intention recognition model to obtain the driving intention of the driver.
13. An electronic device, comprising:
a processor, a memory; wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
14. A driving system, characterized by comprising: a data source acquisition device, a vehicle-mounted device and an electronic device, wherein the electronic device is adapted to perform the method of any one of claims 1 to 11.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the method of any of claims 1 to 11.
CN202311563786.3A 2023-11-22 2023-11-22 Driving intention recognition method, device and system Pending CN117774992A (en)

Publications (1)

Publication Number Publication Date
CN117774992A true CN117774992A (en) 2024-03-29


CN113011347B (en) Intelligent driving method and device based on artificial intelligence and related products
JP7140895B1 (en) Accident analysis device, accident analysis method and program
CN111797659A (en) Driving assistance method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination