CN116767256B - Active human-computer interaction method and new energy automobile - Google Patents
- Publication number
- CN116767256B (application CN202310873614.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- driver
- vehicle
- steering wheel
- historical
- Prior art date
- Legal status (assumption only; not a legal conclusion)
- Active
Classifications
- B — Performing operations; transporting
- B60 — Vehicles in general
- B60W — Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
- B60W50/00 — Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08 — Interaction between the driver and the control system
- B60W50/14 — Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143 — Alarm means
- B60W2050/146 — Display means
- B60W2540/00 — Input parameters relating to occupants
- B60W2540/221 — Physiology, e.g. weight, heartbeat, health or special needs
- B60W2540/225 — Direction of gaze
- B60W2556/00 — Input parameters relating to data
- B60W2556/10 — Historical data
Abstract
The method judges the driver's intention, cognitive state and visual attention from multiple data sources and performs active interaction in combination with an active interaction model, making the interaction intelligent and efficient, closely fitted to the driver's state, and improving the driver's experience.
Description
Technical Field
The invention relates to the technical field of human-machine interaction, and in particular to an active human-machine interaction method and a new energy vehicle.
Background
As user demands grow and vehicles become more intelligent, in-vehicle human-machine interaction is changing: the design philosophy is now user-centered, and basic "functional interaction" is evolving toward "active perceptual interaction" driven by AI and by sensing inside and outside the vehicle. In the current human-machine co-driving stage, interaction is shifting from passive to active: through the vehicle's sensors and controllers, the car can proactively offer decision suggestions and provide driving assistance within a safe envelope. However, existing active human-machine interaction schemes are not yet intelligent or accurate enough, and still fail to meet users' real needs.
Disclosure of Invention
In view of these problems, the invention provides an active human-machine interaction method and a new energy vehicle. The scheme judges the driver's intention, cognitive state and visual attention from multiple data sources, and then performs active interaction in combination with an active interaction model, making the interaction intelligent and efficient, closely fitted to the driver's state, and improving the driver's experience.
In view of this, an aspect of the present invention proposes an active human-computer interaction method, including:
acquiring historical behavior data of the driver and passengers, and deriving a first active interaction model from the historical behavior data;
collecting, in real time, first input data, first physiological data, first vehicle state data and first environment data of the driver;
processing the first input data, first physiological data, first vehicle state data and first environment data to obtain first integrated data;
judging the driver's intention, cognitive state and visual attention from the first integrated data using multimodal information fusion and deep learning, to obtain a first judgment result;
issuing a first voice prompt to the driver and/or displaying first prompt information on a vehicle-mounted display according to the first judgment result and the first active interaction model;
receiving the driver's first feedback data on the first voice prompt and/or first prompt information to evaluate the interaction effect, obtaining first evaluation data;
and adjusting the first active interaction model according to the first evaluation data.
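The closed loop described above (model from history → real-time collection → integration → judgment → prompt → feedback → adjustment) can be sketched as follows. This is a hypothetical Python illustration, not the patent's implementation; all function names, field names (e.g. `response_delay`, `eye_closure_ratio`) and the toy fatigue rule are assumptions for the sake of the example.

```python
# Hypothetical sketch of the active-interaction loop; names and thresholds
# are illustrative assumptions, not taken from the patent.

def build_interaction_model(history):
    """Derive a simple per-driver model from historical behavior data."""
    # e.g. the driver's average historical response delay as a personalization signal
    avg_delay = sum(h["response_delay"] for h in history) / len(history)
    return {"avg_response_delay": avg_delay, "prompt_weight": 1.0}

def integrate(inputs, physiology, vehicle_state, environment):
    """Merge the four real-time data streams into one integrated record."""
    return {**inputs, **physiology, **vehicle_state, **environment}

def judge(integrated):
    """Toy stand-in for the multimodal intent/cognition/attention judgment."""
    fatigued = integrated["eye_closure_ratio"] > 0.3  # assumed fatigue heuristic
    intent = "lane_change" if integrated["turn_signal"] else "keep_lane"
    return {"intent": intent, "fatigued": fatigued}

def adjust(model, evaluation):
    """Nudge the model toward prompt styles the driver responded well to."""
    model["prompt_weight"] *= 1.1 if evaluation["helpful"] else 0.9
    return model
```

In a real system each step would be a learned model rather than a rule, but the data flow between the seven steps is the same.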
Optionally, the steering wheel of the vehicle is a touch-sensitive steering wheel; the rim and spokes of the steering wheel are provided with a flexible touch screen and a flexible display screen; the surface of the rim is provided with a fingerprint recognition area; and a plurality of sensors are arranged inside the rim.
Optionally, the method further comprises the steps of:
acquiring first current position information of the vehicle and first navigation data from the navigation system;
determining, from the first current position information and the first navigation data, whether the vehicle has reached a preset position;
and, when the preset position is reached, prompting the driver about the available touch operation modes.
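A minimal sketch of the "reached a preset position" check, assuming the preset position is a waypoint with a radius (the patent does not specify the geometry). The haversine formula and the 50 m default radius are assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reached_preset(position, preset, radius_m=50.0):
    """True when the vehicle is within radius_m of the preset waypoint,
    at which point the driver would be prompted about touch operation modes."""
    return haversine_m(position[0], position[1], preset[0], preset[1]) <= radius_m
```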
Optionally, the method further comprises the steps of:
acquiring, via a driving simulator, a first login account with which the driver logs into the system;
acquiring the corresponding historical navigation data and historical driving behavior data according to the first login account;
acquiring first road segment data and first environment data on the corresponding navigation route according to the historical navigation data;
generating a first simulated driving scene from the historical driving behavior data, the first road segment data and the first environment data;
collecting first behavior data and first physiological response data of the driver while the driver operates the driving simulator in the first simulated driving scene;
generating a first scene response model from the first behavior data and the first physiological response data;
generating the first active interaction model from the first scene response model;
binding the first active interaction model to the first login account and synchronizing it to a management server;
when the driver logs in on a first vehicle with the first login account, the management server checks whether the first active interaction model has been synchronized to the first vehicle;
if not, the first active interaction model is synchronized to the first vehicle;
and the first vehicle triggers the corresponding monitoring modules according to the first active interaction model to prompt the driver and receive interaction data.
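The account-binding and synchronization step can be sketched as below. This is an illustrative assumption about how the check might work (the patent does not define a protocol); the version-number comparison and the dictionary-based stores are invented for the example.

```python
def ensure_model_synced(server_models, vehicle_cache, account_id, vehicle_id):
    """Push the account's active interaction model to the vehicle if the
    vehicle has no copy, or an older copy than the management server's.
    Version numbers are an assumed mechanism, not specified in the patent."""
    model = server_models[account_id]          # model bound to the login account
    cached = vehicle_cache.get(vehicle_id)
    if cached is None or cached["version"] < model["version"]:
        vehicle_cache[vehicle_id] = model      # synchronize to the first vehicle
        return True                            # a sync was performed
    return False                               # vehicle already up to date
```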
Optionally, operating the touch-sensitive steering wheel includes touch operations; a touch operation includes at least one of sliding, tapping, grasping, sliding while gripping, and fingerprint verification performed on the touch-sensitive steering wheel with the left and/or right hand;
while the driver performs a first touch operation on the touch-sensitive steering wheel, a camera device and a sound acquisition device on the first vehicle are triggered to acquire first image data and first sound data of the driver;
a first touch operation instruction corresponding to the first touch operation is determined from the first image data, the first sound data and the first touch operation;
and the first touch operation instruction is executed in combination with the first active interaction model.
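The gesture-to-instruction resolution, gated by the image/sound context, can be sketched as follows. The gesture table and the confidence gate are illustrative assumptions; the patent only states that image and sound data inform the mapping.

```python
# Illustrative gesture-to-instruction table; a real mapping would also be
# conditioned on the driver's active interaction model.
GESTURE_TABLE = {
    ("slide", "left"): "previous_menu",
    ("slide", "right"): "next_menu",
    ("tap", "rim"): "confirm",
    ("grasp", "rim"): "wake_assistant",
}

def resolve_instruction(gesture, location, context_confidence, threshold=0.5):
    """Map a touch gesture to an instruction, rejecting low-confidence context
    (e.g. when image/sound data suggest the touch was accidental)."""
    if context_confidence < threshold:
        return None  # treat as accidental; do not execute anything
    return GESTURE_TABLE.get((gesture, location))
```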
Another aspect of the present invention provides a new energy automobile, comprising: an acquisition module and a control processing module;
The acquisition module is configured to:
acquire historical behavior data of the driver and passengers, and derive a first active interaction model from the historical behavior data;
collect, in real time, first input data, first physiological data, first vehicle state data and first environment data of the driver;
the control processing module is configured to:
process the first input data, first physiological data, first vehicle state data and first environment data to obtain first integrated data;
judge the driver's intention, cognitive state and visual attention from the first integrated data using multimodal information fusion and deep learning, to obtain a first judgment result;
issue a first voice prompt to the driver and/or display first prompt information on a vehicle-mounted display according to the first judgment result and the first active interaction model;
receive the driver's first feedback data on the first voice prompt and/or first prompt information to evaluate the interaction effect, obtaining first evaluation data;
and adjust the first active interaction model according to the first evaluation data.
Optionally, the steering wheel of the vehicle is a touch-sensitive steering wheel; the rim and spokes of the steering wheel are provided with a flexible touch screen and a flexible display screen; the surface of the rim is provided with a fingerprint recognition area; and a plurality of sensors are arranged inside the rim.
Optionally, the acquisition module is configured to: acquire first current position information of the vehicle and first navigation data from the navigation system;
the control processing module is configured to:
determine, from the first current position information and the first navigation data, whether the vehicle has reached a preset position;
and, when the preset position is reached, prompt the driver about the available touch operation modes.
Optionally, the control processing module is configured to:
acquire, via a driving simulator, a first login account with which the driver logs into the system;
acquire the corresponding historical navigation data and historical driving behavior data according to the first login account;
acquire first road segment data and first environment data on the corresponding navigation route according to the historical navigation data;
generate a first simulated driving scene from the historical driving behavior data, the first road segment data and the first environment data;
control the driving simulator to collect first behavior data and first physiological response data of the driver while the driver operates the simulator in the first simulated driving scene;
generate a first scene response model from the first behavior data and the first physiological response data;
generate the first active interaction model from the first scene response model;
bind the first active interaction model to the first login account and synchronize it to a management server;
when the driver logs in on a first vehicle with the first login account, the management server checks whether the first active interaction model has been synchronized to the first vehicle;
if not, the first active interaction model is synchronized to the first vehicle;
and the first vehicle triggers the corresponding monitoring modules according to the first active interaction model to prompt the driver and receive interaction data.
Optionally, operating the touch-sensitive steering wheel includes touch operations; a touch operation includes at least one of sliding, tapping, grasping, sliding while gripping, and fingerprint verification performed on the touch-sensitive steering wheel with the left and/or right hand;
the control processing module is configured to:
trigger a camera device and a sound acquisition device on the first vehicle to acquire first image data and first sound data of the driver while the driver performs a first touch operation on the touch-sensitive steering wheel;
determine a first touch operation instruction corresponding to the first touch operation from the first image data, the first sound data and the first touch operation;
and execute the first touch operation instruction in combination with the first active interaction model.
With the above technical scheme, the active human-machine interaction method acquires historical behavior data of the driver and passengers and derives a first active interaction model from it; collects first input data, first physiological data, first vehicle state data and first environment data in real time; processes these into first integrated data; judges the driver's intention, cognitive state and visual attention using multimodal information fusion and deep learning to obtain a first judgment result; issues a first voice prompt to the driver and/or displays first prompt information on a vehicle-mounted display according to the first judgment result and the first active interaction model; receives the driver's first feedback data on the prompt to evaluate the interaction effect, obtaining first evaluation data; and adjusts the first active interaction model according to the first evaluation data. By judging the driver's intention, cognitive state and visual attention from multiple data sources and interacting proactively through the active interaction model, the scheme is intelligent and efficient, closely fits the driver's state, and improves the driver's experience.
Drawings
FIG. 1 is a flow chart of an active human-machine interaction method provided by an embodiment of the present invention;
fig. 2 is a schematic block diagram of a new energy automobile according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present application may be more clearly understood, the application is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the application and the features in the embodiments may be combined with one another.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
An active human-computer interaction method and a new energy automobile according to some embodiments of the present invention are described below with reference to fig. 1 to 2.
As shown in fig. 1, an embodiment of the present invention provides an active human-computer interaction method, which includes:
acquiring historical behavior data of the driver and passengers (including but not limited to historical behavior data from smartphones, smart home appliances, driving simulators, smart vehicles and the like), and deriving a first active interaction model from the historical behavior data;
collecting first input data (including collecting various input information such as voice input, gesture input, visual input and the like of a driver and the like) of the driver, first physiological data (including but not limited to data in aspects such as brain electricity, electrocardio, sight line/pupil, body temperature, respiratory frequency and the like), first vehicle state data (including but not limited to speed, acceleration, steering angle, vibration frequency and amplitude, inclination angle, engine/motor temperature, engine/motor power, braking system data, air conditioning system data, window control data and the like which are collected by a vehicle-mounted sensor and related data which can be used for judging driving states) and first environment data (including but not limited to vehicle exterior environment data (such as weather data, road condition data, illumination data, temperature data, wind power data and the like) and vehicle interior environment data (such as vehicle interior temperature and humidity, light intensity, air quality and the like);
processing the first input data, first physiological data, first vehicle state data and first environment data to obtain first integrated data includes feature extraction, classification and recognition applied to these data to derive the user's intention information, specifically:
Feature processing: extract appropriate features from each data stream, such as time-series, frequency-domain and statistical features, and construct feature vectors. In detail: feature selection — choose features suited to the data type and task (time-domain, frequency-domain, statistical, etc.); features that are discriminative for the problem improve model performance. Feature engineering — apply transformations and refinements to the selected features to generate new features that are more discriminative and computable. Feature vector construction — arrange all features in a fixed order to form a feature vector (row or column vector). Feature vector computation — mainly computing distances or similarities between feature vectors;
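The feature-processing step can be sketched as below: simple time-domain statistics per stream, concatenation in a fixed order into one feature vector, and cosine similarity between vectors. The specific features chosen here (mean, standard deviation, peak-to-peak range) are illustrative assumptions; the patent leaves the feature set open.

```python
import math

def time_domain_features(signal):
    """Mean, standard deviation and peak-to-peak range of a 1-D signal."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    return [mean, std, max(signal) - min(signal)]

def feature_vector(streams):
    """Concatenate per-stream features into one row vector, in a fixed order."""
    vec = []
    for name in sorted(streams):   # fixed ordering keeps vectors comparable
        vec.extend(time_domain_features(streams[name]))
    return vec

def cosine_similarity(a, b):
    """Similarity between two feature vectors, as used for pattern matching."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```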
Classification and clustering: classify, cluster and pattern-match the feature vectors using different preset models to obtain driving behavior patterns, physiological states, vehicle states, environment models and so on;
Data fusion: integrate the processing results from all data sources using data-fusion techniques, improving judgment accuracy and forming an overall understanding of the driver, vehicle and environment; this yields the first integrated data.
Through these steps, heterogeneous data of different types are converted into meaningful judgments and interpretations, and data fusion links those judgments together to obtain a more complete and comprehensive understanding.
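One simple way to realize the data-fusion step is confidence-weighted voting over the per-source judgments; this is an illustrative sketch (the source weights and the (label, confidence) representation are assumptions, since the patent does not fix a fusion algorithm).

```python
def fuse_judgments(judgments, weights):
    """Confidence-weighted fusion of per-source judgments into one score per
    label; the highest-scoring label becomes the integrated interpretation.

    judgments: {source: (label, confidence)}; weights: {source: weight}.
    """
    scores = {}
    for src, (label, confidence) in judgments.items():
        scores[label] = scores.get(label, 0.0) + weights.get(src, 1.0) * confidence
    best = max(scores, key=scores.get)
    return best, scores
```

A practical system would replace the fixed weights with values learned from the evaluation data, so sources that have been reliable for this driver count for more.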
judging the driver's intention, cognitive state and visual attention from the first integrated data using multimodal information fusion and deep learning, to obtain a first judgment result;
issuing a first voice prompt to the driver and/or displaying first prompt information on a vehicle-mounted display (such as a HUD or an in-car display screen) according to the first judgment result and the first active interaction model;
receiving the driver's first feedback data on the first voice prompt and/or first prompt information to evaluate the interaction effect, obtaining first evaluation data;
and adjusting the first active interaction model according to the first evaluation data.
With this scheme, the driver's intention, cognitive state and visual attention are judged from multiple data sources, and active interaction is performed in combination with the active interaction model; the result is intelligent and efficient, closely fits the driver's state, and improves the driver's experience.
It will be appreciated that the active interaction model in some possible embodiments of the invention may include, but is not limited to, the following:
Attention computation sub-model: detects whether the driver is focused on driving; if the driver is distracted, a safety reminder can be issued proactively. Eye-tracking, EEG and similar technologies can be used to detect the driver's visual attention points and cognitive state.
Emotion recognition sub-model: detects emotional states of the driver, such as anger or tension, which may affect driving. When such emotions are recognized, soothing music can be played proactively or interaction can be initiated to stabilize the emotion; speech analysis, expression recognition and similar techniques can be used to detect emotion.
Behavior inference sub-model: detects the driver's behavior and habits and infers the likely next driving intention. If that intention may be dangerous, an interactive prompt can be offered proactively to encourage safer driving behavior; in-vehicle sensor data can be used to detect and predict driving behavior.
Dialogue generation sub-model: generates natural, proactive dialogue with the driver, providing driving guidance, scenic-spot introductions and similar information; it can also make casual conversation unrelated to driving to relieve driving fatigue. This requires understanding the context and the driver's dialogue intent, and generating appropriate replies accordingly.
Interaction strategy generation sub-model: selects the interaction content, form and strategy appropriate to each scenario, for example audio interaction on a highway, visual interaction on urban roads, or auxiliary information when preparing to park; at the same time it takes the driver's cognitive load into account and avoids overly frequent or abrupt interaction.
In summary, the vehicle may proactively interact with the driver and passengers at an appropriate time and in an appropriate manner according to the environment and the driver's state; however, the interaction strategy and content must take driving safety into account, so as to avoid distracting the driver or causing emotional agitation.
In an embodiment of the present invention, physiological data/signal acquisition means include, but are not limited to: an electroencephalogram sensor arranged on a helmet or headband to collect electroencephalogram signals, a chest-strap sensor to collect electrocardiogram signals, and a vehicle-mounted monitoring camera to collect line-of-sight information of the driver and passengers. Environmental information collection means include, but are not limited to: installing a temperature and humidity sensor, an illumination sensor, and the like in the vehicle cabin. Driving state information collection means include, but are not limited to: acquiring data from the steering wheel angle sensor, brake pedal sensor, odometer, and the like to judge the driving operations (or other operations) of the driver and the state of the vehicle.
In the embodiment of the invention, the multi-modal information fusion scheme may be as follows: a deep learning neural network model is adopted to fuse the physiological signals, environmental information, and driving state information, and to analyze and judge the driver's intention (such as a lane change intention), cognitive load state (such as mental fatigue), visual attention (such as side rearview monitoring), and the like.
In an embodiment of the present invention, the active interaction scheme may include, but is not limited to: according to the judgment result, the vehicle-mounted system may highlight important information located outside the driver's area of visual attention (such as a vehicle in the blind spot) on the vehicle-mounted display screen; it may remind the driver and passengers by voice to pay attention to the blind spot or to a lane change; and the vehicle-mounted system may be controlled to automatically complete some driving operations (such as an automatic lane change) to reduce the cognitive load of the driver and passengers.
In an embodiment of the present invention, the interaction evaluation scheme may include, but is not limited to: the system records the driver's and passengers' responses and operations for each interaction prompt, evaluates the interaction effect, and continuously optimizes the interaction strategy and the deep learning model, thereby realizing customized and personalized human-computer interaction.
The scheme of the embodiment of the invention uses multi-modal information and artificial intelligence technology to accurately judge the state of the driver and passengers and to interact proactively, which minimizes the workload of the driver and passengers, provides a customized driving experience, and realizes safe and comfortable driving.
In the embodiment of the invention, the implementation scheme of the multi-mode information fusion specifically comprises the following steps:
Information such as the driver's electroencephalogram, line-of-sight data, steering wheel angle, and vehicle speed is selected as the neural network input. This information reflects the driver's cognitive state, visual attention, driving operations, and the vehicle state.
Deep learning models such as a CNN (convolutional neural network) and an LSTM (long short-term memory network) are adopted to perform feature extraction and sequence modeling on the input information, obtaining feature vectors that reflect the driver's intention and state.
The feature vectors are input into a classification model (such as a Softmax classifier) to obtain a judgment of the driver's intention, for example: Lane change intention: the feature vector indicates that the line of sight has gathered on the left rearview mirror for the past 3 seconds, the steering wheel has turned slightly to the left, and the vehicle speed has remained unchanged, so a lane change intention is judged. Mental fatigue: the low-frequency power of the electroencephalogram has increased and eye movement has slowed over the past 5 minutes, so a mental fatigue state is judged. Side rearview monitoring: the feature vector indicates that the line of sight has swept across the left rearview mirror over the past 2 seconds, so it is judged that the driver is monitoring the left rear view.
The following interactive operations are adopted for these judgment results:
Lane change intention: the display screen highlights the left-rear view area, and a voice prompt says "pay attention to the left rear";
Mental fatigue: refreshing music is played, and a voice prompt says "you look tired, do you need a rest?";
Side rearview monitoring: the driver's and passengers' status is continuously monitored without interaction.
In the implementation scheme of the embodiment of the invention, the complex state of the driver and passengers is analyzed and judged through a deep learning method, and proactive interactive prompts are issued, thereby reducing the cognitive load of the driver and passengers and improving driving safety. By combining richer input information and interaction means, the scheme of this embodiment can achieve customized human-computer interaction.
In the embodiment of the present invention, the application of the CNN and the LSTM in the scheme is explained in detail:
1. The CNN performs feature extraction on the line-of-sight data and the electroencephalogram data.
Line-of-sight data: the monitored line-of-sight trajectory image of the driver is taken as the CNN input, and the CNN extracts line-of-sight features, such as the gaze-gathering area and the speed of gaze transfer, through its convolution and pooling layers, so as to judge visual attention.
Electroencephalogram data: the electroencephalogram signal image is taken as the CNN input, and the CNN extracts electroencephalogram spectral features, such as the energy of the alpha, beta, and theta waves, to judge mental states such as fatigue.
2. The LSTM performs sequence modeling on time series data such as the steering wheel angle and vehicle speed.
A series of steering wheel angle and vehicle speed data acquired over time is input into the LSTM, which captures the dynamic change characteristics and long-term dependencies in the data through its forget gate and output gate mechanisms, establishing a sequence model of steering wheel operation and vehicle speed change. Based on this sequence model, the driver's driving intention, such as an intention to accelerate, decelerate, or change lanes, can be judged.
3. The combination of the line-of-sight and electroencephalogram features extracted by the CNN and the sequence model features established by the LSTM is taken as the classifier input to judge the state and intention of the driver and passengers.
The classifier integrates the representations and discriminability of this feature information and accurately judges the state and intention of the driver and passengers.
As the input data volume increases, the performance of the classifier continuously improves, enabling a deep understanding of the driver's and passengers' state.
In the scheme provided by the embodiment of the invention, the CNN and the LSTM perform feature learning and sequence modeling on the image data and the time series data, respectively, and the various data representations are unified into one classifier, so that the multi-modal deep learning method achieves a good effect and provides a technical basis for judging the driver's intention and state and realizing active interaction.
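A minimal sketch of this final fusion-and-classification step follows: CNN-derived and LSTM-derived feature vectors are concatenated and passed through a linear layer with a softmax to produce class probabilities. The classes, dimensions, and randomly initialized weights are placeholders for illustration; a real system would use trained weights:

```python
import math
import random

# Sketch of the classifier stage: multi-modal fusion by concatenating
# CNN (gaze/EEG) features with LSTM (steering/speed) features, then a
# linear layer + softmax. Weights are untrained placeholders.

random.seed(0)
CLASSES = ["lane_change", "mental_fatigue", "side_monitoring"]

def softmax(z):
    m = max(z)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(cnn_features, lstm_features, weights, bias):
    x = cnn_features + lstm_features  # fusion by concatenation
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

dim = 6  # 3 CNN features + 3 LSTM features in this toy example
W = [[random.uniform(-1, 1) for _ in range(dim)] for _ in CLASSES]
b = [0.0] * len(CLASSES)
probs = classify([0.9, 0.1, 0.4], [0.2, 0.7, 0.3], W, b)
print(dict(zip(CLASSES, (round(p, 3) for p in probs))))
```

The point of the sketch is the structure, not the numbers: each modality contributes a sub-vector, and one shared classifier sees all of them at once.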
In some possible embodiments of the invention, the steering wheel of the vehicle is a touch steering wheel; the rim and spokes of the steering wheel are provided with a flexible touch screen and a flexible display screen; the surface of the rim is provided with a fingerprint identification area; and a plurality of sensors are provided inside the rim.
It will be appreciated that, in order to enable the driver to use the steering wheel conveniently and intelligently, in this embodiment part of the rim and spokes of the steering wheel (such as an area within the driver's range of sight, or an area on which the driver's attention dwells for longer than a first preset time) or all of them may be touch-sensitive (provided with touch sensors); likewise, part of the rim and spokes (such as an area of the steering wheel which, as estimated from historical data, the driver holds for longer than a second preset time while driving) or all of them may receive pressure input (provided with pressure sensors).
The fingerprint identification area is connected to a fingerprint acquisition sensor and to the processor for identification processing, and some functions/operations on the steering wheel can be enabled or locked through fingerprint verification to ensure driving safety.
It should be noted that the type, number, area, spacing, resolution, and the like of the sensors to be installed may be determined by big data analysis of historical steering wheel usage data; after the plurality of sensors are installed, software control may be used to adjust which of them are triggered, along with their number, corresponding areas, and resolution, according to the individual characteristics of the driver (such as palm size, grip size, and area of visual attention).
In order to prompt the driver and passengers to perform the corresponding operations promptly and efficiently while driving, some possible embodiments of the present invention further include the steps of:
Acquiring first current position information of the vehicle and first navigation data of a navigation system;
determining whether the vehicle has reached a preset position according to the first current position information and the first navigation data;
when the preset position is reached (for example, when the distance to an intersection requiring a turn reaches a first preset distance, or when the distance to a preset gas station/charging station reaches a second preset distance), prompting the driver of the vehicle with a touch operation mode (for example, displaying an operation diagram, text, or animation on the steering wheel, or displaying it through HUD projection).
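The position-triggered prompting above can be sketched as a distance check against preset thresholds. The event names, distances, and prompt strings are illustrative assumptions, not values from the disclosure:

```python
# Sketch of the navigation-triggered prompt: compare the vehicle's
# distance to the next navigation event against preset distances and
# decide whether to show a touch-operation hint on the steering wheel
# display or via HUD projection.

FIRST_PRESET_M = 200   # distance before a turn at which to prompt (assumed)
SECOND_PRESET_M = 500  # distance before a gas/charging station (assumed)

def prompt_for(next_event: str, distance_m: float):
    """Return the hint to display, or None if no preset position is reached."""
    if next_event == "turn" and distance_m <= FIRST_PRESET_M:
        return "show turn-gesture diagram on steering wheel / HUD"
    if next_event == "charging_station" and distance_m <= SECOND_PRESET_M:
        return "show charging-stop touch-operation animation"
    return None

print(prompt_for("turn", 150))
print(prompt_for("turn", 300))
```

In practice the event and distance would come from the navigation system's route data rather than being passed in directly.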
In order to obtain an accurate active interaction model, in some possible embodiments of the present invention, the step of acquiring the historical behavior data of the driver and obtaining the first active interaction model according to that data further includes the steps of:
acquiring, via an automobile driving simulator, a first login account with which the driver logs in to the system;
acquiring corresponding historical navigation data and historical driving behavior data (as well as historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances) according to the first login account;
Acquiring first road segment data and first environment data on a corresponding navigation route according to the historical navigation data;
Generating a first simulated driving scene according to the historical driving behavior data, the first road segment data and the first environment data;
collecting first behavior data and first physiological response data of the driver while the driver operates the automobile driving simulator in the first simulated driving scene;
Generating a first scene reaction model according to the first behavior data and the first physiological reaction data;
generating the first active interaction model according to the first scene reaction model (and the historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances);
binding the first active interaction model with the first login account and synchronizing it with a management server;
when the driver logs in on a first vehicle using the first login account, the management server detects whether the first active interaction model has been synchronized to the first vehicle;
If not, synchronizing the first active interaction model to the first vehicle;
and the first vehicle triggering the corresponding monitoring module according to the first active interaction model to prompt the driver and passengers and receive their interaction data.
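The login-time synchronization steps above can be sketched with an in-memory "server" and "vehicle"; the classes, version scheme, and return strings are illustrative assumptions:

```python
# Sketch of login-time model synchronization: the management server
# holds the active interaction model bound to each login account; at
# login it checks whether the vehicle already has the current model
# and pushes it only if not.

class Vehicle:
    def __init__(self):
        self.models = {}         # account -> model
        self.model_version = {}  # account -> version held locally

    def receive_model(self, account, model, version):
        self.models[account] = model
        self.model_version[account] = version


class ManagementServer:
    def __init__(self):
        self.models_by_account = {}  # account -> (version, model)

    def bind(self, account, model, version=1):
        self.models_by_account[account] = (version, model)

    def on_login(self, account, vehicle):
        version, model = self.models_by_account[account]
        if vehicle.model_version.get(account) != version:
            vehicle.receive_model(account, model, version)  # synchronize
            return "synced"
        return "already up to date"


server = ManagementServer()
server.bind("driver_001", {"style": "calm prompts"})
car = Vehicle()
print(server.on_login("driver_001", car))  # first login: model pushed
print(server.on_login("driver_001", car))  # second login: no transfer
```

The version check is what makes the model portable across vehicles: any first vehicle the driver logs into receives the account-bound model exactly once.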
It should be noted that, owing to the development of mobile internet, internet of things, and intelligent terminal technologies, users have formed deeply ingrained interaction habits on their intelligent terminals; therefore, in this embodiment, the first active interaction model is generated according to the first scene reaction model together with the driver's historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances.
It can be appreciated that, in this embodiment, in order to make the interaction model conform more accurately to the driver's style, vehicle characteristic data of the first vehicle may be further obtained, including: structural data, such as vehicle size, wheelbase, and body material, reflecting the overall structural characteristics of the vehicle; mechanical data, such as engine/motor power, torque, and axle weight distribution, reflecting the dynamic performance and handling stability of the vehicle; state data, such as the real-time speed, acceleration, inclination angle, and position obtained by the vehicle-mounted sensors; control data, such as the steering wheel angle and accelerator/brake pedal positions obtained by the controller, reflecting the driver's operating behavior; fault codes reported by the vehicle-mounted electronic control unit, from which fault conditions of the related components or systems can be judged; working time or mileage data of key components such as the engine/motor, gearbox, and braking system, which can be used to predict component service life and replacement cycles; real-time fuel/electricity consumption data obtained by the fuel consumption sensor and the power manager, reflecting the overall economy of the vehicle; and running data, such as the running time, mileage, route, and speed distribution obtained by the vehicle-mounted positioning and navigation system, reflecting the usage of the vehicle. The first active interaction model is then modified according to the vehicle characteristic data to obtain a modified active interaction model that better matches the actual condition of the first vehicle.
In some possible embodiments of the present invention, the manner of operating the touch steering wheel includes touch operations; a touch operation includes at least one of sliding, tapping, grasping, hold-and-slide, and fingerprint verification performed on the touch steering wheel with the left hand and/or the right hand;
while the driver performs a first touch operation on the touch steering wheel, triggering a camera device and a sound acquisition device arranged on the first vehicle to acquire first image data and first sound data of the driver;
determining a first touch operation instruction corresponding to the first touch operation according to the first image data, the first sound data, and the first touch operation;
and executing the first touch operation instruction in combination with the first active interaction model.
It can be appreciated that, in order to facilitate efficient and accurate operation of the steering wheel by the driver, in the embodiment of the present invention the manner of operating the touch steering wheel includes touch operations, which include, but are not limited to:
Single left-hand touch operations: the left hand performs finger or palm sliding, finger tapping, grasping, hold-and-slide, fingerprint verification, and other operations;
Single right-hand touch operations: the right hand performs finger or palm sliding, finger tapping, grasping, hold-and-slide, fingerprint verification, and other operations;
Two-hand touch operations: both hands perform finger or palm sliding simultaneously or alternately, as well as tapping, grasping, hold-and-slide, fingerprint verification, and other operations.
It should be noted that different operation instructions may be defined according to differences in the position, duration, or time interval of the sliding, tapping, grasping, and holding operations. For example, with the left hand holding the steering wheel steady and the right hand sliding along the rim, the instruction corresponding to sliding a first preset distance may be to start the right turn signal, while the instruction corresponding to sliding a second preset distance may be to display the scene in the vehicle's right-rear blind spot through the HUD.
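The distance-dependent gesture mapping in this example can be sketched as follows; the preset distances (in centimeters) and the requirement that only the right hand triggers these two instructions are illustrative assumptions:

```python
# Sketch of mapping a right-hand rim slide to different instructions by
# slide distance: a short slide starts the right turn signal, a longer
# slide shows the right-rear blind spot on the HUD. Thresholds are
# assumed values for illustration.

FIRST_PRESET_CM = 3
SECOND_PRESET_CM = 8

def rim_slide_instruction(hand: str, slide_cm: float):
    if hand != "right":
        return None  # this example gesture set is defined for the right hand
    if slide_cm >= SECOND_PRESET_CM:
        return "show right-rear blind-spot view on HUD"
    if slide_cm >= FIRST_PRESET_CM:
        return "start right turn signal"
    return None  # below threshold: treat as accidental contact

print(rim_slide_instruction("right", 4))
print(rim_slide_instruction("right", 10))
```

Checking the longer threshold first is what lets one continuous gesture escalate from the simpler instruction to the richer one.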
Referring to fig. 2, another embodiment of the present invention provides a new energy automobile, including: an acquisition module and a control processing module;
The acquisition module is configured to:
acquiring historical behavior data of the driver and passengers (including but not limited to historical behavior data from smartphones, smart home appliances, automobile driving simulators, intelligent automobiles, and the like), and obtaining a first active interaction model according to the historical behavior data;
collecting first input data of the driver (including various input information such as voice input, gesture input, and visual input), first physiological data (including but not limited to electroencephalogram, electrocardiogram, line-of-sight/pupil, body temperature, and respiratory frequency data), first vehicle state data (including but not limited to the speed, acceleration, steering angle, vibration frequency and amplitude, inclination angle, engine/motor temperature, engine/motor power, braking system data, air conditioning system data, and window control data collected by the vehicle-mounted sensors, and other related data that can be used to judge the driving state), and first environment data (including but not limited to exterior environment data, such as weather, road condition, illumination, temperature, and wind data, and interior environment data, such as in-cabin temperature and humidity, light intensity, and air quality);
the control processing module is configured to:
processing the first input data, the first physiological data, the first vehicle state data, and the first environment data to obtain first integrated data, which includes performing feature extraction, classification, identification, and other processing on these data to obtain the user's intention information; specifically:
Feature processing: extracting corresponding features, such as time-series features, frequency-domain features, and statistical features, from each kind of data to construct feature vectors. This specifically comprises: feature selection, choosing suitable features, such as time-domain, frequency-domain, and statistical features, according to the data type and task requirements (selecting features that are discriminative for the problem improves model performance); feature engineering, performing various transformations and refinements on the selected features to generate new features that are more discriminative and computable; feature vector construction, arranging all the features in a fixed order to form a feature vector (a feature row vector or feature column vector); and feature vector calculation, used mainly to compute the distance or similarity between feature vectors;
Classification and clustering: classifying, clustering, and performing pattern recognition on the feature vectors using different preset models to obtain driving behavior patterns, physiological states, vehicle states, environment models, and the like;
Data fusion: integrating the processing results of the data sources and fusing the multi-source information using data fusion technology, which improves judgment accuracy and forms an overall understanding of the driver, the vehicle, and the environment, thereby obtaining the first integrated data.
Through the above steps, heterogeneous data of different types can be converted into meaningful judgments and understandings; data fusion links these judgments to obtain a more complete and comprehensive understanding.
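The feature-processing step above can be made concrete with a small sketch: a feature vector is built from a raw signal using simple time-domain and statistical features, and feature vectors are then compared by cosine similarity. The specific features chosen are illustrative assumptions:

```python
import math

# Sketch of feature vector construction and comparison: statistical and
# time-domain features are extracted from a raw signal and arranged in a
# fixed order; cosine similarity then compares two feature vectors.

def build_feature_vector(signal):
    n = len(signal)
    mean = sum(signal) / n                       # statistical feature
    var = sum((v - mean) ** 2 for v in signal) / n
    rng = max(signal) - min(signal)              # time-domain feature
    energy = sum(v * v for v in signal) / n      # crude signal-energy feature
    return [mean, var, rng, energy]              # fixed feature order

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

v1 = build_feature_vector([0.1, 0.2, 0.15, 0.3])
v2 = build_feature_vector([0.12, 0.19, 0.16, 0.28])
print(round(cosine_similarity(v1, v2), 4))
```

Keeping the feature order fixed is what allows vectors from different sessions (or different drivers) to be compared element-by-element by the downstream classifiers.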
judging the intention, cognitive state, and visual attention of the driver and passengers by a multi-modal information fusion and deep learning method according to the first integrated data, to obtain a first judgment result;
issuing a first voice prompt to the driver and passengers and/or displaying first prompt information on a vehicle-mounted display (such as a HUD or vehicle-mounted display screen) according to the first judgment result and the first active interaction model;
receiving first feedback data from the driver and passengers on the first voice prompt and/or the first prompt information, and evaluating the interaction effect to obtain first evaluation data;
And adjusting the first active interaction model according to the first evaluation data.
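The evaluate-and-adjust loop above can be sketched as a per-prompt preference score updated from driver feedback, with prompt types the driver repeatedly dismisses being suppressed. The exponential-average scoring scheme and thresholds are illustrative assumptions, not the patented adjustment method:

```python
# Sketch of the interaction-evaluation loop: record the driver's
# response to each prompt type and adapt a preference score; prompt
# types that keep being dismissed fall below a threshold and are
# suppressed, personalizing the active interaction model.

class InteractionEvaluator:
    def __init__(self, suppress_below=0.3):
        self.scores = {}               # prompt type -> preference in [0, 1]
        self.suppress_below = suppress_below

    def record_feedback(self, prompt_type: str, accepted: bool, alpha=0.2):
        """Exponential moving average toward 1 (accepted) or 0 (dismissed)."""
        old = self.scores.get(prompt_type, 0.5)  # neutral prior
        target = 1.0 if accepted else 0.0
        self.scores[prompt_type] = (1 - alpha) * old + alpha * target

    def should_prompt(self, prompt_type: str) -> bool:
        return self.scores.get(prompt_type, 0.5) >= self.suppress_below

ev = InteractionEvaluator()
for _ in range(6):
    ev.record_feedback("fatigue_music", accepted=False)  # driver keeps dismissing
print(ev.should_prompt("fatigue_music"))
```

Starting from a neutral prior of 0.5 means a new prompt type is tried a few times before the system commits to keeping or suppressing it.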
In the above scheme, the intention, cognitive state, and visual attention of the driver and passengers are judged using multiple kinds of data, and proactive interaction is carried out in combination with the active interaction model, so that the interaction is intelligent and efficient, closely fits the state of the driver and passengers, and improves their experience.
It will be appreciated that the active interaction model in some possible embodiments of the invention may include, but is not limited to, the following:
Focusing on the calculation sub-model: whether the driver is focused on driving or not is detected, if the driver is distracted, the driver can be actively reminded of safety, and the visual attention points and the cognitive state of the driver can be detected by adopting technologies such as eye movement tracking, electroencephalogram and the like.
Emotion recognition sub-model: detecting an emotional state of the driver, such as anger, tension, etc., may affect the emotion of the driving. If these emotions are recognized, the soothing music can be actively played or interactions can be proposed to stabilize the emotion; speech analysis, expression recognition, etc. techniques may be employed to detect emotion.
Behavior speculation sub-model: and detecting the driving behavior and habit of the driver, and judging the possible next driving intention. If the intention is possibly dangerous, the interactive prompt can be actively proposed to realize safer driving behavior; in-vehicle sensor data may be employed to detect and predict driving behavior.
Dialog generation sub-model: generating natural and active dialogue with the driver, and providing information such as driving guide, scenic spot introduction and the like; and the method can also develop boring independent of driving, reduce driving fatigue, need to understand the dialogue intention of the context and the driver, and generate proper replies according to the dialogue intention.
The interaction strategy generates a sub-model: and selecting corresponding interaction content, form and strategy according to different scenes. If the audio interaction is selected on the expressway, the visual interaction can be selected on the urban road, auxiliary information is provided when parking is prepared, and the like; meanwhile, the cognitive load of the driver is considered, and too frequent or abrupt interaction is avoided.
In summary, the vehicle may actively interact with the driver/passenger by selecting an appropriate timing and manner according to the environment and the driver state, but the strategy and content of the interaction need to consider driving safety, so as to avoid causing distraction or emotional excitement of the driver/passenger.
In an embodiment of the present invention, physiological data/signal acquisition means include, but are not limited to: an electroencephalogram sensor arranged on a helmet or a head band is used for collecting electroencephalogram signals, a chest belt sensor is used for collecting electrocardiosignals, and a vehicle-mounted monitoring camera is used for collecting sight information of drivers and passengers; environmental information collection means include, but are not limited to: installing a temperature and humidity sensor, an illumination sensor and the like in a vehicle cabin to collect environmental information; the driving state information collection modes include, but are not limited to: acquiring data of a steering wheel angle sensor, a brake pedal sensor, an odometer and the like to judge driving operations (or other operations) and vehicle states of drivers and passengers;
In the embodiment of the invention, the multi-mode information fusion scheme can be as follows: the deep learning neural network model is adopted to fuse physiological signals, environmental information and driving state information, and the intention (such as lane changing intention), cognitive load state (such as mental fatigue), visual attention (such as side rearview monitoring) and the like of a driver are analyzed and judged.
In an embodiment of the present invention, the active interaction scheme may include, but is not limited to: according to the judgment result, the vehicle-mounted system can highlight important information (such as a blind area vehicle) outside the sight-line attention area of the driver through the vehicle-mounted display screen; the driver and the passengers can be reminded of paying attention to the blind area or lane change through voice; the vehicle-mounted system can be controlled to automatically complete some driving operations (such as automatic lane changing) so as to reduce the cognitive load of drivers and passengers.
In an embodiment of the present invention, the interaction valuation scheme may include, but is not limited to: the system records the response and operation of drivers and passengers on each interaction prompt, evaluates the interaction effect, continuously optimizes the interaction strategy and the deep learning model, and realizes customization and individuation of human-computer interaction.
The scheme of the embodiment of the invention realizes accurate judgment on the states of drivers and passengers by utilizing the multi-mode information and the artificial intelligence technology, actively interacts, can furthest lighten the workload of the drivers and passengers, provides self-defined driving experience and realizes safe and comfortable driving.
In the embodiment of the invention, the implementation scheme of the multi-mode information fusion specifically comprises the following steps:
and selecting information such as electroencephalogram, sight line data, steering wheel rotation angle, vehicle speed and the like of a driver and a passenger as a neural network to input. These information reflect the driver's cognitive state, visual attention, driving operation and vehicle state.
And carrying out feature extraction and sequence modeling on the input information by adopting deep learning models such as CNN (convolutional neural network) and LSTM (long short term memory network) and the like to obtain feature vectors capable of reflecting the intention and the state of drivers and passengers.
Inputting the feature vector into a classification model (such as a Softmax classifier) to obtain the intention judgment result of a driver, such as lane change intention: the feature vector indicates that the line of sight is gathered on the left rearview mirror within 3 seconds, the steering wheel turns slightly to the left, and the vehicle speed remains unchanged, and then the lane change intention is judged. Mental fatigue: and the brain electrical low-frequency power is increased and eye movement is slowed down for the past 5 minutes, and the mental fatigue state is judged. Side rearview monitoring: the feature vector indicates that the line of sight is looking across the left rearview mirror for the past 2 seconds, judging that the driver is monitoring left rearview.
The following interactive operation is adopted for the judging result:
Lane change intention: the display screen highlights the left rear view area and the voice prompt "notice left rear view";
Mental fatigue: refreshing music is played, and a voice prompt is given for 'you look tired, whether rest is needed';
Side rearview monitoring: the driver and passenger status is continuously monitored without interaction.
According to the implementation scheme of the embodiment of the invention, the complex state of the driver and the passenger is researched and judged through a deep learning method, and the driver and the passenger are actively interacted to prompt, so that the purposes of reducing the cognitive load of the driver and the passenger and improving the driving safety are achieved. By combining richer input information and interaction means, the scheme of the embodiment can achieve the effect of customizing man-machine interaction.
In the embodiment of the present invention, the application of CNN and LSTM in the scheme is explained in detail:
CNN performs feature extraction on the line of sight data and electroencephalogram data.
Line of sight data: and taking the monitored sight line track image of the driver and the passenger as CNN input, and extracting sight line characteristics such as a sight line gathering area, a sight line transfer speed and the like by the CNN through a convolution layer and a pooling layer so as to judge visual attention.
Electroencephalogram data: the electroencephalogram signal image is taken as CNN input, and the CNN extracts spectral features, such as alpha-wave, beta-wave, and theta-wave energy, in order to judge mental states such as fatigue.
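To illustrate what the convolution and pooling layers compute, the numpy sketch below applies one hand-written filter, a ReLU, and a 2x2 max-pool to a synthetic 8x8 gaze heat map; the filter weights and the image are invented for the example, whereas a trained CNN learns its filters from data:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2D convolution (cross-correlation, as CNN layers use)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2x2(fmap):
    """2x2 max-pooling (truncates odd trailing rows/columns)."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Synthetic 8x8 "gaze heat map": gaze concentrated toward the left mirror area.
img = np.zeros((8, 8))
img[2:6, 1:4] = 1.0

# A vertical-edge filter, hand-picked purely for illustration.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

features = maxpool2x2(np.maximum(conv2d_valid(img, kernel), 0))  # conv -> ReLU -> pool
print(features.shape)  # (3, 3)
```

The pooled map responds strongly where the gaze region's edge sits, which is the kind of spatial feature the text describes feeding onward to the classifier.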
2. The LSTM performs sequence modeling on time-series data such as steering wheel angle and vehicle speed.
A series of steering wheel angle and vehicle speed data acquired over time is input into the LSTM, which captures the dynamic-change characteristics and long-term dependencies in the data through its forget-gate and output-gate mechanisms, establishing a sequence model of steering wheel operation and vehicle-speed change. Based on this sequence model, the driving intention of the driver, such as acceleration, deceleration, or lane change, can be judged.
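The gating computation described here can be sketched as a minimal LSTM cell in numpy, stepped over a short sequence of (steering wheel angle, vehicle speed) samples; the weights and the input sequence are invented for illustration, since a real model learns its weights from data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step with forget, input, and output gates plus a candidate state."""
    z = np.concatenate([x, h])
    f = sigmoid(W["f"] @ z)   # forget gate: how much old cell state to keep
    i = sigmoid(W["i"] @ z)   # input gate: how much new information to write
    g = np.tanh(W["g"] @ z)   # candidate cell state
    o = sigmoid(W["o"] @ z)   # output gate: how much state to expose
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
hidden, inp = 4, 2            # 2 inputs per step: steering wheel angle, vehicle speed
W = {k: rng.normal(0, 0.5, (hidden, inp + hidden)) for k in "figo"}

# Normalized (steering angle, speed) sequence: slight left turn at constant speed.
seq = [np.array([0.0, 0.8]), np.array([-0.1, 0.8]), np.array([-0.3, 0.8])]

h = np.zeros(hidden)
c = np.zeros(hidden)
for x in seq:
    h, c = lstm_step(x, h, c, W)
print(h.shape)  # final hidden state summarizing the sequence: (4,)
```

The final hidden state plays the role of the "sequence model feature" that the scheme later fuses with the CNN features.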
3. The gaze and electroencephalogram features extracted by the CNN are combined with the sequence-model features established by the LSTM and used as classifier input to judge the state and intention of the driver and passengers.
The classifier integrates the representational power and discriminability of the feature information to accurately judge the state and intention of the driver and passengers.
As the volume of input data grows, the performance of the classifier continuously improves, achieving a deep understanding of the driver's state.
According to the scheme provided by this embodiment of the invention, the CNN and the LSTM respectively perform feature learning on image data and sequence modeling on time-series data, and the various data representations are unified in one classifier. This multi-modal deep learning method achieves good results and provides a technical basis for judging the intention and state of drivers and passengers and realizing active interaction.
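Under these assumptions, the fusion-and-classification step can be sketched as concatenating the CNN and LSTM feature vectors and applying a linear Softmax classifier; the dimensions and random weights below are placeholders, not learned values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

STATES = ["lane_change_intention", "mental_fatigue", "side_rearview_monitoring"]

def classify(gaze_feat, eeg_feat, seq_feat, W, b):
    """Concatenate CNN (gaze, EEG) and LSTM (sequence) features, apply a linear
    Softmax classifier, and return the most probable driver state."""
    fused = np.concatenate([gaze_feat, eeg_feat, seq_feat])
    probs = softmax(W @ fused + b)
    return STATES[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
gaze_feat = rng.normal(size=5)
eeg_feat = rng.normal(size=5)
seq_feat = rng.normal(size=4)
W = rng.normal(size=(3, 14))   # 3 states x 14 fused feature dimensions
b = np.zeros(3)

state, probs = classify(gaze_feat, eeg_feat, seq_feat, W, b)
print(state, probs.round(3))
```

With trained weights in place of the random ones, the argmax over the Softmax output is exactly the "first judgment result" the scheme feeds into the interaction stage.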
In some possible embodiments of the invention, the steering wheel of the vehicle is a touch-sensitive steering wheel; the rim and the spokes of the steering wheel are provided with a flexible touch screen and a flexible display screen; the surface of the rim is provided with a fingerprint identification area; and the inside of the rim is provided with a plurality of sensors.
It will be appreciated that, to let the driver use the steering wheel conveniently and intelligently, in this embodiment part of the rim and spokes (such as the area within the driver's line of sight, or the area on which the driver's attention dwells for longer than a first preset time) or all of them may be made touchable (provided with touch sensors), and part (such as the area of the steering wheel held for longer than a second preset time during driving, as estimated from historical data) or all of them may receive pressure input (provided with pressure sensors).
The fingerprint identification area is connected to the fingerprint acquisition sensor and to the processor for identification processing, and can enable or lock some functions/operations on the steering wheel through fingerprint verification so as to ensure driving safety.
It should be noted that the type, number, area, spacing, and resolution of the sensors to be installed may be determined by big-data analysis of historical steering wheel usage data; after the sensors are installed, a software control method may adjust the type, number, corresponding area, and resolution of the sensors to be triggered according to the individual characteristics of the driver (such as palm size, grip strength, and gaze-attention area).
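As a hedged illustration of this per-driver software adjustment, the sketch below enables a subset of installed sensor zones based on driver characteristics; all field names and thresholds are hypothetical:

```python
# Sketch: choose which installed steering-wheel sensor zones to activate for a
# given driver. All data fields and thresholds are hypothetical illustrations.

def configure_sensors(installed, driver):
    """Return the ids of installed sensor zones to enable for this driver."""
    active = []
    for zone in installed:
        # Enable touch zones that fall within the driver's gaze-attention regions.
        if zone["type"] == "touch" and zone["region"] in driver["gaze_regions"]:
            active.append(zone["id"])
        # Enable pressure zones only where the driver's grip strength suffices.
        if zone["type"] == "pressure" and driver["grip_strength"] >= zone["min_grip"]:
            active.append(zone["id"])
    return active

installed = [
    {"id": "T1", "type": "touch", "region": "upper_left"},
    {"id": "T2", "type": "touch", "region": "lower_right"},
    {"id": "P1", "type": "pressure", "min_grip": 20},
]
driver = {"gaze_regions": {"upper_left"}, "grip_strength": 25}
print(configure_sensors(installed, driver))  # ['T1', 'P1']
```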
In order to prompt the driver to perform corresponding operations promptly and efficiently during driving, in some possible embodiments of the present invention, the obtaining module is configured to: acquire first current position information of the vehicle and first navigation data of a navigation system;
the control processing module is configured to:
Determining whether the vehicle reaches a preset position according to the first current position information and the first navigation data;
when the preset position is reached (for example, when the distance to an intersection requiring a turn reaches a first preset distance, or when the distance to a preset gas station/charging station reaches a second preset distance), prompting the driver of the vehicle with the touch operation to be performed (for example, by displaying an operation diagram, text, or animation on the steering wheel, or through HUD projection).
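The preset-position check can be sketched as a haversine-distance comparison between the vehicle position and the next waypoint from the navigation data; the 200 m threshold and the coordinates are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

FIRST_PRESET_DISTANCE_M = 200  # illustrative: prompt 200 m before a required turn

def should_prompt(vehicle_pos, next_turn_pos):
    """True once the vehicle is within the first preset distance of the turn."""
    return haversine_m(*vehicle_pos, *next_turn_pos) <= FIRST_PRESET_DISTANCE_M

# Vehicle roughly 145 m south of an intersection that requires a turn.
vehicle = (31.2290, 121.4737)
turn = (31.2303, 121.4737)
print(should_prompt(vehicle, turn))  # True
```

When `should_prompt` fires, the control processing module would trigger the steering wheel display or HUD prompt described above.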
In order to obtain an accurate active interaction model, in some possible embodiments of the present invention, for the steps of obtaining historical behavior data of the driver and obtaining a first active interaction model according to the historical behavior data, the control processing module is configured to:
Acquiring, through a driving simulator, a first login account with which the driver and passengers log into the system;
Acquiring corresponding historical navigation data and historical driving behavior data (as well as historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances) according to the first login account;
Acquiring first road segment data and first environment data on a corresponding navigation route according to the historical navigation data;
Generating a first simulated driving scene according to the historical driving behavior data, the first road segment data and the first environment data;
Controlling the driving simulator to acquire first behavior data and first physiological response data of the driver while the driver operates the simulator according to the first simulated driving scene;
Generating a first scene reaction model according to the first behavior data and the first physiological response data;
Generating the first active interaction model according to the first scene reaction model (and the historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances);
Binding the first active interaction model with the first login account and synchronizing the first active interaction model to a management server;
when the driver logs in through a first vehicle by using the first login account, the management server detects whether the first vehicle is synchronized with the first active interaction model;
If not, synchronizing the first active interaction model to the first vehicle;
and triggering a corresponding monitoring module by the first vehicle according to the first active interaction model to prompt and receive interaction data of the driver and the passengers.
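The binding and synchronization steps above can be sketched as follows; the class and method names are hypothetical, not part of the scheme:

```python
# Sketch: bind an interaction model to an account on a management server and
# lazily push it to a vehicle at login time. All names are hypothetical.

class Vehicle:
    def __init__(self):
        self.model = None          # no interaction model synchronized yet

class ManagementServer:
    def __init__(self):
        self.models = {}           # login account -> active interaction model

    def bind(self, account, model):
        self.models[account] = model

    def ensure_synced(self, account, vehicle):
        """On login: push the account's model to the vehicle if it lacks it."""
        model = self.models.get(account)
        if model is not None and vehicle.model != model:
            vehicle.model = model  # synchronize the model to the vehicle
            return True            # a sync was performed
        return False               # already synchronized (or no model bound)

server = ManagementServer()
server.bind("driver_001", {"style": "cautious", "prompt": "voice+display"})

car = Vehicle()
print(server.ensure_synced("driver_001", car))  # True: model pushed at first login
print(server.ensure_synced("driver_001", car))  # False: already synchronized
```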
It should be noted that, owing to the development of the mobile internet, the internet of things, and intelligent terminals, users have formed deeply ingrained interaction habits on their intelligent terminals; therefore, in this embodiment, the first active interaction model is generated according to the first scene reaction model together with the driver's historical interaction behavior data from intelligent terminals such as smartphones and smart home appliances.
It can be appreciated that, in this embodiment, to make the interaction model conform more accurately to the driver's style, vehicle characteristic data of the first vehicle may further be obtained, including: structural data reflecting the overall structural features of the vehicle, such as vehicle size, wheelbase, and body material; mechanical data reflecting the dynamic performance and handling stability of the vehicle, such as engine/motor power, torque, and axle load distribution; state data obtained by on-board sensors, such as real-time speed, acceleration, inclination angle, and position; control data obtained by the controller, such as steering wheel angle and accelerator/brake pedal position, reflecting the driver's operating behavior; fault codes reported by the on-board electronic control unit, from which fault conditions of related components or systems can be judged; working-time or mileage data of key components such as the engine/motor, gearbox, and brake system, which can be used to predict component service life and replacement cycles; fuel/electricity consumption data obtained in real time by the fuel consumption sensor and the power manager, reflecting the overall economy of the vehicle; and operating data obtained by the on-board positioning and navigation system, such as driving time, mileage, route, and speed distribution, reflecting the usage of the vehicle. The first active interaction model is then modified according to the vehicle characteristic data to obtain an active interaction model that better matches the actual condition of the first vehicle.
In some possible embodiments of the present invention, the manner of operating the touch-sensitive steering wheel includes touch operations; a touch operation includes at least one of sliding, tapping, grasping, grip-sliding, and fingerprint verification performed with the left hand and/or the right hand on the touch-sensitive steering wheel;
the control processing module is configured to:
Triggering a camera device and a sound acquisition device arranged on the first vehicle to acquire first image data and first sound data of the driver while the driver performs a first touch operation on the touch steering wheel;
Determining a first touch operation instruction corresponding to the first touch operation according to the first image data, the first sound data and the first touch operation;
And executing the first touch operation instruction in combination with the first active interaction model.
It can be appreciated that, to facilitate efficient and accurate operation of the steering wheel by the driver, in the embodiment of the present invention the manner of operating the touch-sensitive steering wheel includes touch operations, which include but are not limited to:
A single left-hand touch operation: the left hand performs finger or palm sliding, finger tapping, grasping, grip-sliding, fingerprint verification, and other operations;
A single right-hand touch operation: the right hand performs finger or palm sliding, finger tapping, grasping, grip-sliding, fingerprint verification, and other operations;
A two-handed touch operation: both hands perform finger or palm sliding simultaneously or alternately, as well as tapping, grasping, grip-sliding, fingerprint verification, and other operations.
It should be noted that different corresponding operation instructions may be defined according to the position, duration, or time interval of the sliding, tapping, gripping, and holding operations. For example, with the left hand holding the steering wheel steady and the right hand gripping the rim and sliding, sliding a first preset distance may correspond to activating the right turn signal, while sliding a second preset distance may correspond to displaying the right-rear blind-spot scene of the vehicle through the HUD.
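The distance-dependent gesture mapping in this example can be sketched as a threshold lookup; the preset distances (in centimeters) and instruction names are illustrative assumptions:

```python
# Sketch: map a right-hand rim slide (left hand holding the wheel) to an
# instruction by slide distance, per the example in the text.
# The preset distances are hypothetical values, not from the patent.

FIRST_PRESET_CM = 5
SECOND_PRESET_CM = 10

def slide_instruction(distance_cm):
    """Return the operation instruction for a given slide distance, if any."""
    if distance_cm >= SECOND_PRESET_CM:
        return "show_right_rear_blind_spot_on_HUD"
    if distance_cm >= FIRST_PRESET_CM:
        return "activate_right_turn_signal"
    return None  # too short to count as a deliberate gesture

print(slide_instruction(6))   # activate_right_turn_signal
print(slide_instruction(12))  # show_right_rear_blind_spot_on_HUD
```

In the full scheme, the chosen instruction would additionally be confirmed against the image, sound, and active interaction model data before execution.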
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is a detailed description of embodiments of the application, in which specific examples are used to explain the principles and implementations of the application; the above examples are provided solely to facilitate understanding of the method and its core concepts. Meanwhile, since those skilled in the art may vary the specific embodiments and the scope of application in accordance with the ideas of the present application, this description should not be construed as limiting the present application.
Although the present invention is disclosed above, the present invention is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps, as well as embodiments of the software and hardware, may be readily apparent to those skilled in the art without departing from the spirit and scope of the invention.
Claims (4)
1. An active man-machine interaction method is characterized by comprising the following steps:
acquiring historical behavior data of drivers and passengers, and acquiring a first active interaction model according to the historical behavior data;
Collecting first input data, first physiological data, first vehicle state data and first environment data of the driver in real time;
Processing the first input data, the first physiological data, the first vehicle state data and the first environment data to obtain first integrated data;
Judging intention, cognitive state and visual attention of the driver and passenger by adopting a multi-mode information fusion and deep learning method according to the first integrated data to obtain a first judgment result;
According to the first judgment result and the first active interaction model, a first voice prompt is sent to the driver and the passenger and/or first prompt information is displayed on a vehicle-mounted display;
Receiving first feedback data of the driver and the passenger on the first voice prompt and/or the first prompt information to evaluate the interaction effect, and obtaining first evaluation data;
adjusting the first active interaction model according to the first evaluation data;
The steering wheel of the vehicle is a touch-sensitive steering wheel; the rim and the spokes of the steering wheel are both provided with a flexible touch screen and a flexible display screen, the flexible touch screen being arranged in the regions of the rim and spokes in which the proportion of the driver's attention time exceeds a first preset time proportion; the surface of the rim is provided with a fingerprint identification area; a plurality of sensors are arranged inside the rim; the type, number, area, spacing, and resolution of the sensors to be installed are determined by big-data analysis of historical steering wheel usage data, and after the plurality of sensors are installed, the type, number, corresponding area, and resolution of the specific sensors to be triggered among the plurality of sensors are adjusted according to personal characteristic data of the current driver, the personal characteristic data including palm size, grip strength, and gaze-attention area;
The method also comprises the steps of:
Acquiring first current position information of the vehicle and first navigation data of a navigation system;
Determining whether the vehicle reaches a preset position according to the first current position information and the first navigation data;
When the preset position is reached, prompting a driver of the vehicle to carry out a touch operation mode;
The method also comprises the steps of:
acquiring, through a driving simulator, a first login account with which the driver and passengers log into the system;
Acquiring, according to the first login account, corresponding historical navigation data and historical driving behavior data, as well as historical interaction behavior data from a smartphone and smart home appliances;
Acquiring first road segment data and first environment data on a corresponding navigation route according to the historical navigation data;
Generating a first simulated driving scene according to the historical driving behavior data, the first road segment data and the first environment data;
collecting first behavior data and first physiological response data of the driver while the driver operates the driving simulator according to the first simulated driving scene;
Generating a first scene reaction model according to the first behavior data and the first physiological reaction data;
generating the first active interaction model according to the first scene reaction model and the historical interaction behavior data;
Binding the first active interaction model with the first login account and synchronizing the first active interaction model to a management server;
when the driver logs in through a first vehicle by using the first login account, the management server detects whether the first vehicle is synchronized with the first active interaction model;
If not, synchronizing the first active interaction model to the first vehicle;
and triggering a corresponding monitoring module by the first vehicle according to the first active interaction model to prompt and receive interaction data of the driver and the passengers.
2. The method of active human-machine interaction according to claim 1, wherein the manner of operating the touch-sensitive steering wheel comprises a touch operation; the touch operation includes at least one of a left hand and/or a right hand sliding, tapping, grasping, holding sliding, fingerprint verification operation on the touch-sensitive steering wheel;
Triggering a camera device and a sound acquisition device arranged on the first vehicle to acquire first image data and first sound data of the driver while the driver performs a first touch operation on the touch steering wheel;
Determining a first touch operation instruction corresponding to the first touch operation according to the first image data, the first sound data and the first touch operation;
And executing the first touch operation instruction in combination with the first active interaction model.
3. A new energy automobile, characterized by comprising: an acquisition module and a control processing module;
The acquisition module is configured to:
acquiring historical behavior data of drivers and passengers, and acquiring a first active interaction model according to the historical behavior data;
Collecting first input data, first physiological data, first vehicle state data and first environment data of the driver in real time;
the control processing module is configured to:
Processing the first input data, the first physiological data, the first vehicle state data and the first environment data to obtain first integrated data;
Judging intention, cognitive state and visual attention of the driver and passenger by adopting a multi-mode information fusion and deep learning method according to the first integrated data to obtain a first judgment result;
According to the first judgment result and the first active interaction model, a first voice prompt is sent to the driver and the passenger and/or first prompt information is displayed on a vehicle-mounted display;
Receiving first feedback data of the driver and the passenger on the first voice prompt and/or the first prompt information to evaluate the interaction effect, and obtaining first evaluation data;
adjusting the first active interaction model according to the first evaluation data;
The steering wheel of the vehicle is a touch-sensitive steering wheel; the rim and the spokes of the steering wheel are both provided with a flexible touch screen and a flexible display screen, the flexible touch screen being arranged in the regions of the rim and spokes in which the proportion of the driver's attention time exceeds a first preset time proportion; the surface of the rim is provided with a fingerprint identification area; a plurality of sensors are arranged inside the rim; the type, number, area, spacing, and resolution of the sensors to be installed are determined by big-data analysis of historical steering wheel usage data, and after the plurality of sensors are installed, the type, number, corresponding area, and resolution of the specific sensors to be triggered among the plurality of sensors are adjusted according to personal characteristic data of the current driver, the personal characteristic data including palm size, grip strength, and gaze-attention area;
The acquisition module is configured to: acquiring first current position information of the vehicle and first navigation data of a navigation system;
the control processing module is configured to:
Determining whether the vehicle reaches a preset position according to the first current position information and the first navigation data;
When the preset position is reached, prompting a driver of the vehicle to carry out a touch operation mode;
the control processing module is configured to:
Acquiring, through a driving simulator, a first login account with which the driver and passengers log into the system;
Acquiring, according to the first login account, corresponding historical navigation data and historical driving behavior data, as well as historical interaction behavior data from a smartphone and smart home appliances;
Acquiring first road segment data and first environment data on a corresponding navigation route according to the historical navigation data;
Generating a first simulated driving scene according to the historical driving behavior data, the first road segment data and the first environment data;
Controlling the driving simulator to acquire first behavior data and first physiological response data of the driver while the driver operates the simulator according to the first simulated driving scene;
Generating a first scene reaction model according to the first behavior data and the first physiological reaction data;
generating the first active interaction model according to the first scene reaction model and the historical interaction behavior data;
Binding the first active interaction model with the first login account and synchronizing the first active interaction model to a management server;
when the driver logs in through a first vehicle by using the first login account, the management server detects whether the first vehicle is synchronized with the first active interaction model;
If not, synchronizing the first active interaction model to the first vehicle;
and triggering a corresponding monitoring module by the first vehicle according to the first active interaction model to prompt and receive interaction data of the driver and the passengers.
4. The new energy automobile of claim 3, wherein the manner in which the touch-type steering wheel is operated comprises a touch operation; the touch operation includes at least one of a left hand and/or a right hand sliding, tapping, grasping, holding sliding, fingerprint verification operation on the touch-sensitive steering wheel;
the control processing module is configured to:
Triggering a camera device and a sound acquisition device arranged on the first vehicle to acquire first image data and first sound data of the driver while the driver performs a first touch operation on the touch steering wheel;
Determining a first touch operation instruction corresponding to the first touch operation according to the first image data, the first sound data and the first touch operation;
And executing the first touch operation instruction in combination with the first active interaction model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310873614.XA CN116767256B (en) | 2023-07-14 | 2023-07-14 | Active human-computer interaction method and new energy automobile |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116767256A CN116767256A (en) | 2023-09-19 |
CN116767256B true CN116767256B (en) | 2024-06-11 |
Family
ID=87989541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310873614.XA Active CN116767256B (en) | 2023-07-14 | 2023-07-14 | Active human-computer interaction method and new energy automobile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116767256B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682090A (en) * | 2016-11-29 | 2017-05-17 | 上海智臻智能网络科技股份有限公司 | Active interaction implementing device, active interaction implementing method and intelligent voice interaction equipment |
KR101781325B1 (en) * | 2017-03-03 | 2017-10-10 | 주식회사 에이엠티 | Simulated driving system |
CN112874515A (en) * | 2021-02-23 | 2021-06-01 | 长安大学 | System and method for carrying out safety reminding on driving assistance system by using driving posture |
CN112947759A (en) * | 2021-03-08 | 2021-06-11 | 上汽大众汽车有限公司 | Vehicle-mounted emotional interaction platform and interaction method |
CN115027484A (en) * | 2022-05-23 | 2022-09-09 | 吉林大学 | Human-computer fusion perception method for high-degree automatic driving |
CN115593496A (en) * | 2022-10-13 | 2023-01-13 | 中国第一汽车股份有限公司(Cn) | Steering wheel for high-level intelligent driving automobile and operating system thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3272610B1 (en) * | 2015-04-21 | 2019-07-17 | Panasonic Intellectual Property Management Co., Ltd. | Information processing system, information processing method, and program |
US11312298B2 (en) * | 2020-01-30 | 2022-04-26 | International Business Machines Corporation | Modulating attention of responsible parties to predicted dangers of self-driving cars |
Wang et al. | Classification of automated lane-change styles by modeling and analyzing truck driver behavior: A driving simulator study | |
CN113370984A (en) | Multi-index-based comprehensive evaluation method and system for comfort of automatic driving vehicle | |
Zhang et al. | Mid-air gestures for in-vehicle media player: elicitation, segmentation, recognition, and eye-tracking testing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||