CN115331513A - Auxiliary training method, equipment and medium for automobile driving skills - Google Patents

Auxiliary training method, equipment and medium for automobile driving skills

Info

Publication number
CN115331513A
CN115331513A (application CN202210895940.6A)
Authority
CN
China
Prior art keywords
driving
attention
training
data
eye movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210895940.6A
Other languages
Chinese (zh)
Inventor
宋业臻
肖维斌
韩伟
曲继新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Xinfa Technology Co ltd
Original Assignee
Shandong Xinfa Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Xinfa Technology Co ltd filed Critical Shandong Xinfa Technology Co ltd
Priority to CN202210895940.6A
Publication of CN115331513A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes
    • G09B 9/02: Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04: Simulators for teaching or training purposes for teaching control of vehicles or other craft, for teaching control of land vehicles
    • G09B 9/052: Simulators for teaching or training purposes for teaching control of land vehicles, characterised by provision for recording or measuring trainee's performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of this specification disclose an auxiliary training method, device and medium for automobile driving skills, relating to the technical field of driving training and applied to an auxiliary training apparatus comprising a data acquisition module, an attention compensation module, a fatigue driving recognition module and an auxiliary training module. The method comprises: collecting, through the data acquisition module, driving data of a driving trainee in the current training scene, the driving data comprising eye movement data and physiological data of the trainee; generating, through the attention compensation module, attention compensation points for a gaze area according to the trainee's eye movement data, the attention compensation points comprising active attention compensation points and passive attention compensation points; determining the trainee's current driving state from the physiological data and a cognitive load overload recognition model preset in the fatigue driving recognition module; and performing assisted driving training on the trainee through the auxiliary training module based on the attention compensation points and the current driving state.

Description

Auxiliary training method, equipment and medium for automobile driving skills
Technical Field
The specification relates to the technical field of driving training, in particular to an auxiliary training method, equipment and medium for automobile driving skills.
Background
With the continuous development of the economy and society, car ownership keeps growing, the number of motor vehicle drivers keeps increasing, and road traffic safety draws ever more public concern. Road traffic safety is closely related to the driving environment, the driver and the road conditions, and a portion of traffic accidents are caused by the driver, including driver error and illegal driving; improving drivers' driving skills therefore plays an important role in improving road traffic safety.
Different driving trainees differ in their capacity to process environmental information, their responsiveness to emergencies and their resistance to fatigue: the capacity to process environmental information determines whether driving skills can be mastered, responsiveness to emergencies determines whether accidents can be handled effectively, and fatigue resistance determines when rest breaks must be taken during long drives to prevent fatigue driving. Current driving training only applies uniform, repeated practice of conventional driving operations and cannot provide guided training for different trainees in different driving scenes.
Disclosure of Invention
One or more embodiments of the present specification provide an auxiliary training method, device, and medium for automobile driving skills, which are used to solve the following technical problems: the current driving training only carries out unified repeated training aiming at the conventional driving operation and can not carry out guided training on different driving students under different driving scenes.
One or more embodiments of the present disclosure adopt the following technical solutions:
one or more embodiments of the present disclosure provide an auxiliary training method for automobile driving skills, which is applied to an auxiliary training device, where the device includes a data acquisition module, an attention compensation module, a fatigue driving recognition module, and an auxiliary training module, and the method includes: the method comprises the steps that driving data of a driving student in a current training scene are collected through a data collection module, wherein the driving data comprise eye movement data of the driving student and physiological data of the driving student; generating, by the attention compensation module, attention compensation points of a gaze area from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points; determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state; and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
Further, generating attention compensation points of a gaze area according to the eye movement data of the driving trainee specifically includes: comparing the eye movement data of the driving trainee with an eye movement index standard corresponding to a pre-constructed current training scene, and, when the eye movement data do not meet the eye movement index standard, generating an active attention compensation point in a designated gaze area, wherein the active attention compensation point is used for guiding the driving trainee to perform active attention processing in the designated gaze area; if stimulation information corresponding to an unexpected event exists in the current training scene, acquiring the stimulation area corresponding to the stimulation signal; determining the current gaze area of the driving trainee according to the eye movement data; and generating a passive attention compensation point based on the stimulation area and the current gaze area.
Further, generating a passive attention compensation point based on the stimulation area and the current gaze area, specifically comprising: determining the current watching area of the driving trainee, and judging whether a stimulation signal exists in the current watching area; if a stimulation signal exists in the current gazing area, acquiring a stimulation area coordinate corresponding to the stimulation signal; according to the eye movement data, determining the fixation area coordinate of the driving student in the current fixation area, and judging whether the stimulation area coordinate is consistent with the fixation area coordinate; and if not, generating a passive attention compensation point at the stimulation area coordinate, wherein the passive attention compensation point is used for guiding the driving trainee to continuously watch the stimulation signal at the stimulation area coordinate so as to perform passive attention compensation on the driving trainee.
Further, the eye movement data of the driving learner is compared with an eye movement index standard corresponding to a pre-constructed current training scene, and when the eye movement data do not meet the eye movement index standard, before an active attention compensation point is generated in a designated gazing area, the method further comprises: defining an eye movement index, wherein the eye movement index comprises a fixation point, a fixation transfer track and a fixation distribution proportion; the method comprises the steps of collecting eye movement index data meeting conditions under a plurality of training scenes to construct an eye movement index data set, wherein the data set comprises a plurality of eye movement index data corresponding to each training scene, and the training scenes comprise any one or more of left steering, right steering, overtaking, sidewalk passing and rapid driving; and establishing an eye movement index standard corresponding to each training scene based on the eye movement index data set, wherein the eye movement index standard is a standard description of the corresponding eye movement index in each training scene.
Further, before determining the current driving state of the driving trainee through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, the method further comprises: performing a plurality of driving training sessions with the driving trainee, and collecting the operation behaviors in each session and the trainee's physiological data at each moment, wherein the physiological data comprise pupil expansion degree, heart rate variability, blink frequency and blink duration; when a collected operation behavior belongs to a preset erroneous operation type, determining the collection time corresponding to that operation behavior; backtracking from the collection time to generate a time interval of preset duration; and acquiring the trainee's physiological data within that time interval to construct a training data set, described as: TrainingSet = {EA_t | PD, HR, HRV, NoEB, ToEB}, where TrainingSet is the training data set, EA is an operation behavior of the preset erroneous operation type, PD is pupil expansion degree, HR is heart rate, HRV is heart rate variability, NoEB is blink frequency and ToEB is blink duration; and constructing an artificial intelligence model and performing meta-learning on the training data set through the artificial intelligence model to generate the cognitive load overload recognition model, wherein the input data of the cognitive load overload recognition model are physiological data and the output is whether to issue a fatigue driving early warning.
Further, the fixation point is used for representing a corresponding area where the stay time of the fovea of the eye is greater than a preset time threshold; the gaze transfer trajectory is used for representing a trajectory which changes from a first gaze point to a second gaze point, and a trajectory connection line between the first gaze point and the second gaze point is a gaze transfer index, wherein no other gaze point exists between the first gaze point and the second gaze point; the gazing distribution proportion is used for representing the distribution proportion of gazing time lengths formed by two groups of different gazing points in the same driving scene.
Further, based on the eye movement index data set, an eye movement index standard is established for each training scene, specifically including: when the training scene is a left turn, the left rear-view mirror area is the gaze area and the eye movement index standard is RoGP_ROI > RoGP_other, where RoGP_ROI is the gaze allocation ratio of the left rear-view mirror area and RoGP_other is the gaze allocation ratio of the other areas; when the training scene is a right turn, the right rear-view mirror area is the gaze area and the eye movement index standard is RoGP_ROI > RoGP_other, where RoGP_ROI is the gaze allocation ratio of the right rear-view mirror area and RoGP_other is the gaze allocation ratio of the other areas; when the training scene is overtaking, the left rear-view mirror area is defined as LRW, the right rear-view mirror area as RRW and the area in front of the vehicle as F, and the eye movement index standard is RoGP_LRW = RoGP_RRW = RoGP_F, SR = (GP_LRW → GP_F → GP_RRW → GP_F) and ΔGP_t <= 3 s, where RoGP_LRW, RoGP_RRW and RoGP_F are the gaze allocation ratios of the left rear-view mirror, right rear-view mirror and front areas respectively, SR is the gaze transfer indicator, GP_LRW, GP_F and GP_RRW are the gaze points of the left rear-view mirror, front and right rear-view mirror areas, and ΔGP_t is the transfer time difference between two gaze points; when the training scene is passing a sidewalk, the area outside the left window is defined as LOW, the area outside the right window as ROW and the area in front of the vehicle as F, and the eye movement index standard is RoGP_F > RoGP_ROW > RoGP_LOW, SR = (GP_F → GP_ROW → GP_F) and 1.5 s <= ΔGP_t <= 3 s, where RoGP_LOW and RoGP_ROW are the gaze allocation ratios of the areas outside the left and right windows and GP_ROW is the gaze point of the area outside the right window; when the training scene is fast driving, the eye movement index standard is RoGP_F > RoGP_ROW > RoGP_LOW, SR = (GP_F → GP_LRW → GP_F → GP_RRW) and ΔGP_t <= 1.5 s.
Further, the active attention compensation points are semitransparent color-designated compensation points, and the passive attention compensation points are opaque color-designated compensation points.
One or more embodiments of the present specification provide an auxiliary training apparatus for automobile driving skills, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to: the method comprises the steps that driving data of a driving student in a current training scene are collected through a data collection module, wherein the driving data comprise eye movement data of the driving student and physiological data of the driving student; generating, by an attention compensation module, attention compensation points of a gaze region from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points; determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state; and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
One or more embodiments of the present specification provide a non-transitory computer storage medium storing computer-executable instructions configured to: the method comprises the steps that driving data of a driving student in a current training scene are collected through a data collection module, wherein the driving data comprise eye movement data of the driving student and physiological data of the driving student; generating, by an attention compensation module, attention compensation points of a gaze region from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points; determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state; and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
The embodiments of this specification adopt at least one technical solution that can achieve the following beneficial effects: when the driving trainee pays insufficient attention to the current driving environment, the system automatically performs attention compensation and assists the trainee's attention processing; when the trainee is found to pay insufficient attention to transient, abnormal events, an enhanced prompt is given in the stimulation signal area of the scene environment, so that the trainee's attention orientation is trained through a passive attention compensation mechanism; and the current driving state is obtained from the trainee's physiological data, which accommodates individual cognitive differences and allows the trainee's fatigue driving state to be detected and prompted in time. The resulting auxiliary training scheme is specific both to the driving scene and to the individual trainee while preserving the generality of driving training.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative effort. In the drawings:
fig. 1 is a schematic flow chart of an auxiliary training method for automobile driving skills according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an auxiliary training device for automobile driving skills according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present specification without any creative effort shall fall within the protection scope of the present specification.
The embodiment of the specification provides an auxiliary training method for automobile driving skills, which is applied to an auxiliary training device. Fig. 1 is a schematic flow chart of an auxiliary training method for automobile driving skills according to an embodiment of the present disclosure, and as shown in fig. 1, the method mainly includes the following steps:
and S101, acquiring driving data of the driving learner in the current training scene through a data acquisition module.
Wherein the driving data includes eye movement data of the driving trainee and physiological data of the driving trainee.
In one embodiment of this specification, the data acquisition module collects the eye movement data and physiological data of the driving trainee in the current training scene. The data acquisition module may be a VR helmet with a built-in eye tracker, together with physiological sensors and an external camera; during automobile driving skill training, the driving trainee wears the VR helmet and the physiological sensors and performs simulated driving.
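To make the data flow concrete, a minimal sketch of a single acquired sample follows; the field names and units are illustrative assumptions, not taken from the patent, and Python is used throughout these sketches.

```python
from dataclasses import dataclass

@dataclass
class DrivingSample:
    """One frame of driving data from the (hypothetical) acquisition module."""
    t_ms: int        # sample timestamp in milliseconds
    gaze_x: float    # gaze coordinates on the VR display, normalized to [0, 1]
    gaze_y: float
    pd: float        # pupil expansion degree (PD)
    hr: float        # heart rate (HR)
    hrv: float       # heart rate variability (HRV)
    no_eb: float     # blink frequency (NoEB)
    to_eb: float     # blink duration (ToEB)
```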
In step S102, the attention compensation module generates an attention compensation point of the attention area according to the eye movement data of the driving trainee.
The attention compensation points comprise active attention compensation points and passive attention compensation points, wherein the active attention compensation points are semitransparent compensation points of the designated color, and the passive attention compensation points are opaque compensation points of the designated color. It should be noted that the active attention compensation process is a dynamic process, that is, the active attention compensation point should be in the gazing area without affecting the driving view, and therefore, the active attention compensation point is a semitransparent compensation point; the passive attention compensation point is mainly used for assisting the driving learner to attract the attention of the learner in a stimulation signal area corresponding to the existence of an accident event, so that the passive attention compensation point is an opaque compensation point. The color here may be red, or may be other colors, and this embodiment is not limited in detail here.
Generating attention compensation points of a watching area according to the eye movement data of the driving trainee specifically comprises the following steps: comparing the eye movement data of the driving trainee with an eye movement index standard corresponding to a pre-constructed current training scene, and when the eye movement data does not conform to the eye movement index standard, generating an active attention compensation point in a specified watching area, wherein the active attention compensation point is used for guiding the active attention compensation of the driving trainee in the specified watching area; if stimulation information corresponding to an accident exists in the current training scene, acquiring a stimulation area corresponding to a stimulation signal; determining the current watching area of the driving student according to the eye movement data; based on the stimulation region and the current gaze region, passive attention compensation points are generated.
In one embodiment of this specification, active attention compensation needs to be performed on the driving trainee according to the training scene, so that the trainee becomes proficient in the corresponding driving skills. Whether active compensation is required is judged from the eye movement index standard pre-constructed for the training scene and the current trainee's eye movement data. Here, the eye movement index and the eye movement data are different expressions of the same concept, and the eye movement index standard is the standard data established in advance for each training scene. When the eye movement data do not meet the eye movement index standard of the corresponding training scene, the current trainee's observation behavior is substandard, and an active attention compensation point is generated in the designated gaze area to guide the trainee to perform active attention processing there, i.e., the trainee is reminded to attend to the active attention compensation point. In addition, in an actual driving training scene, the trainee's alertness and responsiveness to unexpected events must also be developed. If stimulation information corresponding to an unexpected event exists in the current training scene, the stimulation area corresponding to the stimulation signal is acquired. The stimulation information may be an unexpected event preset in the helmet display area, for example an animal or an obstacle appearing, and the area corresponding to the unexpected event is the stimulation area. The trainee's current gaze area is determined from the eye movement data, a passive attention compensation point is generated based on the stimulation area and the current gaze area, and the trainee is passively reminded of the current unexpected event.
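The two compensation pathways above can be summarized in a short decision sketch. The `standard` object, its `is_met_by` method and its `designated_region` attribute are hypothetical stand-ins for the pre-constructed eye movement index standard; the patent does not fix a concrete interface.

```python
def generate_compensation_points(eye_data, standard, stimulus_region, gaze_region):
    """Sketch of the attention-compensation decision; returns (kind, region) pairs."""
    points = []
    # Active pathway: eye movement data fail the scenario standard, so a
    # translucent point is placed in the designated gaze region.
    if not standard.is_met_by(eye_data):
        points.append(("active", standard.designated_region))
    # Passive pathway: an unexpected-event stimulus exists and the trainee's
    # current gaze region does not coincide with the stimulus region.
    if stimulus_region is not None and stimulus_region != gaze_region:
        points.append(("passive", stimulus_region))
    return points
```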
The skills that are difficult for the driving trainee to learn during the driving skill training process are attention information processing skills for the driving environment, such as: under complex road conditions, repeatedly observing and paying attention to a rearview mirror and the road conditions for many times; when turning, the left and right rearview mirrors are repeatedly observed and noticed for many times. The traditional skill training and skill evaluation mainly depend on a manual observation mode to carry out skill evaluation and training, and the core function of the attention compensation module is to assist a driving student to carry out attention processing training, and carry out automatic attention compensation under the condition that the driving student does not pay attention to the driving environment, so as to assist the driving student to carry out attention processing.
Comparing the eye movement data of the driving learner with an eye movement index standard corresponding to a pre-constructed current training scene, and when the eye movement data is not in accordance with the eye movement index standard, before an active attention compensation point is generated in a designated watching area, the method further comprises the following steps: defining an eye movement index, wherein the eye movement index comprises a fixation point, a fixation transfer track and a fixation distribution proportion; acquiring eye movement index data meeting conditions under a plurality of training scenes to construct an eye movement index data set, wherein the data set comprises a plurality of eye movement index data corresponding to each training scene, and the training scenes comprise any one or more of left steering, right steering, overtaking, sidewalk passing and rapid driving; and establishing an eye movement index standard corresponding to each training scene based on the eye movement index data set, wherein the eye movement index standard is the standard description of the corresponding eye movement index in each training scene.
The Gaze Point (GP) represents a corresponding area where the dwell time of the fovea of the eye exceeds a preset time threshold. The gaze transfer trajectory (SR) represents a trajectory that changes from a first gaze point to a second gaze point, the trajectory line between the first and second gaze points being the gaze transfer indicator, where no other gaze point lies between the first and second gaze points. The gaze allocation ratio (Rate of Gaze Points, RoGP) represents the allocation ratio of gaze durations formed by two different sets of gaze points in the same driving scene.
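As a sketch of how these three indices could be computed from raw samples (reusing the DrivingSample fields assumed earlier): `region_of` is a hypothetical mapping from gaze coordinates to a named scene region, and the 200 ms dwell threshold is an assumption, since the patent only speaks of a preset threshold.

```python
def gaze_points(samples, region_of, min_dwell_ms=200):
    """Detect gaze points (GP): maximal runs of samples whose region is
    stable for at least the dwell threshold. Returns (region, start, end)."""
    points, run_start, run_region, last_t = [], None, None, None
    for s in samples:
        r = region_of(s.gaze_x, s.gaze_y)
        if r != run_region:
            if run_region is not None and last_t - run_start >= min_dwell_ms:
                points.append((run_region, run_start, last_t))
            run_start, run_region = s.t_ms, r
        last_t = s.t_ms
    if run_region is not None and last_t - run_start >= min_dwell_ms:
        points.append((run_region, run_start, last_t))
    return points

def transfer_trajectory(points):
    """Gaze transfer trajectory (SR): the ordered sequence of fixated regions."""
    return [r for r, _, _ in points]

def rogp(points, region):
    """Gaze allocation ratio (RoGP): share of total fixation time in `region`."""
    total = sum(end - start for _, start, end in points) or 1
    return sum(end - start for r, start, end in points if r == region) / total
```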
Based on the eye movement index data set, an eye movement index standard is established for each training scene, specifically as follows: when the training scene is a left turn, the left rear-view mirror area is the gaze area and the eye movement index standard is RoGP_ROI > RoGP_other, where RoGP_ROI is the gaze allocation ratio of the left rear-view mirror area and RoGP_other is the gaze allocation ratio of the other areas; when the training scene is a right turn, the right rear-view mirror area is the gaze area and the eye movement index standard is RoGP_ROI > RoGP_other, where RoGP_ROI is the gaze allocation ratio of the right rear-view mirror area and RoGP_other is the gaze allocation ratio of the other areas; when the training scene is overtaking, the left rear-view mirror area is defined as LRW, the right rear-view mirror area as RRW and the area in front of the vehicle as F, and the eye movement index standard is:

RoGP_LRW = RoGP_RRW = RoGP_F,

SR = (GP_LRW → GP_F → GP_RRW → GP_F) and ΔGP_t <= 3 s,

where RoGP_LRW is the gaze allocation ratio of the left rear-view mirror area, RoGP_RRW is the gaze allocation ratio of the right rear-view mirror area, RoGP_F is the gaze allocation ratio of the front area, SR is the gaze transfer indicator, GP_LRW is the gaze point of the left rear-view mirror area, GP_F is the gaze point of the front area, GP_RRW is the gaze point of the right rear-view mirror area, and ΔGP_t is the transfer time difference between two gaze points; when the training scene is passing a sidewalk, the area outside the left window is defined as LOW, the area outside the right window as ROW and the area in front of the vehicle as F, and the eye movement index standard is:

RoGP_F > RoGP_ROW > RoGP_LOW,

SR = (GP_F → GP_ROW → GP_F) and 1.5 s <= ΔGP_t <= 3 s,

where RoGP_LOW is the gaze allocation ratio of the area outside the left window, RoGP_ROW is the gaze allocation ratio of the area outside the right window, and GP_ROW is the gaze point of the area outside the right window; when the training scene is fast driving, the eye movement index standard is:

RoGP_F > RoGP_ROW > RoGP_LOW,

SR = (GP_F → GP_LRW → GP_F → GP_RRW) and ΔGP_t <= 1.5 s.
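A compliance check against these standards might look as follows. The threshold constants mirror the text (ΔGP_t <= 3 s for overtaking, <= 1.5 s for fast driving); the table layout and the reading of RoGP_ROI > RoGP_other as "more than half of total fixation time" are our assumptions, and `rogp`/`transfer_trajectory` are the sketches above.

```python
# Scenario standards as data; region codes follow the text (LRW, RRW, F, LOW, ROW).
STANDARDS = {
    "overtake": {"seq": ["LRW", "F", "RRW", "F"], "max_dt_s": 3.0},
    "sidewalk": {"seq": ["F", "ROW", "F"], "min_dt_s": 1.5, "max_dt_s": 3.0},
    "fast":     {"seq": ["F", "LRW", "F", "RRW"], "max_dt_s": 1.5},
}

def meets_left_turn_standard(points):
    """Left turn: RoGP_ROI > RoGP_other, i.e. the left rear-view mirror
    receives more than half of the total fixation time."""
    return rogp(points, "LRW") > 0.5

def contains_sequence(points, seq):
    """True if the gaze transfer trajectory contains `seq` as a subsequence."""
    traj, i = transfer_trajectory(points), 0
    for r in traj:
        if i < len(seq) and r == seq[i]:
            i += 1
    return i == len(seq)
```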
In one embodiment of this specification, in a preliminary test, 50 drivers with more than 15 years of driving experience and a zero accident rate were selected as subjects, and their attention allocation and attention transfer characteristics while driving under simulated conditions were collected as the training reference standard. The simulated conditions were split into a slow driving scene with complex road conditions and a fast driving scene; for each condition, a corresponding pair of simulation videos of the in-vehicle scene and the out-of-vehicle scene was played to the subject through a VR helmet, and the relevant data were collected with an eye tracker. The eye movement indices were defined, the collected indices comprising the gaze point, the gaze transfer trajectory and the gaze allocation ratio, which after several groups of experiments are described as follows: the gaze point refers to a corresponding scene area with a dwell time greater than 1635 ms; the gaze transfer trajectory refers to a trajectory changing from one gaze point (GP1) to another gaze point (GP2) with no third gaze point between them, the line between the two gaze points being the SR indicator; the gaze allocation ratio refers to the allocation ratio of gaze durations formed by two different sets of gaze points on the same driving scene image.
Data were then collected and the standard established. After each of the 50 drivers was measured 10 times, 500 valid data sets were obtained in total, and the standard was established from them. The standard is described as follows: when the training scene is a left turn, the left rear-view mirror region is the Region of Interest (ROI) and the eye movement index standard is RoGP_ROI > RoGP_other; when the training scene is a right turn, the right rear-view mirror area is the gaze area and the eye movement index standard is RoGP_ROI > RoGP_other; when the training scene is overtaking, the left rear-view mirror area is defined as LRW, the right rear-view mirror area as RRW and the area in front of the vehicle as F, and the eye movement index standard is:

RoGP_LRW = RoGP_RRW = RoGP_F,

SR = (GP_LRW → GP_F → GP_RRW → GP_F) and ΔGP_t <= 3 s;

when the training scene is passing a sidewalk, the area outside the left window is defined as LOW, the area outside the right window as ROW and the area in front of the vehicle as F, and the eye movement index standard is:

RoGP_F > RoGP_ROW > RoGP_LOW,

SR = (GP_F → GP_ROW → GP_F) and 1.5 s <= ΔGP_t <= 3 s;

when the training scene is fast driving, the eye movement index standard is:

RoGP_F > RoGP_ROW > RoGP_LOW,

SR = (GP_F → GP_LRW → GP_F → GP_RRW) and ΔGP_t <= 1.5 s.
When the driving trainee's eye movement data deviate from the eye movement index standard during training, a semi-transparent red prompt spot automatically appears on the VR helmet interface within the trainee's current gaze area; the red spot persists and continuously guides the trainee to attend to the processing task in the scene until the training of that scene is completed, thereby actively compensating the trainee's attention processing and guiding the trainee's attention.
Generating a passive attention compensation point based on the stimulation area and the current gaze area, specifically comprising: determining the current watching area of the driving trainee, and judging whether a stimulation signal exists in the current watching area; if the stimulation signal exists in the current gazing area, acquiring a stimulation area coordinate corresponding to the stimulation signal; according to the eye movement data, determining the fixation area coordinate of the driving learner in the current fixation area, and judging whether the stimulation area coordinate is consistent with the fixation area coordinate; and if not, generating a passive attention compensation point at the stimulation area coordinate, wherein the passive attention compensation point is used for guiding the driving student to continuously watch the stimulation signal at the stimulation area coordinate so as to perform passive attention compensation on the driving student.
In an embodiment of the present specification, a current gazing area of a driving trainee is determined according to eye movement data of the driving trainee, whether a stimulation signal corresponding to an unexpected event exists in the current gazing area is determined, if no stimulation signal exists in the current gazing area, a passive attention compensation point is set in a stimulation area corresponding to the stimulation signal, and passive attention compensation is performed on the driving trainee so as to remind the driving trainee to pay attention to the stimulation area. And if the stimulation signal exists in the current gazing area, acquiring the corresponding stimulation area coordinate of the stimulation signal in the stimulation area. And determining the fixation area coordinate of the driving student in the current fixation area according to the eye movement data, and judging whether the stimulation area coordinate is consistent with the fixation area coordinate. And if the driving area coordinates are inconsistent with the stimulation area coordinates, generating passive attention compensation points at the stimulation area coordinates, and guiding the driving trainee to continuously watch the stimulation signals at the stimulation area coordinates so as to perform passive attention compensation on the driving trainee.
During driving, an accident usually occurs rapidly, and a driving learner is lack of alertness and attention processing experience, so that attention processing is insufficient for an occurred accident scene, and timely action response cannot be made. The module has the core function that when the attention of the driving learner to transient and abnormal events is found to be insufficient, enhanced prompt is carried out in a scene environment stimulus signal area, so that the attention direction of the driving learner is trained through a passive attention compensation mechanism.
An unexpected event is defined as EoT. A preliminary pre-experiment with simulated driving trainees showed that when the display interval of the unexpected event on the VR helmet screen is greater than or equal to 150 ms and less than or equal to 250 ms, the scene event is sufficient to induce human attention orientation without permitting complete attention processing of the scene stimulus: when the display time is less than 150 ms, the event is not enough to attract attention; when it is greater than 250 ms, the user can attend to the scene stimulus in time without needing to learn through training. That is, T_EoT = [150 ms, 250 ms].
When the driving trainee's gaze point after the appearance of the stimulus does not coincide with the ROI coordinates of the stimulus, a red opaque bright spot appears in the stimulus ROI, continuously prompting the trainee to gaze at the ROI, thereby performing passive attention compensation on the trainee.
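Putting the T_EoT window and the coordinate comparison together, a sketch of the passive pathway follows; the 0.05 coordinate tolerance (in normalized display coordinates) is an assumption, since the text only requires that the gaze point not coincide with the stimulus ROI.

```python
EOT_DISPLAY_MS = (150, 250)  # T_EoT window from the description

def passive_compensation(stimulus_xy, gaze_xy, display_ms, tol=0.05):
    """Spawn an opaque prompt point at the stimulus ROI when the trainee
    misses a briefly displayed unexpected event."""
    lo, hi = EOT_DISPLAY_MS
    if not (lo <= display_ms <= hi):
        return None  # outside the attention-orientation window
    if abs(stimulus_xy[0] - gaze_xy[0]) > tol or abs(stimulus_xy[1] - gaze_xy[1]) > tol:
        return {"kind": "passive", "xy": stimulus_xy, "opaque": True, "color": "red"}
    return None
```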
And step S103, determining the current driving state of the driving student through the physiological data and a cognitive load overload recognition model preset in the fatigue driving recognition module.
Wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state.
Before determining the current driving state of the driving learner according to the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, the method further comprises the following steps: carrying out a plurality of driving training on the driving student, and collecting operation behaviors in each driving training and physiological data of the driving student at each moment, wherein the physiological data comprises pupil expansion degree, heart rate variability, blink frequency and blink duration; when the operation behavior is collected to belong to a preset error operation type, determining a collection time corresponding to the operation behavior; backtracking the acquisition time corresponding to the operation behavior to generate a time interval with preset duration; acquiring a plurality of physiological data of the driving learner in the time interval, and constructing a training data set, wherein the training data set is described as follows:
TrainingSet = {EA_t | PD, HR, HRV, NoEB, ToEB};
wherein TrainingSet is the training data set, EA is an operation behavior of the preset erroneous operation type, PD is pupil expansion degree, HR is heart rate, HRV is heart rate variability, NoEB is blink frequency and ToEB is blink duration; and constructing an artificial intelligence model and performing meta-learning on the training data set through the artificial intelligence model to generate the cognitive load overload recognition model, wherein the input data of the cognitive load overload recognition model are physiological data and the output is whether to issue a fatigue driving early warning.
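A sketch of the backtracking step follows. `events` is a hypothetical list of (time, operation-type) records from the camera; the one-minute window matches the embodiment below, and the assumption that physiological samples outside any pre-error window supply the negative class is ours, since a binary classifier needs both classes.

```python
def build_training_set(events, samples, window_ms=60_000):
    """Construct TrainingSet = {EA_t | PD, HR, HRV, NoEB, ToEB}: label the
    physiological samples in the minute before each erroneous operation as
    overload (1) and all remaining samples as normal (0)."""
    error_times = [t for t, op in events if op == "error"]
    rows = []
    for s in samples:
        overload = any(t - window_ms <= s.t_ms <= t for t in error_times)
        rows.append((s.pd, s.hr, s.hrv, s.no_eb, s.to_eb, int(overload)))
    return rows
```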
The cognitive information processing capacity of driving has larger difference among individuals, and the cognitive load upper limit threshold also has larger difference among individuals, which means that the driving fatigue time is different from person to person, and the rest time of fatigue driving cannot be set according to a uniform standard. The fatigue driving recognition module automatically constructs a training data set and an artificial intelligence learning model of fatigue driving by adopting a meta-learning mechanism, and performs personalized cognitive overload cycle recognition and prompt.
In one embodiment of this specification, the driving trainee first performs driving training in a "cold start" stage: the external Kinect camera and the VR helmet are started synchronously and collect, respectively, the erroneous operation actions (defined as EA, i.e. operation behaviors of the erroneous operation type) and the pupil dilation degree PD, heart rate HR, heart rate variability HRV, blink frequency NoEB and blink duration ToEB. When the camera identifies an erroneous operation by the trainee, the system automatically backtracks the physiological data within 1 min before the operation occurred and automatically generates a training data set for the artificial intelligence classifier, described as follows:
TrainingSet = {EA_t | PD, HR, HRV, NoEB, ToEB}. The trainee's driving process is collected continuously: each session lasts 20 min of continuous driving and the training is repeated 5 times, generating the trainee's training data set. The artificial intelligence model automatically performs meta-learning on this data set and establishes an automatic identification and early-warning mechanism for cognitive load overload, taking the PD, HR, HRV, NoEB and ToEB data as input and outputting a binary variable (activate or not). The specific calculation model is described as follows: an SVM model is used to establish the association between the input data PD, HR, HRV, NoEB, ToEB and the yes/no judgment, so as to minimize the loss function:
[The loss-function equation is provided as an image (BDA0003767548080000131) in the original publication.]
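For reference, a standard soft-margin SVM objective, consistent with the SVM model named above but not verified against the original equation image, is:

```latex
\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{s.t.} \quad y_{i}\,(w^{\top} x_{i} + b) \ge 1 - \xi_{i},\quad \xi_{i} \ge 0,
```

where x_i = (PD_i, HR_i, HRV_i, NoEB_i, ToEB_i), y_i ∈ {-1, +1} encodes whether a fatigue warning applies, and C is the regularization constant.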
in one embodiment of the present specification, since the cognitive information processing capabilities of different drivers have a large inter-individual difference, the cognitive load upper threshold value also has a large inter-individual difference, that is, the driving fatigue time is different from person to person. And determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in the fatigue driving recognition module. The cognitive load overload recognition model is obtained based on the driving data training of the current driving student, the obtained driving state of the driving student is more accurate, and the difference of fatigue driving time among individuals can be avoided.
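A per-trainee training-and-inference sketch follows, using scikit-learn's SVC as a stand-in for the SVM described above and the `build_training_set` rows sketched earlier; the RBF kernel choice is an assumption.

```python
from sklearn.svm import SVC

def fit_overload_model(rows):
    """Fit the personalized cognitive-load-overload classifier."""
    X = [list(r[:5]) for r in rows]  # (PD, HR, HRV, NoEB, ToEB)
    y = [r[5] for r in rows]         # 1 = pre-error (overload), 0 = normal
    return SVC(kernel="rbf").fit(X, y)

def fatigue_warning(model, pd, hr, hrv, no_eb, to_eb):
    """Binary output of the model: whether to raise a fatigue-driving warning."""
    return bool(model.predict([[pd, hr, hrv, no_eb, to_eb]])[0])
```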
And step S104, performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
In one embodiment of the present specification, the driving trainee is actively compensated by the active compensation point in the attention compensation points, and the system automatically performs attention compensation to assist the driving trainee in performing attention processing when the driving environment is not noticed sufficiently by the driving trainee. And passively compensating the driving learner through a passive compensation point, and when the driving learner is found to pay insufficient attention to transient and abnormal events, performing enhanced prompt in a scene environment stimulation signal area, so as to train the attention orientation of the driving learner through a passive attention compensation mechanism. And when the trainee drives in fatigue, the trainee is prompted based on the current driving state of the driving trainee.
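Tying the modules together, one pass of the auxiliary-training loop could look like the sketch below; the rendering and alert outputs are hypothetical placeholders for the VR front end, and the earlier sketches are reused.

```python
def render_point(kind, region):
    # Placeholder for the VR front end: translucent spot for "active",
    # opaque spot for "passive".
    print(f"show {kind} compensation point in {region}")

def training_step(eye_data, standard, stimulus_region, gaze_region, model, phys):
    """One iteration: attention compensation plus fatigue-state check."""
    for kind, region in generate_compensation_points(
            eye_data, standard, stimulus_region, gaze_region):
        render_point(kind, region)
    if fatigue_warning(model, *phys):  # phys = (PD, HR, HRV, NoEB, ToEB)
        print("fatigue-driving warning: prompt the trainee to rest")
```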
Through the above technical solution, when the driving trainee pays insufficient attention to the current driving environment, the system automatically performs attention compensation and assists the trainee's attention processing; when the trainee is found to pay insufficient attention to transient, abnormal events, an enhanced prompt is given in the stimulation signal area of the scene environment, so that the trainee's attention orientation is trained through a passive attention compensation mechanism; and the current driving state is obtained from the trainee's physiological data, which accommodates individual cognitive differences and allows the trainee's fatigue driving state to be detected and prompted in time. The resulting auxiliary training scheme is specific both to the driving scene and to the individual trainee while preserving the generality of driving training.
An embodiment of the present specification further provides an auxiliary training device for automobile driving skills, as shown in fig. 2, the device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to:
the driving data of a driving student in a current training scene is collected through a data collection module, wherein the driving data comprises eye movement data of the driving student and physiological data of the driving student; generating, by the attention compensation module, attention compensation points of a gaze region from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points; determining the current driving state of the driving student through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state; and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
Embodiments of the present description also provide a non-volatile computer storage medium storing computer-executable instructions configured to: the method comprises the steps that driving data of a driving student in a current training scene are collected through a data collection module, wherein the driving data comprise eye movement data of the driving student and physiological data of the driving student; generating, by the attention compensation module, attention compensation points of a gaze area from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points; determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state; and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is intended to represent one or more embodiments of the present disclosure, and should not be taken to be limiting of the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. An auxiliary training method for automobile driving skills is applied to an auxiliary training device, the device comprises a data acquisition module, an attention compensation module, a fatigue driving recognition module and an auxiliary training module, and the method comprises the following steps:
the method comprises the steps that driving data of a driving student in a current training scene are collected through a data collection module, wherein the driving data comprise eye movement data of the driving student and physiological data of the driving student;
generating, by the attention compensation module, attention compensation points of a gaze area from eye movement data of the driving learner, wherein the attention compensation points include active attention compensation points and passive attention compensation points;
determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state;
and performing auxiliary driving training on the driving trainees through an auxiliary training module based on the attention compensation points and the current driving state.
2. The method for assisting in training of automobile driving skills according to claim 1, wherein generating attention compensation points of a gaze area according to the eye movement data of the driving learner specifically comprises:
comparing the eye movement data of the driving trainee with an eye movement index standard corresponding to a pre-constructed current training scene, and, when the eye movement data do not meet the eye movement index standard, generating an active attention compensation point in a designated gaze area, wherein the active attention compensation point is used for guiding the driving trainee to perform active attention processing in the designated gaze area;
if stimulation information corresponding to an accident exists in the current training scene, acquiring a stimulation area corresponding to a stimulation signal;
determining the current watching area of the driving student according to the eye movement data;
generating passive attention compensation points based on the stimulation region and the current gaze region.
3. The method for assisting in training of automobile driving skills according to claim 2, wherein generating passive attention compensation points based on the stimulation area and the current gaze area specifically comprises:
determining the current watching area of the driving trainee, and judging whether a stimulation signal exists in the current watching area;
if a stimulation signal exists in the current gazing area, acquiring a stimulation area coordinate corresponding to the stimulation signal;
according to the eye movement data, determining the fixation area coordinate of the driving student in the current fixation area, and judging whether the stimulation area coordinate is consistent with the fixation area coordinate;
and if not, generating a passive attention compensation point at the stimulation area coordinate, wherein the passive attention compensation point is used for guiding the driving trainee to continuously watch the stimulation signal at the stimulation area coordinate so as to perform passive attention compensation on the driving trainee.
4. The auxiliary training method for automobile driving skills according to claim 2, wherein the eye movement data of the driving learner is compared with the eye movement index standard corresponding to the pre-constructed current training scene, and when the eye movement data does not meet the eye movement index standard, before the active attention compensation point is generated in the designated gazing area, the method further comprises:
defining an eye movement index, wherein the eye movement index comprises a fixation point, a fixation transfer track and a fixation distribution proportion;
the method comprises the steps of collecting eye movement index data meeting conditions under a plurality of training scenes to construct an eye movement index data set, wherein the data set comprises a plurality of eye movement index data corresponding to each training scene, and the training scenes comprise any one or more of left steering, right steering, overtaking, sidewalk passing and rapid driving;
and establishing an eye movement index standard corresponding to each training scene based on the eye movement index data set, wherein the eye movement index standard is a standard description of the corresponding eye movement index in each training scene.
5. The method for assisting in training of automobile driving skills according to claim 1, wherein before determining the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, the method further comprises:
carrying out a plurality of driving training on the driving trainees, and collecting operation behaviors in each driving training and physiological data of the driving trainees at each moment, wherein the physiological data comprises pupil expansion degree, heart rate variability, blink frequency and blink duration;
when the operation behavior is collected to belong to a preset error operation type, determining a collection time corresponding to the operation behavior;
backtracking the acquisition time corresponding to the operation behavior to generate a time interval with preset duration;
acquiring a plurality of physiological data of the driving trainees in the time interval, and constructing a training data set, wherein the training data set is described as follows:
TrainingSet = {EA_t | PD, HR, HRV, NoEB, ToEB};
wherein TrainingSet is the training data set, EA is an operation behavior belonging to a preset erroneous operation type, PD is pupil expansion degree, HR is heart rate, HRV is heart rate variability, NoEB is blink frequency, and ToEB is blink duration;
and constructing an artificial intelligence model, and performing meta-learning on the training data set through the artificial intelligence model to generate the cognitive load overload recognition model, wherein input data of the cognitive load overload recognition model is physiological data, and output data of the cognitive load overload recognition model is whether fatigue driving early warning is performed or not.
6. The auxiliary training method for automobile driving skills according to claim 4, wherein the fixation point is used for representing a corresponding area where the stay time of the fovea of the eye is larger than a preset time threshold;
the gaze transfer trajectory represents the trajectory of a change from a first fixation point to a second fixation point, the connecting line between the first fixation point and the second fixation point being the gaze transfer index, wherein no other fixation point exists between the first fixation point and the second fixation point;
and the gaze distribution proportion represents the distribution proportion of the gaze durations formed by two groups of different fixation points in the same driving scene.
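The dwell-time definition of a fixation point in claim 6 corresponds to standard dispersion-based fixation detection; the sketch below is one hedged Python rendition, with the spatial radius and dwell threshold chosen arbitrarily for illustration.

```python
def detect_fixation_points(gaze_samples, radius_px=30.0, min_dwell_s=0.2):
    """gaze_samples: time-ordered list of (timestamp, x, y) tuples.
    A fixation point is emitted when consecutive samples stay within
    `radius_px` of the first sample for longer than `min_dwell_s`."""
    fixations = []
    i = 0
    while i < len(gaze_samples):
        t0, x0, y0 = gaze_samples[i]
        j = i + 1
        while j < len(gaze_samples):
            t, x, y = gaze_samples[j]
            if (x - x0) ** 2 + (y - y0) ** 2 > radius_px ** 2:
                break
            j += 1
        dwell = gaze_samples[j - 1][0] - t0
        if dwell > min_dwell_s:
            fixations.append((t0, x0, y0, dwell))
        i = j  # skip past the cluster just examined
    return fixations
```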
7. The auxiliary training method for automobile driving skills according to claim 6, wherein establishing the eye movement index standard corresponding to each training scene based on the eye movement index data set specifically comprises:
when the training scene is a left turn, the left rearview mirror area is the gazing area, and the eye movement index standard is RoGP_ROI > RoGP_other, wherein RoGP_ROI is the gaze distribution proportion of the left rearview mirror area and RoGP_other is the gaze distribution proportion of the other areas;
when the training scene is a right turn, the right rearview mirror area is the gazing area, and the eye movement index standard is RoGP_ROI > RoGP_other, wherein RoGP_ROI is the gaze distribution proportion of the right rearview mirror area and RoGP_other is the gaze distribution proportion of the other areas;
when the training scene is overtaking, the left rearview mirror area is defined as LRW, the right rearview mirror area as RRW and the area in front of the vehicle as F, and the eye movement index standard is:
RoGP_LRW = RoGP_RRW = RoGP_F,
SR = (GP_LRW → GP_F → GP_RRW → GP_F) and ΔGP_t <= 3 s,
wherein RoGP_LRW is the gaze distribution proportion of the left rearview mirror area, RoGP_RRW is the gaze distribution proportion of the right rearview mirror area, RoGP_F is the gaze distribution proportion of the area in front of the vehicle, SR is the gaze transfer index, GP_LRW is the fixation point of the left rearview mirror area, GP_F is the fixation point of the area in front of the vehicle, GP_RRW is the fixation point of the right rearview mirror area, and ΔGP_t is the transfer time difference between two fixation points;
when the training scene is passing a sidewalk, the area outside the left window is defined as LOW, the area outside the right window as ROW and the area in front of the vehicle as F, and the eye movement index standard is:
RoGP_F > RoGP_ROW > RoGP_LOW,
SR = (GP_F → GP_ROW → GP_F) and 1.5 s <= ΔGP_t <= 3 s,
wherein RoGP_LOW is the gaze distribution proportion of the area outside the left window, RoGP_ROW is the gaze distribution proportion of the area outside the right window, and GP_ROW is the fixation point of the area outside the right window;
and when the training scene is fast driving, the eye movement index standard is:
RoGP_F > RoGP_ROW > RoGP_LOW,
SR = (GP_F → GP_LRW → GP_F → GP_RRW) and ΔGP_t <= 1.5 s.
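As an example of how one of these standards might be checked at runtime, the sketch below tests the overtaking criteria of claim 7 (roughly equal gaze distribution proportions, the LRW → F → RRW → F transfer sequence, and transfer gaps of at most 3 s); the tolerance on "equal" proportions and the event format are assumptions, since exact equality of measured proportions is unrealistic.

```python
def check_overtaking_standard(proportions, transfers, tol=0.05):
    """proportions: {"LRW": r1, "RRW": r2, "F": r3}; transfers: time-ordered
    list of (region, timestamp) fixation events. Returns True when the
    learner's gaze behaviour meets the overtaking standard of claim 7."""
    p = proportions
    equal_ok = (abs(p["LRW"] - p["RRW"]) <= tol
                and abs(p["RRW"] - p["F"]) <= tol)
    sequence_ok = [r for r, _ in transfers] == ["LRW", "F", "RRW", "F"]
    gaps_ok = all(t2 - t1 <= 3.0
                  for (_, t1), (_, t2) in zip(transfers, transfers[1:]))
    return equal_ok and sequence_ok and gaps_ok
```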
8. The auxiliary training method for automobile driving skills according to claim 1, wherein the active attention compensation points are translucent compensation points of a designated color, and the passive attention compensation points are opaque compensation points of a designated color.
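Claim 8 distinguishes the two point types purely by opacity; assuming an RGBA overlay renderer, a minimal mapping could look like this (the colour and alpha values are illustrative):

```python
def compensation_point_rgba(kind, color=(255, 0, 0)):
    """Map a compensation-point kind to an RGBA tuple for the simulator
    overlay: active points are drawn translucent, passive points opaque."""
    alpha = 128 if kind == "active" else 255  # 50% vs. fully opaque
    return (*color, alpha)
```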
9. An auxiliary training device for automobile driving skills, characterized in that the device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
acquire, through a data acquisition module, driving data of a driving learner in the current training scene, wherein the driving data comprise eye movement data of the driving learner and physiological data of the driving learner;
generate, through an attention compensation module, attention compensation points of a gazing area according to the eye movement data of the driving learner, wherein the attention compensation points comprise active attention compensation points and passive attention compensation points;
determine the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state;
and perform auxiliary driving training for the driving learner through an auxiliary training module based on the attention compensation points and the current driving state.
10. A non-transitory computer storage medium storing computer-executable instructions configured to:
acquire, through a data acquisition module, driving data of a driving learner in the current training scene, wherein the driving data comprise eye movement data of the driving learner and physiological data of the driving learner;
generate, through an attention compensation module, attention compensation points of a gazing area according to the eye movement data of the driving learner, wherein the attention compensation points comprise active attention compensation points and passive attention compensation points;
determine the current driving state of the driving learner through the physiological data and a cognitive load overload recognition model preset in a fatigue driving recognition module, wherein the current driving state comprises a fatigue driving state and a non-fatigue driving state;
and perform auxiliary driving training for the driving learner through an auxiliary training module based on the attention compensation points and the current driving state.
CN202210895940.6A 2022-07-27 2022-07-27 Auxiliary training method, equipment and medium for automobile driving skills Pending CN115331513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210895940.6A CN115331513A (en) 2022-07-27 2022-07-27 Auxiliary training method, equipment and medium for automobile driving skills

Publications (1)

Publication Number Publication Date
CN115331513A true CN115331513A (en) 2022-11-11

Family

ID=83919695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210895940.6A Pending CN115331513A (en) 2022-07-27 2022-07-27 Auxiliary training method, equipment and medium for automobile driving skills

Country Status (1)

Country Link
CN (1) CN115331513A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011198037A (en) * 2010-03-19 2011-10-06 Toyota Central R&D Labs Inc Safety confirmation support device and program
US20170285741A1 (en) * 2016-04-01 2017-10-05 Lg Electronics Inc. Vehicle control apparatus and method thereof
CN109711260A (en) * 2018-11-28 2019-05-03 易念科技(深圳)有限公司 Detection method, terminal device and the medium of fatigue state
CN110826369A (en) * 2018-08-10 2020-02-21 北京魔门塔科技有限公司 Driver attention detection method and system during driving
CN110962746A (en) * 2019-12-12 2020-04-07 上海擎感智能科技有限公司 Driving assisting method, system and medium based on sight line detection
CN112489425A (en) * 2020-11-25 2021-03-12 平安科技(深圳)有限公司 Vehicle anti-collision early warning method and device, vehicle-mounted terminal equipment and storage medium
CN113743471A (en) * 2021-08-05 2021-12-03 暨南大学 Driving evaluation method and system
CN114179811A (en) * 2022-02-17 2022-03-15 北京心驰智途科技有限公司 Data processing method, equipment, medium and product for acquiring driving state
CN114495630A (en) * 2022-01-24 2022-05-13 北京千种幻影科技有限公司 Vehicle driving simulation method, system and equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination