CN112181149B - Driving environment recognition method and device and simulated driver - Google Patents


Info

Publication number
CN112181149B
Authority
CN
China
Prior art keywords
data
target
eye movement
movement information
virtual driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011068538.8A
Other languages
Chinese (zh)
Other versions
CN112181149A (en)
Inventor
徐艺
姜国辛
王黎明
王玉琼
桑晓青
郭栋
孙亮
邵金菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Technology
Original Assignee
Shandong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Technology filed Critical Shandong University of Technology
Priority to CN202011068538.8A priority Critical patent/CN112181149B/en
Publication of CN112181149A publication Critical patent/CN112181149A/en
Application granted granted Critical
Publication of CN112181149B publication Critical patent/CN112181149B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a method and a device for identifying a driving environment and a simulated driver, wherein the method comprises the following steps: acquiring original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information; determining the importance level of an object to be viewed in the virtual driving scene based on the target eye movement information; determining a visual recognition sequence of objects to be viewed in the virtual driving scene based on the target eye movement information and the scene information; and identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence. The invention can better improve the accuracy and efficiency of environment sensing.

Description

Driving environment recognition method and device and simulated driver
Technical Field
The invention relates to the technical field of visual recognition, in particular to a driving environment recognition method and device and a simulated driver.
Background
Environmental perception for unmanned vehicles is an important link in the development of autonomous driving technology. Existing environment sensing methods mainly rely on radar; however, radar-based environment sensing suffers from problems such as high deployment cost and susceptibility to environmental interference, so the related art has proposed machine-learning-based environment sensing to improve sensing accuracy. However, the inventors have found through research that machine-learning-based environment sensing generally recognizes multiple types of visual recognition objects in the environment by increasing the number of fully connected layers or global average pooling layers, which enlarges the overall scale of the machine learning model used for environment sensing and thereby reduces the accuracy and efficiency of environment sensing.
Disclosure of Invention
In view of the above, the present invention provides a driving environment recognition method, a driving environment recognition device, and a simulated driver, which can better improve the accuracy and efficiency of environment sensing.
In a first aspect, an embodiment of the present invention provides a method for identifying a driving environment, including: acquiring original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information; determining an importance level of an object to be viewed in the virtual driving scene based on the target eye movement information; determining a visual recognition sequence of objects to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information; and identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence.
In one embodiment, the target eye movement information includes pupil diameter data and temporal reflex data; the step of determining the importance level of the object to be visually recognized in the virtual driving scene based on the target eye movement information includes: performing multi-scale geometric analysis on the pupil diameter data to select an object of interest from objects to be viewed and recognized in the virtual driving scene to obtain an object of interest set; performing harmonic analysis on the transient reflection data to select a concentration object from objects to be visually recognized in the virtual driving scene to obtain a concentration object set; determining the importance level of the object to be viewed in the virtual driving scene according to a preset interest weight, a preset concentration weight, the interest object set and the concentration object set.
In one embodiment, the step of performing a multi-scale geometric analysis on the pupil diameter data to select an object of interest from objects to be viewed in the virtual driving scene includes: performing wavelet transformation on the pupil diameter data to obtain a first data set; determining a first interval containing peak points in the first data set, and identifying a target pupil diameter range corresponding to the first interval; and selecting an interested object from objects to be viewed in the virtual driving scene based on the interested degree corresponding to the target pupil diameter range.
In one embodiment, the step of performing harmonic analysis on the transient reflection data to select a concentration object from the objects to be identified for viewing in the virtual driving scene includes: fourier transformation is carried out on the transient reflection data to obtain a second data set; determining a second interval containing amplitude points in the second data set, and identifying a target instantaneous reflection range corresponding to the second interval; and selecting a concentration object from the objects to be visually recognized in the virtual driving scene based on the concentration degree corresponding to the target instantaneous reflection range.
In one embodiment, the preset interest weight w_1 is computed from the pupil diameter data and the per-level pupil diameter standards, wherein C_i represents the i-th pupil diameter datum, S_1j represents the pupil diameter standard value of the j-th importance level, and n represents the total number of pupil diameter data; the preset concentration weight w_2 is computed analogously from the instantaneous reflection data and the per-level instantaneous reflection standards, wherein D_p represents the p-th instantaneous reflection datum, S_2q represents the snapshot reflection standard value of the q-th importance level, and m represents the total number of snapshot reflection data.
In one embodiment, the step of determining the visual recognition sequence of the objects to be viewed in the virtual driving scene based on the eye movement information and the scene information includes: obtaining sight point data based on the target eye movement information and the scene information; determining sight point data meeting preset conditions as target visual recognition data; the preset conditions comprise a fixation time condition and a fixation angle condition; and determining the visual recognition sequence of the target visual recognition data corresponding to the object to be visually recognized according to the incidence relation between the visual recognition time and the target visual recognition data.
In one embodiment, the step of preprocessing the original eye movement information to obtain target eye movement information includes: and preprocessing the original eye movement information by utilizing a Lagrange interpolation algorithm and an empirical mode decomposition method to obtain target eye movement information.
In a second aspect, an embodiment of the present invention further provides a driving environment recognition apparatus, including: the data acquisition module is used for acquiring original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information; the level determination module is used for determining the importance level of the object to be viewed and recognized in the virtual driving scene based on the original eye movement information; the sequence determining module is used for determining the visual recognition sequence of the object to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information; and the environment identification module is used for identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence.
In a third aspect, an embodiment of the present invention further provides a driver simulator, including a simulator screen, a processor, and a memory; the simulator screen is for displaying the virtual driving scenario, the memory having stored thereon a computer program which, when executed by the processor, performs the method as set forth in any one of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium for storing computer software instructions for use in any one of the methods provided in the first aspect.
The driving environment recognition method, the driving environment recognition device and the simulated driver provided by the embodiment of the invention are characterized in that the original eye movement information of a target object and the scene information of a virtual driving scene are firstly collected, the original eye movement information is preprocessed to obtain the target eye movement information, the importance level of an object to be viewed in the virtual driving scene is determined based on the target eye movement information, the viewing sequence of the object to be viewed in the virtual driving scene is determined based on the target eye movement information and the scene information, and then the driving environment of the virtual driving scene is recognized based on the importance level and the viewing sequence. The embodiment of the invention determines the importance level of the object to be viewed based on the target eye movement information and determines the viewing sequence of the object to be viewed based on the target eye movement information and the scene information, so that the driving environment is identified based on the viewing sequence and the importance level, and the identification precision and the identification efficiency of the driving environment can be effectively improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow chart of a driving environment recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a simulated driver according to an embodiment of the present invention;
fig. 3 is a process diagram of a driving environment recognition method according to an embodiment of the present invention;
FIG. 4 is a structural framework diagram of a visual search strategy model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a driving environment recognition apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another simulated driver according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the embodiments, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
At present, existing environment sensing methods suffer from low sensing accuracy and low sensing efficiency. For example, the related art proposes a multi-target identification method for vehicle-mounted radar based on FMCW (Frequency Modulated Continuous Wave): the vehicle-mounted radar continuously transmits frequency-modulated continuous waves forward, performs FFT (Fast Fourier Transform) operations on the beat signals to extract and match the dominant frequencies, calculates target speed and distance, and reduces the false-target rate. The related art also proposes an intelligent multi-target comprehensive identification method based on data mining, which applies data mining to the field of target identification, offers two identification approaches based on target feature knowledge and target association knowledge, and provides automated, intelligent means for multi-target identification. The related art further proposes a system based on vehicle surrounding environment sensing and a control method thereof, in which a master control unit is connected to a millimeter-wave radar unit and a visual sensing module through a CAN bus. The related art also proposes an intelligent-vehicle multi-lidar fusion recognition method based on target characteristics, which matches targets according to similarity, tracks targets to obtain their motion characteristics, performs correction, and enhances target recognition capability. However, all of the above techniques suffer from low sensing accuracy or low sensing efficiency. To address this problem, embodiments of the present invention provide a driving environment recognition method and device and a simulated driver, which can better improve the accuracy and efficiency of environment perception.
To facilitate understanding of the present embodiment, first, a detailed description is given to a method for identifying a driving environment disclosed in the present embodiment, referring to a schematic flow chart of the method for identifying a driving environment shown in fig. 1, where the method mainly includes the following steps S102 to S108:
step S102, collecting original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information. The target object is also a driver, the original eye movement information is also eye movement information which is not preprocessed, such as pupil diameter data, instantaneous eye reflection data and the like, the original eye movement information may have missing points or abnormal points, the scene information may be image data of a virtual driving scene, and the target eye movement information is eye movement information which is obtained after preprocessing and may also include pupil diameter data, instantaneous eye reflection data and the like. In one embodiment, a camera for acquiring original eye movement information of a target object and a camera for acquiring scene information of a virtual driving scene may be separately provided, and data acquired by each camera may be separately acquired.
And step S104, determining the importance level of the object to be viewed in the virtual driving scene based on the target eye movement information. The virtual driving scene is provided with a plurality of objects to be viewed, and the objects to be viewed can comprise pedestrians, motor vehicles, non-motor vehicles, sign lines and the like. The importance level is used to characterize the importance degree of the object to be viewed, for example, the importance level can be divided into five levels, i.e., a key target, an important target, a medium important target, a general target, and a non-important target. In one embodiment, a plurality of importance levels may be divided in advance, and the importance level to which each piece of eye movement information belongs may be determined based on the cognitive neural reflection and the cognitive psychological reflection represented by the eye movement information.
And step S106, determining the visual recognition sequence of the objects to be viewed in the virtual driving scene based on the target eye movement information and the scene information. The visual recognition sequence can be used for representing the sequence of observing the objects to be visually recognized in the virtual driving scene by the target object. In an embodiment, the visual recognition sequence is determined according to the three data of the target eye movement information, the scene information and the visual recognition time, optionally, the target visual recognition data may be determined based on the target eye movement information and the scene information, and the visual recognition sequence of the object to be viewed corresponding to the target visual recognition data may be determined by establishing the association between the visual recognition time and the target visual recognition data and combining the sequence of the visual recognition time.
And step S108, identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence. In one embodiment, the driving environment of the virtual driving scenario may be characterized with a level of importance and an order of visibility.
The driving environment recognition method provided by the embodiment of the present invention offers a new approach to environment perception. The embodiment determines the importance level of the objects to be viewed based on the target eye movement information and determines their viewing sequence based on the target eye movement information and the scene information, so that the driving environment is identified from the viewing sequence and the importance level, which can effectively improve the accuracy and efficiency of driving environment recognition.
In one embodiment, the method for identifying the driving environment may be implemented by a simulated driver, where the simulated driver is an experimental device for the virtual driving scene; a certain number of drivers with rich driving experience are selected as target objects, and each target object wears a head-mounted eye tracker while seated in the driving simulation cabin. Referring to the schematic structural diagram of the simulated driver shown in fig. 2, the simulated driver includes a driving simulation cabin and a simulator screen; the simulator screen can display a virtual driving scene (also referred to as a driving environment) containing multiple visual recognition targets according to the experimental requirements, the driving simulation cabin can simulate the driving operation environment, and the simulator screen is arranged at a preset distance (such as 5-8 meters) directly in front of the driving simulation cabin. In practical applications, the head-mounted eye tracker includes a scene camera and an eye tracker: the scene camera is installed to collect the scene information, and the eye tracker is used to collect the original eye movement information. The target object is the subject from which the original eye movement information is generated. When selecting drivers, experienced drivers with standardized driving habits and a substantial accumulated driving mileage may be chosen for the experiment, with the number of drivers N >= 10 and a male-to-female ratio of about 3.
Optionally, the simulated driver may be set in advance, so that the simulated driver provides a virtual driving scene, and synchronously acquires scene information of the virtual driving scene and original eye movement information of the target object. The virtual driving scene refers to a driving environment for constructing a single visual recognition target and a driving environment for multiple visual recognition targets under different driving tasks.
In consideration of the fact that the acquired original eye movement information and scene information may contain abnormal data such as missing points or outliers, the embodiment of the present invention may preprocess the original eye movement information and the scene information. Optionally, the original eye movement information may be preprocessed (removal, compensation, noise reduction, etc.) using a Lagrange interpolation algorithm and an empirical mode decomposition method to obtain target eye movement information, and the analysis is then performed with the preprocessed target eye movement information. The original eye movement information consists of the sight line position data, the pupil diameter data and the instantaneous eye reflection data of the target object at different moments, where the sight line position data are (X0, Y0) = {(x1, y1), (x2, y2), ..., (xi, yi), ..., (xn, yn)}, the pupil diameter data are D0 = (d1, d2, ..., di, ..., dn), and the instantaneous eye reflection data are B0 = (b1, b2, ..., bi, ..., bn).
For the above-mentioned collected scene information and original eye movement information, the following steps (1) to (3) are performed. (1) Determine the missing values and outliers in the data. (2) At each missing or abnormal point, use the Lagrange interpolation formula

$$L(x)=\sum_{a=1}^{n} y_a \prod_{\substack{b=1 \\ b\neq a}}^{n} \frac{x-x_b}{x_a-x_b}$$

to obtain an approximate value at the point corresponding to the missing or abnormal value, where n denotes the total number of sight point data, x_a denotes the a-th sight point datum, x_b denotes the b-th sight point datum, and y_a is the data value at x_a. (3) Use EMD (Empirical Mode Decomposition) to find the local maxima and minima in the pupil diameter data D0 and the instantaneous reflection data B0, compute the mean of the upper and lower extrema envelopes, and denoise the data based on this mean; here x(t) denotes the original signal, which in the embodiment of the present invention may be the sight point data. After this preprocessing, the target eye movement information consists of the sight point position data (X1, Y1), the pupil diameter data D1 and the instantaneous eye reflection data B1.
In practical application, the target eye movement information includes pupil diameter data and transient reflex data, and based on this, the embodiment of the present invention provides a specific implementation manner for determining the importance level of an object to be viewed in a virtual driving scene based on the target eye movement information, which is as follows, in steps 1 to 3:
step 1, performing multi-scale geometric analysis on pupil diameter data to select an interested object from objects to be viewed and recognized in a virtual driving scene to obtain an interested object set. In one embodiment, the object of interest may be determined as shown in steps 1.1 to 1.3 as follows:
and 1.1, performing wavelet transformation on the pupil diameter data to obtain a first data set. In concrete implementation, uniform serial numbers of pupil diameters contained in pupil diameter data D1 are used as marking information of all sampling points to obtain a set U = (1, 2 \8230i, \8230n) of all sampling points, and then wavelet transformation is carried out on the pupil diameter data D1 to obtain a first data set Q = (Q1, Q2 \8230Qi, \8230Qn), wherein the data serial numbers of the sampling points in the first data set Q are L (L1, L2 \8230; li \82303030; 8230Ln). The formula of the wavelet transformation is as follows:
Figure BDA0002712392280000091
alpha is a control scale, tau is a control position, alpha and tau are any real numbers, psi is a mother wavelet, and the first data set can also be called as initial pupil diameter data to be matched.
Step 1.2, determining a first interval containing peak points in the first data set, and identifying a target pupil diameter range corresponding to the first interval. In practical applications, a first interval including the peak point may be determined from the first data set, so that a pupil diameter range corresponding to the first interval is determined as the target pupil diameter range.
And 1.3, selecting an object of interest from the objects to be viewed in the virtual driving scene based on the degree of interest corresponding to the target pupil diameter range. In one embodiment, the peak point of each pupil diameter peak in the first data set Q is identified, and the set of data sequence numbers of the sampling points in Q corresponding to the target pupil diameters at these peak points is recorded as I = (I1, I2, ..., Ii, ..., In); the variation range of the pupil diameter is obtained from the set I, and the objects to be viewed corresponding to large pupil diameters are taken as objects of interest. In an alternative embodiment, the range of the pupil diameter data in the target eye movement information may be determined and divided into a plurality of intervals (such as five intervals), with a different degree of interest assigned to each interval, so that objects of interest are selected from the objects to be viewed in the virtual driving scene in order of decreasing degree of interest.
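A minimal sketch of steps 1.1 to 1.3, assuming Python with PyWavelets and SciPy; the choice of the Morlet wavelet, the scale range, the summation of wavelet energy over scales, and the per-sample `object_ids` mapping (which object the driver is looking at when each sample is recorded) are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def interest_degree(diameter_mm):
    """Map a pupil diameter (mm) to one of five interest degrees (0 = lowest)."""
    return int(np.digitize(diameter_mm, [2.5, 4.0, 5.5, 7.0]))

def objects_of_interest(D1, object_ids, scales=np.arange(1, 33)):
    """Pick objects fixated while the pupil diameter peaks (multi-scale analysis)."""
    coefs, _ = pywt.cwt(np.asarray(D1, dtype=float), scales, 'morl')  # first data set Q
    energy = np.abs(coefs).sum(axis=0)           # aggregate wavelet response per sample
    peak_idx, _ = find_peaks(energy)             # set I: samples at pupil-diameter peaks
    ranked = sorted(peak_idx, key=lambda i: interest_degree(D1[i]), reverse=True)
    seen, result = set(), []
    for i in ranked:                             # keep each object once, highest interest first
        if object_ids[i] not in seen:
            seen.add(object_ids[i])
            result.append(object_ids[i])
    return result
```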
And 2, carrying out harmonic analysis on the transient reflection data to select a concentration object from the objects to be viewed and recognized in the virtual driving scene to obtain a concentration object set. In one embodiment, the concentration subject may be determined as shown in steps 2.1 to 2.3 as follows:
and 2.1, carrying out Fourier transform on the transient reflection data to obtain a second data set. In the concrete implementation, the uniform serial number of the transient reflection in the transient reflection data B1 is used as the mark information of each sampling point, and a set F = (1, 2 \8230i, \8230n) of all the sampling points is obtained; then, fourier transform is carried out on the transient reflection data B1, and a second data set E = (E1, E2 \8230; ei, \8230; en) is obtained, wherein the data sequence number of sampling points in the second data set E is P = (P1, P2 \8230; pi, \8230; pn). Wherein, the Fourier transform formula is:
Figure BDA0002712392280000101
a n and b n Which is the amplitude of the real frequency component, T is the function period,
Figure BDA0002712392280000102
n is an integer and n>0, the second data set may also be referred to as an initial to-be-matched snapshot reflection data set.
And 2.2, determining a second interval containing the amplitude points in the second data set, and identifying a target instantaneous reflection range corresponding to the second interval. In practical applications, a second interval including the amplitude point may be determined from the second data set, so that the transient reflection range corresponding to the second interval is determined as the target transient reflection range.
And 2.3, selecting a concentration object from the objects to be viewed in the virtual driving scene based on the concentration degree corresponding to the target instantaneous reflection range. The amplitude points of the transient reflection data in the second data set E are identified, and the set of sampling-point data serial numbers of these amplitude points is recorded as R = (R1, R2, ..., Ri, ..., Rn). In the embodiment of the invention, the amplitude variation range of the transient reflection is obtained from the set R, and the objects to be viewed corresponding to small transient reflection values are taken as concentration objects. In an alternative embodiment, the range of the snapshot reflection data in the target eye movement information may be determined and divided into a plurality of intervals (such as five intervals), with a different degree of concentration assigned to each interval, so that concentration objects are selected from the objects to be viewed in the virtual driving scene in order of decreasing degree of concentration.
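A matching sketch for steps 2.1 to 2.3 under the same Python/NumPy/SciPy assumptions; treating B1 as a per-sample blink-rate signal in blinks/min and ranking objects by the five concentration intervals described in step 3 below is an interpretation of this embodiment, not its literal formula.

```python
import numpy as np
from scipy.signal import find_peaks

def concentration_degree(blinks_per_min):
    """Map a blink (transient reflection) rate to a concentration degree (0-4)."""
    return 4 - int(np.digitize(blinks_per_min, [10, 12, 14, 16]))  # fewer blinks -> higher degree

def concentration_objects(B1, object_ids):
    """Harmonic analysis of blink data: pick objects viewed while blinking is sparse."""
    B1 = np.asarray(B1, dtype=float)
    E = np.abs(np.fft.rfft(B1 - B1.mean()))      # second data set: amplitude spectrum
    amp_idx, _ = find_peaks(E)                   # set R: dominant harmonic components
    ranked = sorted(range(len(B1)), key=lambda i: concentration_degree(B1[i]), reverse=True)
    seen, result = set(), []
    for i in ranked:                             # keep each object once, most concentrated first
        if object_ids[i] not in seen:
            seen.add(object_ids[i])
            result.append(object_ids[i])
    return result, amp_idx
```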
And 3, determining the importance level of the objects to be viewed in the virtual driving scene according to the preset interest weight, the preset concentration weight, the set of objects of interest and the set of concentration objects. Optionally, the larger the pupil diameter data, the higher the probability that the object to be viewed corresponding to that pupil diameter is an object of interest; the smaller the instantaneous reflection data, the higher the probability that the object to be viewed corresponding to that instantaneous reflection is a concentration object; and an object that is both an object of interest and a concentration object is considered an object to be viewed of high importance. To facilitate understanding of step 3, the embodiment of the present invention exemplarily provides an implementation for determining the importance level of the objects to be viewed. The pupil diameter data and the corresponding mental states are divided into five categories: a pupil diameter < 2.5 mm indicates an uninteresting object, 2.5-4 mm a generally interesting object, 4-5.5 mm a moderately interesting object, 5.5-7 mm an important interesting object, and > 7 mm a very interesting object. The blink reflex data, i.e., blink movement, and the corresponding cognitive-neural states are likewise divided into five categories: an instantaneous reflection rate > 16 times/min indicates a non-concentrated state, 14-16 times/min a normally concentrated state, 12-14 times/min a moderately concentrated state, 10-12 times/min a highly concentrated state, and < 10 times/min an extremely concentrated state. Weight calculation is performed on the data of the set Q and the set E according to a combined clustering weighting method and the combined level standards: let the measured value of the selected pupil diameter data be Qi, the measured value of the selected instantaneous reflection data be Ei, and the standard value of the j-th level of the i-th index be S_ij, with i = 1, 2 and j = 1, 2, 3, 4, 5, and let W_ij be the weight of the i-th index at the j-th level. The preset interest weight w_1 (i.e., the pupil diameter weight) is then computed from the pupil diameter data and the per-level pupil diameter standards, where C_i represents the i-th pupil diameter datum, S_1j represents the pupil diameter standard value of the j-th importance level, and n represents the total number of pupil diameter data; the preset concentration weight w_2 (i.e., the instantaneous reflection weight) is computed analogously from the instantaneous reflection data and standards, where D_p represents the p-th instantaneous reflection datum, S_2q represents the instantaneous reflection standard value of the q-th importance level, and m represents the total number of instantaneous reflection data. On this basis, the importance levels of the objects to be viewed in a multi-visual-recognition-target environment under different driving tasks are divided into five levels: key targets, important targets, medium important targets, general targets and unimportant targets.
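Because the explicit expressions for w_1 and w_2 are not reproduced here, the snippet below uses placeholder weights and a simple weighted average of the two degrees purely to illustrate how the interest and concentration sets could be fused into the five importance levels; it is not the patent's combined clustering weighting formula.

```python
def importance_level(obj, interest_set, concentration_set, w1=0.5, w2=0.5):
    """Fuse interest and concentration degrees (0-4 each) into one of five levels.

    interest_set / concentration_set: dicts mapping object id -> degree (0-4).
    w1 / w2: stand-ins for the preset interest and concentration weights.
    """
    levels = ["unimportant target", "general target", "medium important target",
              "important target", "key target"]
    score = w1 * interest_set.get(obj, 0) + w2 * concentration_set.get(obj, 0)
    return levels[min(4, int(round(score)))]

# Example: an object with high interest and extreme concentration is a key target.
print(importance_level("pedestrian_1", {"pedestrian_1": 4}, {"pedestrian_1": 4}))
```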
For the above step S106, determining the visual recognition order of the objects to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information may be performed as follows:
and a, obtaining sight point data based on the target eye movement information and the scene information.
And b, determining the sight point data meeting the preset conditions as target visual recognition data. The preset conditions comprise a fixation time condition and a fixation angle condition. In one embodiment, the fixation time condition may be that the fixation duration is greater than a preset time threshold, and the fixation angle condition may be that the fixation angle lies within a preset angle range. First, the difference between target visual recognition data and ordinary saccadic sight points should be clarified: target visual recognition data are the sight point positions obtained when the eyeball movement speed is lower than 5 deg/s, the visual angle deviation a satisfies a <= 0.41°, and the fixation duration exceeds 100 ms. The fixation duration is the length of time during which the visual axis center position remains unchanged, i.e., the time taken to extract information from the fixated object; the fixation angle is the angle through which the eyeball rotates in the horizontal and vertical directions relative to the head. Taking (0, 0) as the origin, namely the intersection of the vertical plane with the perpendicular from the eyeball to that plane, the included angle a between the vertical plane and the projection onto the horizontal plane of the line connecting the eyeball and the fixation point is the angle of the sight point position in the horizontal plane.
And c, determining the visual recognition sequence of the objects to be viewed corresponding to the target visual recognition data according to the association between the visual recognition time and the target visual recognition data. Since the experiment samples at a frequency of 100 Hz, in complex environments with different driving tasks and multiple visual recognition targets, MATLAB image processing is used to extract the set (X2, Y2) of target visual recognition data from the full set of sight points (X1, Y1); the position set of the target visual recognition data in the time domain is (Xt, Yt). A correspondence matrix between the visual recognition times and the positions of the target visual recognition sight points is then established, and the actual target sight positions and the corresponding target visual recognition sequence are obtained from the time entries of this correspondence matrix.
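The following sketch (same Python assumptions) ties steps a to c together: it keeps only gaze samples that satisfy the fixation conditions quoted above (eye speed below 5 deg/s, visual-angle deviation at most 0.41°, duration over 100 ms at the 100 Hz sampling rate) and orders the fixated objects by the time of their first qualifying fixation. The `hit_object` array, recording which scene object the gaze falls on at each sample, is an assumed input standing in for the MATLAB image-processing step.

```python
import numpy as np

def visual_recognition_order(t, speed, angle_dev, hit_object, fs=100.0):
    """Return object ids in the order of their first qualifying fixation."""
    min_run = int(round(0.100 * fs))                      # fixation duration > 100 ms
    qualifies = (np.asarray(speed) < 5.0) & (np.asarray(angle_dev) <= 0.41)
    order, seen, run = [], set(), 0
    for i, ok in enumerate(qualifies):
        run = run + 1 if ok else 0
        if run >= min_run and hit_object[i] not in seen:  # first fixation fixes the order
            seen.add(hit_object[i])
            order.append((t[i], hit_object[i]))
    return [obj for _, obj in sorted(order)]
```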
In order to facilitate understanding of the driving environment recognition method provided by the above embodiment, an embodiment of the present invention provides another driving environment recognition method, shown in fig. 3. In this method, multi-scale geometric analysis is performed on the pupil diameter data and harmonic analysis is performed on the snapshot reflection data, and the two types of indices, reflecting cognitive-neural and cognitive-psychological states, are combined to obtain the importance level of the objects to be viewed; the sight point data are partitioned into fixation points to determine the target visual recognition data at different times and the objects to be viewed corresponding to the target visual recognition data; these objects are then analyzed to obtain the visual recognition order of the objects to be viewed; and finally a visual search strategy model is established using the Petri net discrete modeling method. The visual search strategy model is used for identifying the driving environment based on the method provided by the above embodiment; its input is the driving task and its outputs are the importance level and the visual recognition order. The embodiment of the present invention also provides an implementation for establishing the visual search strategy model with the Petri net discrete modeling method: specifically, using the processed pupil diameter data Q and instantaneous reflection data E together with the correspondence between time and the positions of the target visual recognition data, a visual search strategy model taking the driving task as input and the importance level and visual recognition order as output is established for the driving environment with a single visual recognition target and the driving environment with multiple visual recognition targets under different driving tasks.
In order to facilitate understanding of the above visual search strategy model, an embodiment of the present invention provides a structural framework diagram of the visual search strategy model, as shown in fig. 4. Let X = {D0, B0, (X0, Y0), D1, B1, (X1, Y1), Q, E, (X2, Y2), I, R, (Xt, Yt), C, S, (Xi, Yi)}, where X denotes all the data used, including the original eye movement information, the scene information and the processed data: D0 denotes the pupil diameter data before preprocessing, B0 the instantaneous reflection data before preprocessing, (X0, Y0) the gaze point position data before preprocessing, D1 the pupil diameter data after preprocessing, B1 the instantaneous reflection data after preprocessing, (X1, Y1) the gaze point position data after preprocessing, Q the first data set, E the second data set, (X2, Y2) the target gaze point data set, I the set of sequence numbers of the peak-point sampling points in the first data set, R the set of sequence numbers of the amplitude-point sampling points in the second data set, (Xt, Yt) the positions of the target visual recognition data in the time domain, C and S the measured index values and the importance-level standard values used in the weight calculation, and (Xi, Yi) the actual target sight positions.
The transition (flow change) set Σ = {T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13} represented in fig. 4 is as follows: T1 is the data acquisition process (including acquiring the eye movement information and acquiring the scene information); T2 is the preprocessing (removal, compensation and noise reduction) of the pupil diameter data; T3 is the preprocessing of the transient reflection data; T4 is the preprocessing of the line-of-sight data; T5 is the multi-scale transformation of the pupil diameter data; T6 is the harmonic analysis of the transient reflection data; T7 is the extraction of the target visual recognition data with MATLAB; T8 is the extraction of the peak points and the search for the intervals in which the peak points lie; T9 is the extraction of the amplitude points and the search for the intervals in which the amplitude points lie; T10 is the combination of the positions in the time domain; T11 is the calculation of the transient reflection weight index; T12 is the calculation of the pupil diameter weight index; and T13 is the establishment of the time-position correspondence matrix of the target visual recognition data.
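As a rough illustration of this Petri-net structure, the sketch below encodes each transition T1 to T13 as a rule that consumes marked input places (named after the data sets of X) and marks its output places; the exact wiring of places to transitions is an assumption based on the descriptions above, not taken verbatim from the patent.

```python
# Places are named after the data sets in X; transitions follow T1-T13 above.
TRANSITIONS = {
    "T1":  (["driving_task"],   ["D0", "B0", "gaze0", "scene"]),
    "T2":  (["D0"],             ["D1"]),
    "T3":  (["B0"],             ["B1"]),
    "T4":  (["gaze0"],          ["gaze1"]),
    "T5":  (["D1"],             ["Q"]),
    "T6":  (["B1"],             ["E"]),
    "T7":  (["gaze1", "scene"], ["target_gaze"]),
    "T8":  (["Q"],              ["I"]),
    "T9":  (["E"],              ["R"]),
    "T10": (["target_gaze"],    ["gaze_t"]),
    "T11": (["R"],              ["w2"]),
    "T12": (["I"],              ["w1"]),
    "T13": (["gaze_t"],         ["time_position_matrix"]),
}

def run_model(marking):
    """Fire transitions until no new place can be marked (discrete-event execution)."""
    marking = set(marking)
    changed = True
    while changed:
        changed = False
        for name, (inputs, outputs) in TRANSITIONS.items():
            if all(p in marking for p in inputs) and not set(outputs) <= marking:
                marking |= set(outputs)
                changed = True
    return marking

# Starting from the driving task, the model eventually marks the importance weights
# and the time-position matrix from which the visual recognition order is read.
print(run_model({"driving_task"}))
```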
In summary, the driving environment recognition method provided by the embodiment of the present invention is oriented to complex driving environments with various driving tasks and multiple visual recognition targets. It uses the psychological state reflected by the driver's eye movement information as the basis for dividing the importance levels of visual recognition targets, which accords with the traffic safety principles of traffic psychology; it determines the visual recognition order by processing the sight positions at different times; and it constructs a visual search strategy model for complex environments using the Petri net system modeling method. This overcomes the shortcomings of low perception accuracy and efficiency in other intelligent vehicle environment perception methods, improves the pertinence of the vehicle's search for multiple visual recognition targets in complex environments, reduces the number of targets that need to be continuously tracked, shortens the time required for perception in complex environments, and makes the intelligent vehicle safer and more reliable.
As to the method for identifying a driving environment provided in the above embodiment, an embodiment of the present invention further provides an apparatus for identifying a driving environment, and referring to a schematic structural diagram of an apparatus for identifying a driving environment shown in fig. 5, the apparatus mainly includes the following components:
the data acquisition module 502 is configured to acquire original eye movement information of the target object and scene information of the virtual driving scene, and preprocess the original eye movement information to obtain target eye movement information.
A grade determination module 504, configured to determine an importance grade of the object to be viewed in the virtual driving scene based on the target eye movement information.
And the sequence determining module 506 is configured to determine a visual recognition sequence of the objects to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information.
And the environment identification module 508 is used for identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence.
The embodiment of the invention provides a driving environment recognition device and provides a new environment sensing method.
In one embodiment, the target eye movement information includes pupil diameter data and temporal reflex data; the rank determination module 504 is further configured to: performing multi-scale geometric analysis on the pupil diameter data to select an interested object from objects to be viewed in the virtual driving scene to obtain an interested object set; carrying out harmonic analysis on the transient ocular reflection data to select a concentration object from objects to be visually recognized in the virtual driving scene to obtain a concentration object set; and determining the importance level of the object to be viewed in the virtual driving scene according to the preset interest weight, the preset concentration weight, the interest object set and the concentration object set.
In one embodiment, the rank determination module 504 is further configured to: performing wavelet transformation on the pupil diameter data to obtain a first data set; determining a first interval containing peak points in the first data set, and identifying a target pupil diameter range corresponding to the first interval; and selecting an interested object from the objects to be viewed in the virtual driving scene based on the interested degree corresponding to the target pupil diameter range.
In one embodiment, the rank determination module 504 is further configured to: fourier transformation is carried out on the snapshot reflection data to obtain a second data set; determining a second interval containing the amplitude points in the second data set, and identifying a target instantaneous reflection range corresponding to the second interval; and selecting a concentration object from the objects to be visually recognized in the virtual driving scene based on the concentration degree corresponding to the target instantaneous reflection range.
In one embodiment, the preset interest weight w_1 is computed from the pupil diameter data and the per-level pupil diameter standards, wherein C_i represents the i-th pupil diameter datum, S_1j represents the pupil diameter standard value of the j-th importance level, and n represents the total number of pupil diameter data; the preset concentration weight w_2 is computed analogously from the instantaneous reflection data and the per-level instantaneous reflection standards, wherein D_p represents the p-th instantaneous reflection datum, S_2q represents the snapshot reflection standard value of the q-th importance level, and m represents the total number of snapshot reflection data.
In one embodiment, the order determination module 506 is further configured to: obtaining sight point data based on the target eye movement information and the scene information; determining sight point data meeting preset conditions as target visual recognition data; the preset conditions comprise a fixation time condition and a fixation angle condition; and determining the visual recognition sequence of the target visual recognition data corresponding to the object to be visually recognized according to the incidence relation between the visual recognition time and the target visual recognition data.
In one embodiment, the apparatus further comprises a preprocessing module configured to: and preprocessing the original eye movement information by utilizing a Lagrange interpolation algorithm and an empirical mode decomposition method to obtain target eye movement information.
The device provided by the embodiment of the present invention has the same implementation principle and technical effect as the method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the method embodiments without reference to the device embodiments.
The embodiment of the invention provides a simulated driver, which particularly comprises a simulator screen, a processor and a storage device, wherein the simulator screen is connected with the processor; the simulator screen is used for displaying a virtual driving scenario, and the storage device has a computer program stored thereon, which, when executed by the processor, performs the method of any of the above described embodiments.
Fig. 6 is a schematic structural diagram of another simulated driver according to an embodiment of the present invention, where the simulated driver 100 includes: a processor 60, a memory 61, a bus 62 and a communication interface 63, wherein the processor 60, the communication interface 63 and the memory 61 are connected through the bus 62; the processor 60 is adapted to execute executable modules, such as computer programs, stored in the memory 61.
The computer program product of the readable storage medium provided in the embodiment of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method for recognizing a driving environment, comprising:
acquiring original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information;
determining an importance level of an object to be visually recognized in the virtual driving scene based on the target eye movement information;
determining a visual recognition sequence of objects to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information;
identifying a driving environment of the virtual driving scene based on the importance level and the visual recognition sequence;
the target eye movement information comprises pupil diameter data and transient ocular reflex data;
the step of determining the importance level of the object to be visually recognized in the virtual driving scene based on the target eye movement information includes:
performing multi-scale geometric analysis on the pupil diameter data to select an object of interest from objects to be viewed and recognized in the virtual driving scene to obtain an object of interest set;
performing harmonic analysis on the transient reflection data to select a concentration object from objects to be viewed and recognized in the virtual driving scene to obtain a concentration object set;
determining the importance level of an object to be viewed in the virtual driving scene according to a preset interest weight, a preset concentration weight, the interest object set and the concentration object set;
the step of determining the visual recognition sequence of the object to be visually recognized in the virtual driving scene based on the target eye movement information and the scene information comprises the following steps:
obtaining sight point data based on the target eye movement information and the scene information;
determining sight point data meeting preset conditions as target visual recognition data; the preset conditions comprise a fixation time condition and a fixation angle condition;
and determining the visual recognition sequence of the target visual recognition data corresponding to the object to be visually recognized according to the incidence relation between the visual recognition time and the target visual recognition data.
2. The method according to claim 1, wherein the step of performing a multi-scale geometric analysis on the pupil diameter data to select an object of interest from objects to be viewed of the virtual driving scene comprises:
performing wavelet transformation on the pupil diameter data to obtain a first data set;
determining a first interval containing a peak point in the first data set, and identifying a target pupil diameter range corresponding to the first interval;
and selecting an interested object from objects to be viewed in the virtual driving scene based on the interested degree corresponding to the target pupil diameter range.
3. The method of claim 1, wherein the step of performing harmonic analysis on the transient reflection data to select a concentration object from the objects to be identified for viewing in the virtual driving scene comprises:
performing Fourier transform on the instantaneous reflection data to obtain a second data set;
determining a second interval containing amplitude points in the second data set, and identifying a target instantaneous reflection range corresponding to the second interval;
and selecting a concentration object from the objects to be visually recognized in the virtual driving scene based on the concentration degree corresponding to the target instantaneous reflection range.
4. The method of claim 1, wherein the preset interest weight w_1 is computed from the pupil diameter data and the per-level pupil diameter standards, wherein C_i represents the i-th pupil diameter datum, S_1j represents the pupil diameter standard value of the j-th importance level, and n represents the total number of pupil diameter data; and
the preset concentration weight w_2 is computed from the instantaneous reflection data and the per-level instantaneous reflection standards, wherein D_p represents the p-th instantaneous reflection datum, S_2q represents the snapshot reflection standard value of the q-th importance level, and m represents the total number of snapshot reflection data.
5. The method of claim 1, wherein the step of preprocessing the original eye movement information to obtain target eye movement information comprises:
and preprocessing the original eye movement information by utilizing a Lagrange interpolation algorithm and an empirical mode decomposition method to obtain target eye movement information.
6. An apparatus for recognizing a driving environment, comprising:
the data acquisition module is used for acquiring original eye movement information of a target object and scene information of a virtual driving scene, and preprocessing the original eye movement information to obtain target eye movement information;
the grade determining module is used for determining the importance grade of an object to be viewed in the virtual driving scene based on the target eye movement information;
the sequence determining module is used for determining the visual recognition sequence of the objects to be viewed in the virtual driving scene based on the target eye movement information and the scene information;
the environment identification module is used for identifying the driving environment of the virtual driving scene based on the importance level and the visual recognition sequence;
the target eye movement information comprises pupil diameter data and instantaneous reflection data;
the grade determining module is further configured to:
performing multi-scale geometric analysis on the pupil diameter data to select an interested object from objects to be viewed in the virtual driving scene to obtain an interested object set;
performing harmonic analysis on the instantaneous reflection data to select a concentration object from objects to be visually recognized in the virtual driving scene to obtain a concentration object set;
determining the importance level of an object to be viewed in the virtual driving scene according to a preset interest weight, a preset concentration weight, the interest object set and the concentration object set;
the sequence determining module is further configured to:
obtaining sight point data based on the target eye movement information and the scene information;
determining sight point data meeting preset conditions as target visual recognition data; the preset conditions comprise a fixation time condition and a fixation angle condition;
and determining the visual recognition sequence of the objects to be visually recognized corresponding to the target visual recognition data according to the association between the visual recognition time and the target visual recognition data.
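Claim 6 restates the method as four cooperating modules. The skeleton below only illustrates how such modules might be composed in code; every class and method name is hypothetical.

class DrivingEnvironmentRecognizer:
    """Illustrative composition of the four modules named in claim 6."""

    def __init__(self, data_acquisition, grade_determination,
                 sequence_determination, environment_identification):
        self.data_acquisition = data_acquisition
        self.grade_determination = grade_determination
        self.sequence_determination = sequence_determination
        self.environment_identification = environment_identification

    def run(self):
        # 1. Acquire raw eye movement + scene info, preprocess to target data.
        eye_info, scene_info = self.data_acquisition()
        # 2. Importance levels from pupil diameter and instantaneous reflection.
        levels = self.grade_determination(eye_info)
        # 3. Visual recognition sequence from gaze points in the scene.
        order = self.sequence_determination(eye_info, scene_info)
        # 4. Recognize the driving environment from levels + sequence.
        return self.environment_identification(levels, order)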
7. A simulated driver comprising a simulator screen, a processor and a memory;
the simulator screen is configured to display the virtual driving scene; the memory stores a computer program which, when executed by the processor, performs the method of any one of claims 1 to 5.
8. A computer storage medium storing computer software instructions for use in the method of any one of claims 1 to 5.
CN202011068538.8A 2020-09-30 2020-09-30 Driving environment recognition method and device and simulated driver Active CN112181149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011068538.8A CN112181149B (en) 2020-09-30 2020-09-30 Driving environment recognition method and device and simulated driver

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011068538.8A CN112181149B (en) 2020-09-30 2020-09-30 Driving environment recognition method and device and simulated driver

Publications (2)

Publication Number Publication Date
CN112181149A CN112181149A (en) 2021-01-05
CN112181149B true CN112181149B (en) 2022-12-20

Family

ID=73947750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011068538.8A Active CN112181149B (en) 2020-09-30 2020-09-30 Driving environment recognition method and device and simulated driver

Country Status (1)

Country Link
CN (1) CN112181149B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1705454A (en) * 2002-10-15 2005-12-07 沃尔沃技术公司 Method and arrangement for interpreting a subjects head and eye activity
CN105205443A (en) * 2015-08-13 2015-12-30 吉林大学 Traffic conflict identification method based on eye movement characteristic of driver
CN107545754A (en) * 2017-07-18 2018-01-05 北京工业大学 A kind of acquisition methods and device of road signs information threshold value
CN108068821A (en) * 2016-11-08 2018-05-25 现代自动车株式会社 For determining the device of the focus of driver, there are its system and method
CN108369780A (en) * 2015-12-17 2018-08-03 马自达汽车株式会社 Visual cognition helps system and the detecting system depending on recognizing object
CN109637261A (en) * 2019-01-16 2019-04-16 吉林大学 Auto manual drives driver's respond training system under power handover situations
CN109726426A (en) * 2018-11-12 2019-05-07 初速度(苏州)科技有限公司 A kind of Vehicular automatic driving virtual environment building method
CN111667568A (en) * 2020-05-28 2020-09-15 北京工业大学 Variable information board information publishing effect evaluation method based on driving simulation technology

Also Published As

Publication number Publication date
CN112181149A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN110427850B (en) Method, system and device for predicting lane change intention of driver on expressway
CN109444912B (en) Driving environment sensing system and method based on cooperative control and deep learning
CN112417940B (en) Domain adaptation for image analysis
Borghi et al. Embedded recurrent network for head pose estimation in car
Peng et al. Driving maneuver early detection via sequence learning from vehicle signals and video images
CN113592905B (en) Vehicle driving track prediction method based on monocular camera
CN112215120B (en) Method and device for determining visual search area and driving simulator
WO2018171875A1 (en) Control device, system and method for determining the perceptual load of a visual and dynamic driving scene
Kasneci et al. Aggregating physiological and eye tracking signals to predict perception in the absence of ground truth
CN103455795A (en) Method for determining area where traffic target is located based on traffic video data image
CN113743471A (en) Driving evaluation method and system
CN116331221A (en) Driving assistance method, driving assistance device, electronic equipment and storage medium
CN116665003B (en) Point cloud three-dimensional target detection method and device based on feature interaction and fusion
CN112181149B (en) Driving environment recognition method and device and simulated driver
CN111950371B (en) Fatigue driving early warning method and device, electronic equipment and storage medium
CN111259829B (en) Processing method and device of point cloud data, storage medium and processor
Yuan et al. Incrementally perceiving hazards in driving
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
CN113548056A (en) Automobile safety driving assisting system based on computer vision
CN116955943A (en) Driving distraction state identification method based on eye movement sequence space-time semantic feature analysis
Raksincharoensak et al. Integrated driver modelling considering state transition feature for individual adaptation of driver assistance systems
Hong et al. Towards drowsiness driving detection based on multi-feature fusion and LSTM networks
Rusmin et al. Design and implementation of driver drowsiness detection system on digitalized driver system
CN112347851B (en) Multi-target detection network construction method, multi-target detection method and device
CN116664873B (en) Image information processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant