CN112256124B - Emotion-based control work efficiency analysis method, equipment and system - Google Patents


Info

Publication number
CN112256124B
CN112256124B
Authority
CN
China
Prior art keywords
control
emotion
score
video image
physiological information
Prior art date
Legal status
Active
Application number
CN202011023966.9A
Other languages
Chinese (zh)
Other versions
CN112256124A (en)
Inventor
李小俚
赵小川
姚群力
顾恒
丁兆环
张昊
柳传财
张予川
Current Assignee
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date
Application filed by Beijing Normal University
Priority to CN202011023966.9A
Publication of CN112256124A
Application granted
Publication of CN112256124B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398Performance of employee with respect to a job function
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20Workers


Abstract

The present disclosure provides an emotion-based control work efficiency analysis method, device and system. The method comprises the following steps: acquiring physiological information data generated while a control player manipulates a target object to perform a target task; inputting the physiological information data into a preset emotion recognition model to obtain the control player's score on an emotion evaluation index, wherein the emotion recognition model reflects a mapping relation between control behavior data and physiological information data on the one hand and the score on the emotion evaluation index on the other; obtaining a control score of the control player according to the score on the emotion evaluation index; and executing a set operation according to the control score.

Description

Emotion-based control work efficiency analysis method, equipment and system
Technical Field
The present disclosure relates to the technical field of automatic analysis of control work efficiency, and more particularly, to a method, a device and a system for analyzing control work efficiency based on emotion.
Background
Different operators who control the same target object to execute the same target task can achieve different levels of control work efficiency. For example, when different operators fly the same type of unmanned aerial vehicle on the same target task, their performance differs: some operators complete the target task in a short time, and some maintain a good psychological state while executing it. Analyzing the control work efficiency exhibited when an operator controls a target object to execute a target task can serve as a basis for selecting the operator of the target object, and also as a basis for evaluating the suitability between any operator and any motion control device. At present, control work efficiency is usually analyzed by organizing experts to manually score an operator's control of the target object while executing the target task, so that the scoring result reflects the corresponding control work efficiency, a higher score indicating higher control work efficiency. Manual scoring consumes a large amount of manpower, and the results depend heavily on subjective human factors, leading to low accuracy and unfairness. An intelligent scheme for analyzing control work efficiency is therefore needed.
Disclosure of Invention
It is an object of embodiments of the present disclosure to provide a new solution for analyzing manipulation ergonomics.
According to a first aspect of the present disclosure, there is provided an emotion-based control work efficiency analysis method, comprising:
acquiring physiological information data generated while a control player manipulates a target object to perform a target task;
inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player on the emotion evaluation index; the emotion recognition model reflects a mapping relation between the control behavior data and the physiological information data and the score of the emotion evaluation index;
obtaining a control score of the control player according to the score of the control player on the emotion evaluation index;
and executing set operation according to the control score.
Optionally, the physiological information data includes an electroencephalogram signal;
the step of inputting the physiological information data into a preset emotion recognition model and obtaining the score of the control player on the emotion evaluation index comprises the following steps:
performing wavelet packet transformation processing on the electroencephalogram signals to obtain electroencephalogram time-frequency characteristics;
acquiring a vector value of a brain electricity emotion feature vector from the brain electricity time-frequency feature based on a preset first depth convolution neural network;
and based on a preset first classifier, obtaining the score of the control player on the emotion evaluation index according to the vector value of the electroencephalogram emotion feature vector.
Optionally, the physiological information data includes a facial video signal;
the step of inputting the physiological information data into a preset emotion recognition model and obtaining the score of the control player on the emotion evaluation index comprises the following steps:
acquiring a current video sampling interval;
sampling the face video signal based on the current video sampling interval to obtain a current frame video image;
determining the expression similarity between the current frame video image and the corresponding previous frame video image; the previous frame of video image is a frame of video image obtained by sampling the face video signal at the previous time;
determining the emotion recognition result of the current frame video image according to the expression similarity;
and obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the video image obtained by sampling the face video signal.
Optionally, the determining the expression similarity between the current frame video image and the corresponding previous frame video image includes:
acquiring a vector value of the expression characteristic vector of the current frame video image;
and based on a preset convolution network, determining the expression similarity between the current frame video image and the previous frame video image according to the vector value of the expression characteristic vector of the current frame video image and the vector value of the pre-stored expression characteristic vector of the previous frame video image.
Optionally, the method further includes a step of training the convolutional network, including:
acquiring third training samples, wherein one third training sample reflects the mapping relation between vector values of expression feature vectors corresponding to two frames of facial images and labels, and the labels reflect whether the two frames of facial images in the corresponding third sample belong to the same expression;
and training according to the vector values of the expression feature vectors of the two frames of facial images of the third training sample and the label of the third training sample to obtain the convolutional network.
Optionally, the training the vector values of the expression feature vectors of the two frames of facial images of the third training sample and the label of the third training sample to obtain the convolutional network includes:
determining an expression similarity prediction expression of the third training sample by taking a third network parameter of a convolutional network as a variable according to vector values of expression feature vectors of two frames of facial images of the third training sample;
constructing a third loss function according to the expression similarity prediction expression of the third training sample and the label of the third training sample;
and determining the third network parameter according to the third loss function to obtain the convolutional network.
Optionally, the determining the emotion recognition result of the current frame video image according to the expression similarity includes:
taking the emotion recognition result of the previous frame of video image as the emotion recognition result of the current frame of video image under the condition that the expression similarity is smaller than or equal to a similarity threshold;
under the condition that the expression similarity is larger than the similarity threshold, acquiring a vector value of a face emotion feature vector of the current frame video image based on a preset second depth convolution neural network; and based on a preset second classifier, obtaining an emotion recognition result of the current frame video image according to the vector value of the face emotion characteristic vector of the current frame video image.
Optionally, the method further includes:
determining a next video sampling interval for sampling the facial video signal next time according to the expression similarity when the expression similarity is less than or equal to a similarity threshold;
randomly generating a next video sampling interval for sampling the face video signal next time under the condition that the expression similarity is greater than the similarity threshold; and the next video sampling interval is less than or equal to a preset maximum sampling interval and greater than or equal to a minimum sampling interval.
Optionally, the method further includes a step of determining the similarity threshold, including:
acquiring a reference face video signal of the control player;
determining the expression similarity of every two adjacent video images in the reference face video signal;
and determining the similarity threshold according to the expression similarity of every two adjacent video images.
Optionally, the obtaining of the score of the control player for the emotion evaluation index according to the emotion recognition result of the video images obtained by sampling the face video signal includes:
determining an emotion recognition result of the face video signal according to an emotion recognition result of a frame video image obtained by sampling the face video signal based on a voting method;
and obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the face video signal.
Optionally, the step of acquiring the physiological information data includes:
the method comprises the steps of obtaining physiological information data provided by various physiological information acquisition devices, wherein the physiological information data provided by any physiological information acquisition device comprises at least one of physiological signal data and physiological image data.
Optionally, the acquiring physiological information data provided by each physiological information acquisition device includes:
controlling each physiological information acquisition device to synchronously carry out respective acquisition operation;
and acquiring physiological information data output by the physiological information acquisition equipment through respective acquisition operation.
Optionally, each physiological information acquisition device includes at least one of an electroencephalogram acquisition device, an electrodermal acquisition device, an electrocardiograph acquisition device, an eye tracking device, a video acquisition device for acquiring facial expressions, and a voice acquisition device for acquiring voices;
the physiological information data provided by the electroencephalogram acquisition device comprises at least one of an electroencephalogram signal and an electroencephalogram image; the physiological information data provided by the electrodermal acquisition device comprises at least one of an electrodermal signal and an electrodermal image; the physiological information data provided by the electrocardiograph acquisition device comprises at least one of an electrocardiographic signal and an electrocardiographic image; the physiological information data provided by the eye tracking device includes at least one of change data of ocular features and ocular image data; the physiological information data provided by the video acquisition device comprises at least one of a facial video signal and change data of facial features; and the physiological information data provided by the voice acquisition device includes at least one of a voice signal and a sound wave image.
Optionally, the performing the setting operation includes at least one of:
a first item outputting the manipulation score;
a second item, which provides a selection result whether the control player is selected or not according to the control score;
a third item, determining the control level of the control player according to the control score;
a fourth item for determining a control task to be performed by the control player according to the control score;
and fifthly, selecting a control combination which enables the control score to meet the set requirement according to the control score of the same control player for controlling the target object to execute the target task through different motion control devices, wherein one control combination comprises the control player and the motion control device which are matched.
Optionally, the method further includes:
providing a setting entrance in response to an operation of setting an application scene;
acquiring an application scene input through the setting entrance, wherein the input application scene reflects an operation to be executed based on a control score;
and determining the operation content of the set operation according to the input application scene.
Optionally, the method further includes:
and providing a virtual scene corresponding to the target task, wherein the target object is a virtual object in the virtual scene.
Acquiring a control command generated by the control player through a control motion control device, and updating the virtual scene according to the control command;
and acquiring feedback data generated by the virtual scene, and sending the feedback data to the motion control device.
Optionally, the acquiring the control behavior data and the physiological information data generated when the control player controls the target object to execute the target task includes:
and acquiring control behavior data and physiological information data generated when the control player controls the target object to execute the target task in the virtual scene.
Optionally, the method includes:
providing a configuration interface in response to an operation to configure the target task;
acquiring configuration information for the target task, which is input through the configuration interface;
and providing a virtual scene corresponding to the target task according to the configuration information.
According to a second aspect of the present disclosure, there is provided an emotion-based manipulation ergonomics apparatus comprising at least one computing device and at least one storage device, wherein,
the at least one storage device is configured to store instructions for controlling the at least one computing device to perform the method according to the first aspect of the present disclosure.
According to a third aspect of the present disclosure, a control ergonomics system based on emotion is provided, the system comprising a task execution device, physiological information acquisition devices and the control ergonomics analysis device of the second aspect of the present disclosure, wherein the task execution device and the physiological information acquisition devices are in communication connection with the control ergonomics analysis device.
Optionally, the task execution device includes a manipulated target object and a motion control device for manipulating the target object, and the target object is connected to the motion control device in a communication manner.
Optionally, the motion control device is a flight control device, and the target object controlled by the flight control device is an unmanned aerial vehicle.
The beneficial effect of the method is that the score of the control player on the emotion evaluation index is obtained from the physiological information data generated while the control player manipulates the target object to execute the target task, the control score of the control player is then determined from that score, and the control score can be used to select the operator of the target object, to grade the operator, and/or to match the operator with a motion control device. The method of this embodiment completes the analysis of control work efficiency automatically, which saves labor cost and time cost; in addition, the analysis greatly reduces dependence on expert experience and improves the accuracy and effectiveness of the analysis.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of the component architecture of an emotion-based control work efficiency analysis system according to one embodiment;
FIG. 2 is a schematic diagram of the component structure of an emotion-based control work efficiency analysis system according to another embodiment;
FIG. 3 is a schematic diagram of the hardware configuration of an emotion-based control work efficiency analysis device according to another embodiment;
FIG. 4 is a flow diagram of an emotion-based control work efficiency analysis method according to an embodiment;
FIG. 5 is a schematic diagram of a structural equation model according to one embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< System embodiment >
Fig. 1 and 2 are schematic component block diagrams of a control work efficiency analysis system 100 to which the methods of embodiments of the present disclosure may be applied.
As shown in fig. 1, the manipulation ergonomics system 100 may include an electronic device 110, a task performance device 120 and physiological information collection devices 130.
The electronic device 110 may be a server or a terminal device, and is not limited herein.
The server may be, for example, a blade server, a rack server, or the like, and the server may also be a server cluster deployed in the cloud. The terminal device can be any device with data processing capability, such as a PC, a notebook computer, a tablet computer and the like.
The electronic device 110 may include a processor 1101, a memory 1102, an interface device 1103, a communication device 1104, a display device 1105, an input device 1106.
The memory 1102 is used to store computer instructions, and includes, for example, a ROM (read-only memory), a RAM (random access memory), and nonvolatile memory such as a hard disk. The processor 1101 is configured to execute a computer program, which may be written in an instruction set of architectures such as x86, Arm, RISC, MIPS, or SSE. The interface device 1103 includes various bus interfaces, for example a serial bus interface (including a USB interface) and a parallel bus interface. The communication device 1104 is capable of wired or wireless communication, for example using at least one of an RJ45 module, a WiFi module, a 2G to 6G mobile communication module, a Bluetooth module, a network adapter, and the like. The display device 1105 is, for example, a liquid crystal display, an LED display, or a touch panel. The input device 1106 may include, for example, a touch screen, a keyboard, and a mouse.
In this embodiment, the memory 1102 of the electronic device 110 is configured to store computer instructions for controlling the processor 1101 to operate so as to implement the control work efficiency analysis method according to any embodiment of the present disclosure. The skilled person can design the instructions according to the disclosed aspects of the present disclosure. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Although a plurality of devices of the electronic apparatus 110 are shown in fig. 1, the present disclosure may only refer to some of the devices, for example, the electronic apparatus 110 only refers to the memory 1102, the processor 1101, the communication device 1104 and the like.
In one embodiment, as shown in fig. 1, the task performing device 120 may be a real environment-based performing device, and the task performing device 120 includes a motion control apparatus 1201 and a target object 1202 communicatively connected to the motion control apparatus 1201, that is, a target manipulation object, and a manipulation person may manipulate the target object 1202 through the motion control apparatus 1201 to perform a target task. For example, the target object 1202 is a drone, and the motion control device 1201 is a flight control device for operating the drone. As another example, the target task includes completing at least one of a splay flight, a spin flight, a collective flight, and the like in a set environment. As another example, the set environment includes wind, rain, fog, and the like. Of course, the target object 1202 may also be other controlled objects, such as an unmanned vehicle, any type of robot, etc., and is not limited herein.
In this embodiment, the human operator may send a control command to the target object 1202 through the motion control device 1201, so that the target object 1202 acts according to the control command. In the process of controlling and executing the target task, the target object 1202 acquires motion state data and feeds the motion state data back to the motion control device 1201, so that an operator can make control judgment and the like.
The motion control device 1201 may include, for example, at least one of a remote control and a remote control handle.
The motion control device 1201 may include a processor, a memory, an interface device, an input device, a communication device, and the like. The memory may store computer instructions that, when executed by the processor, perform: an operation of transmitting a corresponding control command to the target object 1202 according to an operation of the input device by the operator; acquiring motion state data returned by a target object, and performing corresponding processing operation; and uploading the collected manipulation result data to the electronic device 110, etc., which will not be further described herein.
The target object 1202 may include a processor, memory, communication devices, power devices, sensors, and the like. The memory may store computer instructions that, when executed by the processor, perform: according to the control command sent by the motion control device 1201, the power device and the like of the control target object 1202 execute corresponding actions; acquiring data acquired by each sensor to form motion state data; and control the communication means to transmit the motion state data to the motion control means 1201 and the like.
In this embodiment, the task execution device 120 is communicatively connected to the electronic device 110 to upload the manipulation result data to the electronic device 110. This may be, for example, that the task performing device 120 is communicatively connected to the electronic device 110 via the motion control apparatus 1201. For another example, the motion control apparatus 1201 and the target object 1202 may be both communicatively connected to the electronic device 110, which is not limited herein.
In another embodiment, as shown in fig. 2, the task performing device 120 may be a task performing device based on semi-physical simulation of a virtual environment, and the task performing device 120 may include a terminal device 1203 and a real motion control apparatus 1201, where the terminal device 1203 is configured to provide a virtual scene corresponding to a target task, that is, a simulation scene, and in this embodiment, the target object 1202 is a virtual object in the virtual scene. In this embodiment, the motion control apparatus 1201 is in communication connection with the terminal device 1203 to implement data and/or command interaction between the motion control apparatus 1201 and the virtual scene, so that an operator can operate and control the target object 1202 to execute a target task in the virtual scene through the motion control apparatus 1201.
In this embodiment, the terminal device 1203 may have a hardware structure similar to that of the electronic device 110, which is not described herein again, and the terminal device 1203 and the electronic device 110 may be physically separated devices or may be the same device, that is, the electronic device 110 may also provide the virtual environment, which is not limited herein.
In fig. 1, each physiological information collection device 130 is used to provide the physiological information data required by the electronic device when implementing the control work efficiency analysis method according to any of the embodiments. Each physiological information acquisition device 130 is in communication connection with the electronic device 110 to upload the physiological information data it provides to the electronic device 110.
Each physiological information acquisition device 130 includes at least one of an electroencephalogram acquisition device 1301, an electrodermal acquisition device 1302, an electrocardiograph acquisition device 1303, a video acquisition device 1304 for acquiring facial expressions, an eye movement tracking device 1305, and a voice acquisition device 1306 for acquiring voices.
The physiological information data provided by the brain electrical acquisition device 1301 includes at least one of a brain electrical signal and a brain electrical image.
The physiological information data provided by the electrodermal acquisition device 1302 includes at least one of a electrodermal signal and an electrodermal image.
The electrocardiographic acquisition device 1303 provides physiological information data including at least one of electrocardiographic signals and electrocardiographic images.
The physiological information data provided by the video capture device 1304 may include at least one of facial feature variation data and facial image data.
The physiological information data provided by the eye tracking device 1305 may include at least one of change data of the ocular feature and ocular image data.
The physiological information data provided by the voice capture device 1306 may include at least one of voice signals and acoustic images.
Any physiological information acquisition device 130 may include a front-end acquisition device and a data processing circuit connected to it. The front-end acquisition device is used to acquire raw data and may, for example, be an electrode device in contact with the control player. The data processing circuit performs corresponding preprocessing on the raw data, the preprocessing including at least one of signal amplification, filtering, denoising, and notch filtering. The data processing circuit may be implemented by a basic circuit built from electronic components, by a processor executing instructions, or by a combination of the two, which is not limited herein.
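As an illustration of this kind of preprocessing, the sketch below applies a 50 Hz notch filter and a band-pass filter to a raw physiological signal with SciPy; the sampling rate, filter band, and function names are assumptions chosen for illustration and are not specified by this disclosure.

```python
import numpy as np
from scipy import signal

def preprocess_physiological_signal(raw, fs=250.0):
    """Hypothetical preprocessing chain: 50 Hz notch plus 0.5-45 Hz band-pass."""
    # Remove power-line interference with a notch filter (assumed 50 Hz mains).
    b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
    notched = signal.filtfilt(b_notch, a_notch, raw)
    # Keep the band typically used for EEG-like signals (assumed 0.5-45 Hz).
    b_band, a_band = signal.butter(4, [0.5, 45.0], btype="bandpass", fs=fs)
    return signal.filtfilt(b_band, a_band, notched)

if __name__ == "__main__":
    fs = 250.0
    t = np.arange(0, 2.0, 1.0 / fs)
    raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # toy signal + mains noise
    clean = preprocess_physiological_signal(raw, fs)
    print(clean.shape)
```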
The electronic device 110 and the task performing device 120, and the electronic device 110 and each physiological information collecting device 130 may be in communication connection in a wired or wireless manner, which is not limited herein.
In one embodiment, as shown in fig. 3, the present disclosure provides a control work efficiency analysis apparatus 140 comprising at least one computing device 1401 and at least one storage device 1402, wherein the at least one storage device 1402 is configured to store instructions for controlling the at least one computing device 1401 to perform the control work efficiency analysis method according to any of the embodiments of the present disclosure. The control work efficiency analysis apparatus 140 may include at least one electronic device 110, and may further include a terminal device 1203, etc., which is not limited herein.
< method examples >
Fig. 4 is a flow diagram of a control work efficiency analysis method according to one embodiment, which may be implemented, for example, by the control work efficiency analysis apparatus 140 shown in fig. 3. In this embodiment, the analysis of the control work efficiency exhibited by a control player operating the task execution device is described as an example, and the method may include the following steps S410 to S450:
step S410, acquiring control behavior data and physiological information data generated by a control player controlling a target object to execute a target task.
The target object may be, for example, a drone or the like.
The target task comprises task content, a corresponding task environment and the like.
In one embodiment, as shown in fig. 1, the control player may control the target object in a real scene through the motion control device 1201, i.e., the target object is real with the task environment.
In another embodiment, as shown in fig. 2, the control player may manipulate the target object in a virtual scene provided by the terminal device 1203 through the motion control device 1201, that is, the target object and the task environment are both virtual. In this embodiment, in order to implement interaction of data and commands between the motion control device 1201 and the virtual scene, the method may further include the following steps S4011 to S4013:
step S4011, providing a virtual scene corresponding to the target task, wherein the target object is a virtual object in the virtual scene.
Step S4012 obtains a control command generated by an operator operating the motion control apparatus 1201, and updates the virtual scene according to the control command.
In this step S4012, updating the virtual scene includes updating the task environment and the state of the target object, which includes the position and posture of the target object, and the like.
Step S4013 obtains feedback data generated in the virtual scene, and sends the feedback data to motion control apparatus 1201.
The virtual scene includes all virtual things of the corresponding target task provided by the terminal device 1203, including virtual environments and virtual objects, etc.
In step S4013, the feedback data may be collected by a virtual sensor of the virtual object, and sent to the motion control apparatus 1201 by the terminal device 1203, so as to allow the control player to perform the control judgment. The feedback data may also be used for the device 140 to obtain at least part of the above-mentioned manipulation result data.
In this embodiment, the acquiring of the manipulation result data generated by the manipulation of the target object by the manipulation player in step S410 may include: and acquiring control result data generated by controlling the virtual object to execute the target task under the virtual scene by the control player.
In this embodiment, the method may further include the following steps S4021 to S4023:
step S4021, in response to the operation of configuring the target task, provides a configuration interface.
The device 140 may have a simulation application installed on it, and an interface of the simulation application may provide an entry for triggering the operation of configuring the target task; through this entry, a configuration person can access the configuration interface.
The configuration interface may include at least one of an input box, a checklist, and a drop-down list for a configuration person to configure the target task.
Step S4022, acquiring configuration information for the target task input through the configuration interface.
In step S4022, the configuration information input through the configuration interface may be acquired in response to an operation to complete configuration. The configuration information includes, for example, information reflecting the task content and task environment, and the like.
In step S4022, for example, the configurator may trigger the operation of completing the configuration through a key such as "confirm" or "submit" provided by the configuration interface.
Step S4023, providing a virtual scene corresponding to the target task according to the configuration information.
The virtual scene comprises a virtual object corresponding to the target task, a virtual environment and the like.
As can be seen from the above steps S4021 to S4023, the configurator can flexibly configure the target task through the configuration interface as needed, so as to provide virtual scenes corresponding to different target tasks through the device 140.
In the embodiment shown in fig. 2, the acquiring of the physiological information data generated by the player operating the target object to perform the target task in step S410 may include: and acquiring physiological information data generated by controlling the virtual object to execute the target task under the virtual scene by controlling the player.
The physiological information data reflects the control player's cognitive ability with respect to the target task: the higher the cognitive ability, the more easily the control player completes the target task, and the weaker the cognitive ability, the harder the target task is to complete. The difficulty of completing the target task produces corresponding reactions in the control player's physiological state, such as heart rate, electroencephalogram, electrodermal, facial expression, eye position, and voice reactions. Therefore, in this embodiment, scores on the evaluation indexes reflecting the control player's cognitive ability with respect to the target task can be obtained from the physiological information data.
The physiological information data is multidimensional data including a plurality of index data. The physiological information data may include at least one of information data reflecting a brain load condition, information data reflecting a nerve fatigue condition, and information data reflecting an emotion, for example.
Correspondingly, each evaluation index for evaluating the cognitive ability of the control player includes, for example: mental fatigue evaluation index, brain load index and emotion evaluation index. According to the physiological information data, a score corresponding to each evaluation index can be obtained.
The physiological information data may be provided by respective physiological information acquisition devices.
In this embodiment, the physiological information data provided by any physiological information acquisition device may include at least one of physiological signal data and physiological image data.
For example, each physiological information acquisition device includes a brain electrical acquisition device 1301 as shown in fig. 1, and the physiological information data provided by the brain electrical acquisition device 1301 may include at least one of a brain electrical signal (electrical signal) and a brain electrical image.
As another example, each physiological information acquisition device includes an electrodermal acquisition device 1302 as shown in fig. 1, and the physiological information data provided by the electrodermal acquisition device 1302 may include at least one of an electrodermal signal (electrical signal) and an electrodermal image.
For another example, each physiological information acquisition device includes an electrocardiograph acquisition device 1303 shown in fig. 1, and the physiological information data provided by the electrocardiograph acquisition device 1303 may include at least one of an electrocardiograph signal and an electrocardiograph image.
For another example, each physiological information acquisition device includes a video acquisition device 1304 as shown in fig. 1, and the physiological information data provided by the video acquisition device 1304 includes at least one of change data of facial features and facial image data. The facial feature change data includes at least one of data on occurrence of an eye closing action, and data on occurrence of a yawning action, for example.
For another example, each physiological information collection device includes an eye tracking device 1305 as shown in fig. 1, the eye tracking device 1305 providing physiological information data including at least one of change data of an ocular feature and ocular image data. The data of changes in the ocular characteristics include, for example, data of occurrence of a blinking motion, data of occurrence of a closing motion, data of occurrence of a saccadic motion, data of occurrence of a gazing motion.
As another example, each physiological information acquisition device includes a voice acquisition device 1306 shown in fig. 1, and the physiological information data provided by the voice acquisition device 1306 includes at least one of a voice signal and a sound wave image.
After the raw data is acquired by any physiological information acquisition device through the acquisition device at the front end, at least one of signal amplification, filtering, denoising and notch processing can be performed on the raw data, and the physiological information data is generated and provided for the device 140 so that the device 140 can obtain the physiological information data.
Since the physiological information data come from different physiological information acquisition devices, in order to make the evaluation of the cognitive abilities of the control players have the same time reference according to the physiological information data, in one embodiment, the acquiring the physiological information data provided by each physiological information acquisition device may include: controlling each physiological information acquisition device to synchronously perform acquisition operation; and acquiring physiological information data generated by the physiological information acquisition equipment through corresponding acquisition operation.
In this embodiment, for example, a unified clock reference may be set to trigger each physiological information acquisition device to synchronously start and end the corresponding acquisition operation, and the like.
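A minimal sketch of this synchronization idea is shown below, assuming each acquisition device can be told to start and stop at a shared timestamp; the device wrapper and its methods are hypothetical and not defined by this disclosure.

```python
import threading
import time

class AcquisitionDevice:
    """Hypothetical wrapper for one physiological information acquisition device."""
    def __init__(self, name):
        self.name = name

    def acquire_until(self, start_time, stop_time):
        # Wait for the shared clock reference, then acquire until the common stop time.
        while time.time() < start_time:
            time.sleep(0.001)
        print(f"{self.name}: acquisition started")
        while time.time() < stop_time:
            time.sleep(0.01)  # placeholder for reading samples
        print(f"{self.name}: acquisition stopped")

def run_synchronized(devices, duration_s=1.0):
    # A single clock reference (start_time) triggers all devices together.
    start_time = time.time() + 0.5
    stop_time = start_time + duration_s
    threads = [threading.Thread(target=d.acquire_until, args=(start_time, stop_time))
               for d in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_synchronized([AcquisitionDevice("EEG"), AcquisitionDevice("ECG"),
                      AcquisitionDevice("eye tracker")])
```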
And step S420, inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player on the emotion evaluation index.
The emotion recognition model can reflect the mapping relation between the control behavior data and the physiological information data and the score of the emotion evaluation index.
In one embodiment of the present disclosure, the physiological information data includes an electroencephalogram signal; then, inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player for the emotion evaluation index may include steps S4071 to S4073 as follows:
step S4071, wavelet packet transformation processing is carried out on the electroencephalogram signals, and electroencephalogram time-frequency characteristics are obtained.
Step S4072, based on the preset first depth convolution neural network, obtaining the vector value of the electroencephalogram emotion feature vector from the electroencephalogram time-frequency feature.
Step S4073, based on a preset first classifier, scoring of the control player on the emotion evaluation index is obtained according to the vector value of the electroencephalogram emotion feature vector.
Specifically, a wavelet packet transform module may perform wavelet packet transform processing on the electroencephalogram signal to obtain the electroencephalogram time-frequency features. The wavelet packet transform module may be configured to perform a k-level wavelet packet decomposition (for example, k = 6). Wavelet packet transform provides a finer decomposition of the high-frequency part of the signal, and the decomposition has neither redundancy nor omission, so a better time-frequency localization analysis of the signal can be carried out.
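As a rough illustration, the sketch below performs a 6-level wavelet packet decomposition of a single-channel EEG segment with PyWavelets and stacks the frequency-ordered leaf coefficients into a time-frequency feature matrix; the wavelet basis ('db4') and the single-channel layout are assumptions, since the disclosure does not specify them.

```python
import numpy as np
import pywt

def eeg_wavelet_packet_features(eeg_channel, wavelet="db4", level=6):
    """Decompose one EEG channel into wavelet packet leaves (frequency-ordered)."""
    wp = pywt.WaveletPacket(data=eeg_channel, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    leaves = wp.get_level(level, order="freq")  # 2**level frequency bands
    # Stack leaf coefficients into a (bands x coefficients) time-frequency matrix.
    return np.stack([node.data for node in leaves])

if __name__ == "__main__":
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # toy alpha-band signal
    tf_features = eeg_wavelet_packet_features(eeg)
    print(tf_features.shape)  # (64, n_coefficients_per_band)
```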
In one embodiment, a convolutional neural network can be used for extracting vector values of the electroencephalogram emotion feature vector from the electroencephalogram time-frequency features. The required feature extraction capability is realized through a specially designed lightweight convolutional neural network with relatively low calculation overhead.
In one example, ResNet18 may be chosen as the base model for a convolutional neural network, which balances accuracy and the cost of resource overhead better than other models. The modified network was named EsNet26 and the network structure is shown in Table 4 below.
Table 4 (network structure of EsNet26; the table is provided as an image in the original document and is not reproduced here)
In one embodiment, the vector value of the electroencephalogram emotion feature vector is used as input, and the score of the control player for the emotion evaluation index is obtained based on a classifier of a Softmax function.
The classifier based on the Softmax function connects the feature vector output by the preceding fully-connected layer to the output nodes, and obtains through Softmax regression an n-dimensional vector [p_1, p_2, …, p_n]^T, where the value of each dimension is the probability that the emotion type of the input electroencephalogram signal belongs to the corresponding class.
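For illustration only, the sketch below shows how a Softmax layer can turn an emotion feature vector into such a probability vector; the feature dimension, the number of emotion classes, and the random weights are placeholders rather than values fixed by this disclosure.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

def classify_emotion(feature_vector, weights, bias):
    """Fully connected layer followed by Softmax: returns [p_1, ..., p_n]^T."""
    logits = weights @ feature_vector + bias
    return softmax(logits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_vector = rng.normal(size=128)        # assumed EEG emotion feature vector
    weights = rng.normal(size=(4, 128)) * 0.01   # assumed 4 emotion classes
    bias = np.zeros(4)
    probs = classify_emotion(feature_vector, weights, bias)
    print(probs, probs.sum())
```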
In one embodiment of the present disclosure, the physiological information data includes a facial video signal. Then, inputting the physiological information data into a preset emotion recognition model, and obtaining the score of the control player for the emotion evaluation index includes steps S4081 to S4085 as follows:
step S4081, the current video sampling interval is obtained.
In this embodiment, the current video sampling interval is the sampling interval used for the current sampling of the face video signal, which yields the current frame video image; it represents the number of image frames between the current frame video image and the previous frame video image. The previous frame video image is the video image obtained by the immediately preceding sampling of the face video signal.
The current video sampling interval may be a preset fixed value, may also be a random value meeting a preset condition, and may also be determined according to the expression similarity between the video images sampled by the corresponding previous two frames.
In one embodiment of the disclosure, when the expression similarity between the two adjacent video images sampled before the current video image (i.e., the previous two sampled frames) is less than or equal to a similarity threshold, the current video sampling interval is determined according to the expression similarity of those two previous video images.
Specifically, the current video sampling interval Num_skip may be determined by a formula (given as an image in the original document and not reproduced here) in which sim_ff denotes the expression similarity of the previous two frames of video images, Λ is the preset upper limit of the sampling interval, λ is the preset lower limit of the sampling interval, and θ_ff is the similarity threshold.
When the expression similarity between the two previously sampled adjacent video images is greater than the similarity threshold, the current video sampling interval is generated randomly; the current video sampling interval is less than or equal to the preset maximum sampling interval and greater than or equal to the minimum sampling interval.
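The exact formula for Num_skip appears only as an image in the source, so the sketch below uses an assumed linear mapping between the bounds purely to illustrate the two branches described above; the function name, the mapping, and the default bounds are all hypothetical.

```python
import random

def next_sampling_interval(sim_ff, theta_ff, lam=2, big_lambda=16):
    """Pick the number of frames to skip before the next sample of the face video.

    sim_ff is the expression-similarity score of the previous two sampled frames
    (a distance-like value: smaller means more similar); theta_ff is the per-user
    similarity threshold; lam / big_lambda are the preset lower / upper bounds of
    the sampling interval. The linear mapping below is an assumed stand-in for the
    formula that is only available as an image in the original document.
    """
    if sim_ff <= theta_ff:
        # Expression is stable: skip more frames the more similar the two frames are.
        ratio = max(0.0, 1.0 - sim_ff / theta_ff)
        return int(round(lam + (big_lambda - lam) * ratio))
    # Expression changed noticeably: fall back to a random interval within the bounds.
    return random.randint(lam, big_lambda)

if __name__ == "__main__":
    print(next_sampling_interval(sim_ff=0.1, theta_ff=0.5))  # stable -> larger skip
    print(next_sampling_interval(sim_ff=0.8, theta_ff=0.5))  # changed -> random skip
```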
In one embodiment of the present disclosure, the method may further include the step of determining a similarity threshold, including:
acquiring a reference face video signal of a control player;
determining the expression similarity of every two adjacent video images in the reference face video signal;
and determining a similarity threshold according to the expression similarity of every two adjacent video images.
Since different users may differ in how and to what degree they express emotions, the similarity threshold is set individually for each user. The similarity threshold θ_ff may be calculated by a formula (given as an image in the original document and not reproduced here) in which M(frame_j) denotes the vector value of the expression feature vector of the j-th frame video image in the reference face video signal, M(frame_{j+1}) denotes the vector value of the expression feature vector of the (j+1)-th frame video image in the reference face video signal, L is the total number of image frames of the reference face video signal, and α is a preset parameter value.
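Because the threshold formula is likewise only available as an image, the sketch below assumes one plausible reading of it: θ_ff is α times the mean feature distance between adjacent frames of the user's reference video. Treat this as an illustrative assumption, not the disclosed formula.

```python
import numpy as np

def similarity_threshold(reference_features, alpha=1.0):
    """Estimate a per-user similarity threshold theta_ff from a reference face video.

    reference_features: array of shape (L, D) holding the expression feature vectors
    M(frame_1) ... M(frame_L) of the L frames of the reference video.
    Averaging the adjacent-frame feature distances and scaling by alpha is an assumed
    interpretation of the formula shown only as an image in the original text.
    """
    ref = np.asarray(reference_features, dtype=float)
    adjacent_distances = np.linalg.norm(ref[1:] - ref[:-1], axis=1)  # L-1 distances
    return alpha * adjacent_distances.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    features = rng.normal(size=(30, 128))  # toy reference video: 30 frames, 128-D features
    print(similarity_threshold(features, alpha=1.2))
```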
Step S4082, sampling the face video signal based on the current video sampling interval to obtain the current frame video image.
In one embodiment of the present disclosure, steps S4081 and S4082 may be implemented by a frame sampler.
Step S4083, determining the expression similarity between the current frame video image and the corresponding previous frame video image.
The previous frame of video image is a frame of video image obtained by sampling the face video signal at the previous time.
In one embodiment of the present disclosure, determining the expression similarity between the current frame video image and the corresponding previous frame video image may include:
acquiring a vector value of an expression feature vector of a current frame video image;
and based on a preset convolution network, determining the expression similarity between the current frame video image and the previous frame video image according to the vector value of the expression characteristic vector of the current frame video image and the vector value of the pre-stored expression characteristic vector of the previous frame video image.
In this embodiment, when the emotion recognition result of the previous frame video image is determined, the vector value of the expression feature vector of the previous frame video image is also determined and cached, so that it can be used directly when determining the expression similarity between the current frame video image and the corresponding previous frame video image.
In one embodiment of the present disclosure, a vector value of an expression feature vector of a current frame video image may be obtained by a feature extractor, and an expression similarity between the current frame video image and a previous frame video image may be determined by a fast switch.
The feature extractor is used to extract the vector value of the expression feature vector from the current frame video image. The required feature extraction capability is achieved with relatively low computational overhead through a specially designed lightweight convolutional neural network. Recognizing a basic expression from only the user's face region places high demands on the feature extractor. For this reason, a deep-learning-based network model is designed to build a powerful feature extractor: ResNet18 is selected as the base model, since it balances accuracy against resource overhead better than other models. The modified network is named EsNet26, and its structure is shown in Table 5 below.
TABLE 5
[Table 5, giving the EsNet26 network structure, is shown as an image in the original document and is not reproduced here.]
The principles and strategies for modifying the base model mainly include the following:
(1) The 7x7 convolution kernel of the first layer is replaced with a 3x3 kernel and the downsampling is removed, to prevent the feature map from shrinking too fast and feature information from being lost in the shallow convolutions.
(2) These modifications significantly reduce the computational overhead, but they also reduce feature extraction performance. To compensate, the convolutional network is deepened to 26 layers.
(3) Because the camera is fixed on the wearable device, the captured region is relatively fixed for the same control player, so the input image does not need to be large; the input size of the original ResNet18 model can therefore be reduced from 224x224 to 64x64 (a sketch of these modifications follows this list).
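The following sketch, assuming PyTorch and torchvision, illustrates modifications (1) and (3) applied to a stock ResNet18. The exact 26-layer EsNet26 structure is given only in Table 5 (an image) and is not reproduced, so this is not the patented network itself, only an illustration of the modification strategy.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_modified_extractor(num_classes=7):
    """Illustrative ResNet18 modification in the spirit of EsNet26.

    Applies modification (1): a 3x3, stride-1 first convolution with the
    early downsampling removed, and modification (3): a 64x64 input size.
    The deepening to 26 layers (modification (2)) is not shown here.
    """
    model = resnet18(num_classes=num_classes)
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()   # cancel the early downsampling
    return model

x = torch.randn(1, 3, 64, 64)       # 64x64 face crop instead of 224x224
logits = make_modified_extractor()(x)
```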
During recognition, the fast switcher automatically judges whether the large amount of subsequent convolution computation is necessary; if it is not, the fast switcher chooses to bypass that computation so as to speed up the operation of the system.
The convolutional network is the core part of the fast switcher and may be composed of 10 convolutional layers, 1 pooling layer and a loss layer. The pooling layer outputs 128-dimensional feature vectors. The network structure of the fast switcher's convolutional network is shown in Table 6 below, where conv1_x is a stack of 3 residual blocks.
TABLE 6
[Table 6, giving the network structure of the fast switcher's convolutional network, is shown as an image in the original document and is not reproduced here.]
The loss layer calculates the distance between the features of two adjacent frames and computes the corresponding loss. The loss layer of the convolutional network is mainly realized by a contrastive loss function, calculated as follows:
Loss = y·d² + (1 − y)·max(margin − d, 0)²
where y is the label of the input sample: if the input is a positive sample, that is, the two input frames show the same expression, y takes the value 1; otherwise y takes the value 0. d is the expression similarity (feature distance) of the two adjacent frames; the smaller d is, the more similar the two frames are. margin is a hyperparameter acting as a penalty term for negative samples: for a positive sample, the squared feature distance is the loss value; for a negative sample, no loss is generated only when the feature distance is greater than margin, and otherwise the smaller the feature distance, the larger the loss. At training time, margin is set to 5 by default.
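A minimal sketch of this contrastive loss, assuming PyTorch, is given below; averaging over the batch is an implementation choice not specified in the text.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a, feat_b, y, margin=5.0):
    """Contrastive loss over two adjacent-frame feature vectors (sketch).

    y = 1 when both frames show the same expression (positive pair),
    y = 0 otherwise; d is the Euclidean feature distance. Positive pairs
    are penalised by d^2, negative pairs by max(margin - d, 0)^2.
    """
    d = F.pairwise_distance(feat_a, feat_b)
    loss = y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)
    return loss.mean()
```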
In one example, for the current frame video image, after the first 7 convolutional layers of the feature extractor have finished computing, the intermediate result is obtained; at this point the subsequent computation of the feature extractor is suspended and the data stream enters the fast switcher. The fast switcher further extracts vector values from the intermediate result of the feature extractor and judges the expression similarity between the current frame video image and the previous frame video image, thereby determining whether the current frame should resume the suspended processing in the feature extractor or be directly assigned a final class label. The input of this component is the intermediate output of the 7th convolutional layer of the feature extractor; the switcher also buffers the output features of the last frame that failed to trigger the fast path, for comparison with the next frame.
Step S4084, determining the emotion recognition result of the current frame video image according to the expression similarity.
In one embodiment of the present disclosure, determining the emotion recognition result of the current frame video image according to the expression similarity includes:
under the condition that the expression similarity is smaller than or equal to the similarity threshold, taking the emotion recognition result of the previous frame of video image as the emotion recognition result of the current frame of video image;
under the condition that the expression similarity is larger than a similarity threshold, acquiring a vector value of a face emotion feature vector of the current frame video image based on a preset second deep convolutional neural network; and based on a preset second classifier, obtaining an emotion recognition result of the current frame video image according to the vector value of the face emotion characteristic vector of the current frame video image.
When the expression similarity is greater than the similarity threshold, the suspended processing in the feature extractor can be resumed; that is, the feature extractor continues to extract the vector value of the expression feature vector of the current frame video image, and the second classifier determines the emotion recognition result corresponding to the vector value of the expression feature vector finally output by the feature extractor.
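The decision logic of this frame-sampler / fast-switcher path can be sketched in Python as follows; the callables `switcher_net`, `extractor_tail` and `classifier` stand in for the networks described above and are assumptions of this sketch.

```python
import numpy as np

def classify_frame(intermediate, prev_state, extractor_tail, classifier,
                   switcher_net, threshold):
    """Decision logic of the fast switcher (illustrative sketch).

    `intermediate` is the layer-7 output of the feature extractor for the
    current frame; `prev_state` buffers the switcher features and emotion
    label of the previous sampled frame.
    """
    feats = switcher_net(intermediate)                       # 128-d comparison features
    d = float(np.linalg.norm(feats - prev_state["feats"]))   # smaller d = more similar
    if d <= threshold:
        label = prev_state["label"]                          # skip remaining convolutions
    else:
        label = classifier(extractor_tail(intermediate))     # resume suspended layers
        prev_state["feats"] = feats                          # buffer features of this frame
    prev_state["label"] = label
    return label
```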
In one embodiment of the present disclosure, the method further includes a step of training the convolutional network, including steps S550 to S560 as follows:
step S550, a third training sample is obtained.
One third training sample reflects the mapping relation between the vector values of the expression feature vectors corresponding to the two frames of facial images and the labels, and the labels reflect whether the two frames of facial images in the corresponding third sample belong to the same type of expression or not;
and step S560, training according to the vector values of the expression feature vectors of the two frames of facial images of the third training sample and the label of the third training sample to obtain a convolution network.
In an embodiment of the present disclosure, training, according to vector values of expression feature vectors of two frames of facial images of a third training sample and a label of the third training sample, obtaining a convolutional network may include:
determining an expression similarity prediction expression of a third training sample by taking a third network parameter of the convolutional network as a variable according to vector values of expression feature vectors of two frames of facial images of the third training sample;
constructing a third loss function according to the expression similarity prediction expression of the third training sample and the label of the third training sample;
and determining a third network parameter according to the third loss function to obtain the convolutional network.
In this embodiment, the feature extractor and the fast switcher may share multiple convolutional layers; the two network models can be combined into one network with two sub-branches. The whole model can be trained jointly so that the convolutional layers are shared, and the feature extractor and the fast switcher are obtained simultaneously after training is completed.
And step S4085, obtaining the score of the control player on the emotion evaluation index according to the emotion recognition result of the video image obtained by sampling the facial video signal.
In this embodiment, the emotion recognition results of all video images obtained by sampling the face video signal may be determined, and the score of the control player for the emotion assessment index may be determined according to the emotion recognition results of all video images.
In an embodiment of the disclosure, obtaining the emotion score of the control player according to the emotion recognition result of the video image obtained by sampling the video information includes:
determining an emotion recognition result of the face video signal according to an emotion recognition result of a frame video image obtained by sampling the face video signal based on a voting method;
and obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the face video signal.
In this embodiment, unnecessary convolution computation can be avoided by the fast switcher, and redundant video frames can be skipped directly by the frame sampler without missing emotion changes. Thus, the method of this embodiment can improve emotion recognition efficiency.
In one embodiment of the present disclosure, the physiological information data includes a brain electrical signal and a facial video signal. Then, according to the method in the foregoing embodiment, the emotion recognition results determined according to the electroencephalogram signals and the emotion recognition results of all video images obtained by sampling the facial video signals may be respectively obtained; and then, a score of the emotion evaluation index is obtained according to the emotion recognition results.
Specifically, the score of the emotion evaluation index may be obtained by applying a voting method to the emotion recognition result determined from the electroencephalogram signal and the emotion recognition results of all video images obtained by sampling the facial video signal.
In the embodiment, the emotion recognition results of the two types of signals are evaluated through a voting method to obtain scores of emotion evaluation indexes, so that the emotion recognition precision can be improved.
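A minimal sketch of the voting step is given below, assuming equal weighting of every frame-level result and of the electroencephalogram result; the weighting is not specified in the text.

```python
from collections import Counter

def fuse_by_voting(frame_emotions, eeg_emotion=None):
    """Majority-vote fusion of emotion recognition results (sketch).

    `frame_emotions` is the list of per-frame labels obtained by sampling
    the facial video signal; `eeg_emotion`, if provided, is the label
    determined from the electroencephalogram signal and is added as one
    more vote.
    """
    votes = list(frame_emotions)
    if eeg_emotion is not None:
        votes.append(eeg_emotion)
    return Counter(votes).most_common(1)[0][0]

video_vote = fuse_by_voting(["neutral", "neutral", "happy", "neutral"])
overall = fuse_by_voting(["neutral", "happy", "happy"], eeg_emotion="happy")
```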
And step S430, obtaining the control score of the control player according to the score of the control player on the emotion evaluation index.
And inputting the score of the control player on the emotion evaluation index into a preset structural equation model to obtain the control score of the control player.
In one embodiment of the present disclosure, at least one emotion estimation index may be set in advance. The score for each evaluation index may be obtained according to the corresponding embodiment described above.
For example, an emotion endogenous evaluation index, an emotion exogenous evaluation index and an emotion subjective evaluation index are preset: a score e1 of the emotion endogenous evaluation index is obtained from physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardiograph acquisition device; a score e2 of the emotion exogenous evaluation index is obtained from physiological information data collected by the eye movement tracking device, the video acquisition device and the voice acquisition device; and a score e3 of the emotion subjective evaluation index is obtained from the control player's subjective evaluation of the emotional state in the control behavior data.
The score of the control player for each emotion evaluation index is input into a preset structural equation model, and the control score of the control player can be obtained.
In step S440, a set operation is performed according to the manipulation score obtained in step S430.
In one embodiment, the operation of performing the setting in step S440 may include a first operation of outputting the manipulation score.
Outputting the manipulation score may include: displaying the manipulation score on the display device of the control ergonomics device 140 or on a display device connected to the device 140.

Outputting the manipulation score may also include: sending the manipulation score to a terminal device registered by a user who subscribed to the manipulation score, or to the user account of that user.
The user is, for example, a manipulation rater; the user may register the device information of the terminal device with the device 140, so that after obtaining the manipulation score of a control player, the device 140 can send the manipulation score to that terminal device.
In the case of developing the control analysis application in accordance with the method of the present embodiment, a control rater may install a client of the application on a terminal device of the user, and obtain a control score of a control player by logging in a user account registered in the application.
The terminal device is, for example, a PC, a notebook computer, or a mobile phone, and is not limited herein.
In one embodiment, the operation of performing the setting in step S440 may include a second operation of providing a result of whether the manipulation player is selected according to the manipulation score. According to the embodiment, the selection of the operator can be realized. Here, a score threshold value may be set, and in a case where the manipulation score is higher than or equal to the score threshold value, the manipulation player may be judged to be eligible for selection. In this embodiment, the operation of executing the setting may further include: and outputting the selection result in an arbitrary mode. The arbitrary means includes displaying, printing, transmitting, and the like.
In one embodiment, the operation of performing the setting in step S440 may include a third operation of determining the manipulation level of the manipulation player according to the manipulation score. Here, a look-up table reflecting a correspondence between the manipulation score and the manipulation level may be preset to determine the manipulation level of the corresponding manipulation player from the manipulation score for an arbitrary manipulation player and the look-up table. In this embodiment, the operation of executing the setting may further include: the manipulation level is output in an arbitrary manner.
In one embodiment, the operation of performing setting in step S440 may include a fourth operation of determining a manipulation task performed by the manipulation player according to the manipulation score. Here, a comparison table reflecting the correspondence between the manipulation scores and the manipulation tasks may be preset to determine the manipulation task to be executed by the corresponding manipulation player based on the manipulation score for any manipulation player and the comparison table. In this embodiment, the operation of performing setting may further include: the manipulation task is output in an arbitrary manner.
In one embodiment, the operation of performing setting in step S440 may include a fifth operation, that is, selecting a control combination that makes the control score meet the setting requirement according to the control score of the same control player for controlling the target object through different motion control devices, where one control combination includes the matched control player and motion control device. In this embodiment, the operation of executing the setting may further include: the manipulated combination is output in an arbitrary manner.
In this embodiment, since the same control player has different proficiency levels for different motion control devices, not only the control combination that makes the control score satisfy the setting requirement but also the motion control device most suitable for the control player can be obtained. In this example, the setting requirement is, for example, that the manipulation score is equal to or larger than a set value.
In one embodiment, the user may be allowed to select the operation to be performed in step S440, and thus, the method may further include: providing a setting entrance in response to an operation of setting an application scene; acquiring an application scene input through the setting entrance, wherein the application scene reflects an operation to be executed based on the control score; and determining the operation content of the set operation according to the input application scene.
For example, according to an input application scenario, the operation content of the operation determined to be set includes at least one of the above operations.
As can be seen from the above steps S410 to S440, the method of this embodiment may determine the control score for the control player according to the control behavior data and the physiological information data generated when the control player controls the target object to perform the target task, which may greatly save labor cost and time cost, greatly reduce the dependence on expert experience, and improve the accuracy and effectiveness of the analysis.
In addition, the operation score can be used for relevant personnel to select the operation personnel, grade the operation personnel, and/or carry out matching setting between the operation personnel and the motion control device.
In one embodiment of the present disclosure, control behavior data generated by controlling the player to control the target object to perform the target task may also be acquired.
In this embodiment, the control behavior data may be provided by task performing device 120, or the base data for calculating the control behavior data may be provided by task performing device 120 to control ergonomics device 140, and the control behavior data may be calculated by control ergonomics device 140 based on the base data.
The control behavior data may include data reflecting the control behavior of the task execution device 120 when the control player performs the target task, and may further include a subjective evaluation result of the cognitive state of the control player after the control player performs the target task. Wherein, the data reflecting the control behavior of the task performing device 120 in the process of executing the target task by the control player may include: a moving trajectory of the target object, an acceleration of the joystick, an angle of the joystick, and the like.
The subjective evaluation scale of the control player for the mental fatigue state after the target task is performed can be shown in the following table 1, and the subjective evaluation scale for the brain load state can be shown in the following table 2 and/or table 3. The operation player can subjectively evaluate the self-cognition state according to a subjective evaluation scale.
TABLE 1
Evaluation rating | Control player performance
1 | Completely clear-headed and energetic
2 | Very active, able to respond quickly
3 | Generally sober
4 | Somewhat tired, not clear-headed
5 | Moderately tired, less active
6 | Extremely tired, difficulty concentrating
7 | Exhausted, unable to work effectively
TABLE 2
[Table 2 is shown as an image in the original document and is not reproduced here.]
TABLE 3
[Table 3 is shown as an image in the original document and is not reproduced here.]
Then, the method may further include steps S610 and S620 as follows:
and step S610, obtaining the scores of the control players for the set mental fatigue evaluation indexes and the set brain load evaluation indexes according to the control behavior data and the physiological information data obtained in the step S410.
The score of each evaluation index may reflect the cognitive ability of the operator with respect to the target task. Each evaluation index may be set in advance.
In one embodiment of the present disclosure, obtaining the score of the manipulation player for the set mental fatigue evaluation index, the score for the set brain load evaluation index, and the score for the set emotion evaluation index from the manipulation behavior data and the physiological information data may include steps S6031 to S6033 shown below:
step S6031, determining vector values of a first physiological feature vector corresponding to a mental fatigue evaluation index and vector values of a second physiological feature vector corresponding to a brain load evaluation index, which are preset, according to the control behavior data and the physiological information data.
The first physiological feature vector comprises a plurality of first physiological features affecting the mental fatigue evaluation index. The second physiological feature vector includes a plurality of second physiological features that affect the brain burden evaluation index.
In this embodiment, the vector value of the first physiological feature vector and the vector value of the second physiological feature vector may be obtained through corresponding convolution networks.
For the vector value of any physiological feature vector, the feature value of each physiological feature contained in the physiological feature vector can be reflected.
Because brain rhythms differ between individuals, the brain rhythms of the control player can be analyzed to obtain the feature values of the electroencephalogram features. Then, when the physiological information data includes an electroencephalogram signal and any physiological feature vector includes an electroencephalogram feature, the step of determining the vector value of that physiological feature vector may include steps S6041 to S6043 shown below:
step S6041, the electroencephalogram power spectrum of the electroencephalogram signal is obtained and used as a target electroencephalogram power spectrum.
Step S6042, determining a power spectrum classification corresponding to the target electroencephalogram power spectrum from a plurality of preset power spectrum classifications as a target power spectrum classification.
Step S6043, determining the vector value of the corresponding physiological characteristic vector according to the brain rhythm corresponding to the target power spectrum classification.
In one embodiment of the present disclosure, the method may further include the step of obtaining a power spectrum classification, including steps S6051-S6053 as follows:
step S6051, the electroencephalogram power spectrums of a plurality of reference electroencephalogram signals are obtained and used as reference electroencephalogram power spectrums.
In this embodiment, a time-frequency conversion algorithm (e.g., a fast fourier transform algorithm) may be adopted to convert each reference electroencephalogram signal into a corresponding frequency signal, so as to obtain a reference electroencephalogram power spectrum corresponding to the reference electroencephalogram signal.
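As an illustration, a reference EEG power spectrum could be computed with a fast Fourier transform as sketched below; the 250 Hz sampling rate and the Hann window are assumptions of this sketch.

```python
import numpy as np

def eeg_power_spectrum(signal, fs=250.0):
    """Convert an EEG segment into a power spectrum with the FFT (sketch).

    `signal` is a 1-D array of EEG samples and `fs` the sampling rate in Hz.
    Returns the frequencies and the power at each frequency, which can serve
    as a reference EEG power spectrum.
    """
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    power = np.abs(spectrum) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, power
```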
Step S6052, based on multiple clustering algorithms, clustering is performed on the multiple reference electroencephalogram power spectrums respectively to obtain clustering results corresponding to each clustering algorithm.
In this embodiment, a plurality of clustering algorithms may be used to perform cluster analysis on the reference electroencephalogram power spectrum, so as to comprehensively describe and retrieve differences between rhythms in the plurality of reference electroencephalogram power spectrums.
Because the clustering algorithms use random initialization, the clustering results obtained by different clustering algorithms analyzing the same reference electroencephalogram power spectra may differ, and even the results obtained by the same clustering algorithm analyzing the same reference electroencephalogram power spectra multiple times may differ.
And step S6053, based on consensus clustering algorithm, obtaining a plurality of power spectrum classifications according to the clustering result corresponding to each clustering algorithm.
Wherein each power spectrum classification comprises at least one reference electroencephalogram power spectrum.
In this embodiment, based on the consensus clustering algorithm, a final clustering result of a plurality of reference electroencephalogram power spectrums can be obtained according to a clustering result corresponding to each clustering algorithm, and a plurality of power spectrum classifications can be obtained according to the final clustering result.
The consensus clustering algorithm is a general method for evaluating stability and robustness aiming at multiple operations of multiple or single clustering algorithms, has strong capability of integrating multiple clustering results, and can provide better clustering results than a single clustering scheme.
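A minimal sketch of such a consensus (co-association) clustering over several base clustering algorithms is given below; the particular base algorithms and the number of classes are assumptions, not values taken from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering

def consensus_clustering(spectra, n_classes=4, seed=0):
    """Consensus clustering of reference EEG power spectra (sketch).

    `spectra` is an (N, F) array of reference power spectra. Several base
    clusterings are combined through a co-association matrix, and a final
    clustering of that matrix yields the power-spectrum classifications.
    """
    algos = [
        KMeans(n_clusters=n_classes, n_init=10, random_state=seed),
        AgglomerativeClustering(n_clusters=n_classes),
        SpectralClustering(n_clusters=n_classes, random_state=seed),
    ]
    n = len(spectra)
    co_assoc = np.zeros((n, n))
    for algo in algos:
        labels = algo.fit_predict(spectra)
        co_assoc += (labels[:, None] == labels[None, :]).astype(float)
    co_assoc /= len(algos)
    # cluster the consensus (co-association) matrix to get the final partition
    final = AgglomerativeClustering(n_clusters=n_classes, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - co_assoc)
    return final
```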
In this embodiment, a plurality of physiological characteristics included in any physiological characteristic vector may be preset. For example, the expert selects at least part of initial physiological features from preset initial physiological features according to experiments or specific requirements to form corresponding physiological feature vectors; or screening at least part of initial physiological characteristics with high correlation with the cognitive state of the control player from preset initial physiological characteristics by using a correlation analysis method to form a corresponding physiological characteristic vector.
The physiological characteristics in this embodiment may include at least one of electroencephalogram characteristics, electrodermal characteristics, cardiac characteristics, eye movement characteristics, image characteristics, voice characteristics, and behavior characteristics.
The electroencephalogram features may include brain rhythm features and/or EEG assessment features. The feature values of the brain rhythm features can be obtained by applying a wavelet transform to the electroencephalogram signal. In one example, the feature value of an EEG assessment feature can be obtained by comparing the results of four information-entropy calculations on the electroencephalogram signal; or extracted across brain regions by a spectral coherence estimation technique in time-frequency space; or extracted from the whole brain by a global synchronization estimation method across multi-channel electroencephalograms.
The eye movement characteristics may include at least one of eye movement characteristics reflecting blink time, eye movement characteristics reflecting blink rate, eye movement characteristics reflecting pupil diameter, eye movement characteristics reflecting gaze time, eye movement characteristics reflecting eye closure time, eye movement characteristics reflecting saccade velocity.
The electrodermal features may include time-domain electrodermal features, which may include amplitude means and/or variances of electrodermal data, and/or frequency-domain electrodermal features, which may include a Power Spectral Density (PSD) of the sympathetic nervous system (EDASymp) band.
The electrocardiogram characteristics may include at least one of time-domain electrocardiogram characteristics, frequency-domain electrocardiogram characteristics, and frequency-domain respiration characteristics. The time-domain electrocardiographic features may include at least one of mean Heart Rate (HR), Heart Rate Variability (HRV), and NN interval Standard Deviation (SDNN). The frequency domain cardiac electrical features may include a Power Spectral Density (PSD) of Low Frequency (LF) and/or High Frequency (HF) bands. The frequency domain respiratory characteristics may include a Power Spectral Density (PSD) of a primary respiratory frequency (DRF) band of 0-2 Hz and Respiratory Frequency (RF) bands spaced 0.5Hz apart.
The speech features may include at least one of speech features reflecting overall articulation time, speech features reflecting overall dwell time, speech features reflecting overall dialog time, speech features reflecting number of pauses, speech features reflecting average dwell time, speech features reflecting articulation rate, speech features reflecting clear articulation rate, speech features reflecting percentage of dysfluent articulation.
The image features may include image features reflecting a percent closed eye over a fixed time window (PERCLOS), image features reflecting an aspect ratio, image features reflecting a mouth ratio, image features reflecting a yawning number.
The behavior feature may include at least one of a behavior feature reflecting a movement trajectory of the manipulation target object, a behavior feature reflecting an acceleration of the joystick, and a behavior feature reflecting an angle of the joystick.
Since it is difficult to estimate the importance of each initial physiological characteristic related to the cognitive state of the control player, in one embodiment of the present disclosure, a correlation analysis method may be used to screen out at least some initial physiological characteristics that are highly correlated with the cognitive state of the control player from preset initial physiological characteristics, so as to form a corresponding physiological characteristic vector.
Specifically, the first physiological feature vector or the second physiological feature vector may be used as the target physiological feature vector, and then the method may further include a step of obtaining the target physiological feature vector, including steps S6061 to S6064 as follows:
step S6061, a third training sample is obtained.
One third training sample corresponds to one testing person, and one third training sample comprises control behavior data and physiological information data corresponding to the testing person.
In step S6062, for each third training sample, a preset feature value of each physiological feature is determined.
Step S6063, selecting a set number of physiological characteristics from the physiological characteristics according to the characteristic value of each physiological characteristic of the third training sample by using a canonical correlation analysis algorithm, as target physiological characteristics.
The canonical correlation analysis (CCA) algorithm can automatically learn the physiological features that best reflect common intrinsic processes.
For example, in the case where the initial physiological characteristics include an electroencephalogram characteristic and an electrocardiograph characteristic, a characteristic value X1 of each electroencephalogram characteristic and a characteristic value X2 of each electrocardiograph characteristic of the third training sample are determined.
X1 = [x1_1, x1_2, …, x1_L], X1 ∈ R^(U×L)

X2 = [x2_1, x2_2, …, x2_L], X2 ∈ R^(V×L)

where L is the number of third training samples, U is the data dimension of the electroencephalogram features, and V is the data dimension of the electrocardiographic features.
Using the canonical correlation analysis algorithm, optimal weight vectors W*_1 and W*_2 are sought such that the canonical correlation between X1 and X2 is maximized:

(W*_1, W*_2) = argmax_{W_1, W_2} corr(W_1^T·X1, W_2^T·X2)

The solution of CCA is a pair of canonical variates W*_1^T·X1 and W*_2^T·X2; each weight vector W*_i spans a subspace in the i-th data space that maximizes the canonical correlation between the two variates. The canonical correlation equations can be solved as a generalized eigenvalue problem (the original gives this equation only as an image), where Λ is the diagonal matrix formed by all the generalized eigenvalues.

According to the solved W*_1 and W*_2, the target physiological features of the corresponding physiological feature vector to be constructed can be selected from the electroencephalogram features and the electrocardiographic features.
For another example, when the initial physiological features further include bioelectric features, the canonical correlation analysis algorithm may be applied again: based on the feature values of the target physiological features already selected from the electroencephalogram and electrocardiographic features and the feature values of each bioelectric feature of the third training sample, the target physiological features of the physiological feature vector to be constructed are reselected from those already-selected features and the bioelectric features.
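As an illustrative sketch only, the CCA-based screening could be implemented with scikit-learn as follows; ranking features by the magnitude of their canonical weights and the number of features kept are assumptions of this sketch.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def select_features_by_cca(X1, X2, n_keep=10, n_components=2):
    """Screen physiological features with canonical correlation analysis (sketch).

    X1 (L, U) holds the EEG feature values and X2 (L, V) the ECG feature
    values of the L training samples (samples as rows). Features whose
    canonical weights have the largest magnitudes are kept.
    """
    cca = CCA(n_components=n_components)
    cca.fit(X1, X2)
    w1 = np.abs(cca.x_weights_).sum(axis=1)   # importance of each EEG feature
    w2 = np.abs(cca.y_weights_).sum(axis=1)   # importance of each ECG feature
    keep1 = np.argsort(w1)[::-1][:n_keep]
    keep2 = np.argsort(w2)[::-1][:n_keep]
    return keep1, keep2
```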
And step S6064, obtaining a target physiological characteristic vector according to the target physiological characteristics.
When the target physiological feature vector is the first physiological feature vector, the target physiological features are the first physiological features; when the target physiological feature vector is the second physiological feature vector, the target physiological features are the second physiological features.
In an embodiment of the present disclosure, the first physiological feature included in the first physiological feature vector may be completely the same as, may be partially the same as, or may be completely different from the second physiological feature included in the second physiological feature vector, and is not limited herein.
Step S6032, inputting the vector value of the first physiological characteristic vector into a preset mental fatigue identification model, and obtaining the score of the control player for the set mental fatigue evaluation index.
The mental fatigue identification model can reflect the mapping relation between the first physiological characteristic vector and the score of the mental fatigue evaluation index.
In one embodiment of the present disclosure, the method may further include the step of obtaining a mental fatigue recognition model, including steps S510 to S520 as follows:
step S510, a first training sample is obtained.
One first training sample corresponds to one tester, and one first training sample reflects the mapping relation between the vector value of the first physiological characteristic vector corresponding to the tester and the score of the known mental fatigue evaluation index.
The score of the known mental fatigue evaluation index in the first training sample can be determined according to the subjective evaluation result of the corresponding tester on the mental fatigue state of the tester.
And S520, training the Gaussian model according to the first training sample to obtain a mental fatigue recognition model.
The Gaussian model has a rigorous statistical foundation and adapts well to complex problems. Its performance is superior to current state-of-the-art supervised learning methods such as ANN and SVM, and it is easy to implement while retaining good performance and flexible non-parametric inference, thereby overcoming some of the shortcomings of ANN and SVM.
In an embodiment of the present disclosure, training the gaussian model according to the first training sample to obtain the mental fatigue recognition model may include steps S521 to S523 as follows:
step S521, determining a mental fatigue score prediction expression of the first training sample by using the first network parameter of the gaussian model as a variable according to the vector value of the first physiological feature vector of the first training sample.
Step S522, a first loss function is constructed according to the mental fatigue score prediction expression of the first training sample and the score of the mental fatigue evaluation index of the first training sample.
Step S523, determining a first network parameter according to the first loss function, so as to obtain a mental fatigue identification model.
The Gaussian process is determined by its mean function m(x2) and kernel function k(x2, x2′), and can be expressed as:
f~GP(m(x2),k(x2,x2′))
a Gaussian (GP) model is a probabilistic model in function space, and GP can be viewed as a process that defines the distribution of functions, with inferences made directly in function space. To identify mental fatigue states, a constant mean is used for modeling. The kernel function characterizes the correlation of different data points in the GP, and can be learned through training data. The kernel function used in this embodiment is a square exponential covariance function defined as follows:
k(x2, x2′) = σ²·exp(−(1/2)·(x2 − x2′)^T·P⁻¹·(x2 − x2′))
where x2 and x2′ are the vector values of the first physiological feature vectors of any two input first training samples, σ² is the signal variance, and P is an automatic relevance determination (ARD) diagonal matrix with value

P = diag(ℓ_1², ℓ_2², …, ℓ_d²)
where d is the dimension of the input space. In this prior model, the signal variance σ² and the ARD length scales ℓ_1, …, ℓ_d are the hyperparameters.
The dataset is D = {(x2_i, y2_i) | i = 1, 2, …, n}. For a new data point x2* (the vector value of the first physiological feature vector of a first training sample), the value of f at x2* is treated under a Gaussian prior over functions; that is, any set of points evaluated through the function has a multivariate Gaussian probability density. Assuming the hyperparameters of the prior GP are Θ, the class label of the new data point can be determined by calculating its class probability:

p(y2* | x2*, D, Θ) = ∫ p(y2* | f*, Θ)·p(f* | x2*, D, Θ) df*

p(f* | D, x2*, Θ) = ∫ p(f, f* | D, x2*, Θ) df = ∫ p(f | D, Θ)·p(f* | f, x2*, Θ) df

f = [f_1, f_2, …, f_n]

p(f* | f, x2*, Θ) = p(f, f* | x2*, X2, Θ) / p(f | X2, Θ)

[A further intermediate expression is shown only as an image in the original.]
Writing the dependence of f on x2 implicitly, the Gaussian prior can be expressed as:

f | X2, Θ ~ N(μ, K)

where μ is the mean, which can generally be taken to be 0, and K, with entries K_ij = k(x2_i, x2_j), is the covariance matrix of X2. The probability term p(y2_i | f_i, Θ) is given in the original by a formula shown only as an image.
It follows that it is not appropriate to assume that p(Y2 | f, X2, Θ) is Gaussian; a non-Gaussian probability term makes the posterior non-Gaussian, so such methods generally approximate the non-Gaussian posterior with a Gaussian posterior distribution.
A GP can be determined entirely by the chosen mean function m(x2) and kernel function k(x2, x2′); in general, the available data set is used to determine the properties of the Gaussian model, that is, the values of the hyperparameters. The hyperparameter values can be determined by calculating the probability of the data set. The log marginal probability is as follows:
log p(y2 | X2, Θ) = −(1/2)·y2^T·K⁻¹·y2 − (1/2)·log|K| − (n/2)·log 2π
Selection of the hyperparameters can be achieved by maximizing the log marginal probability. In one embodiment of the present disclosure, the hyperparameters may be optimized based on an adaptive flower pollination algorithm.
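For illustration, a Gaussian-process model of the mental fatigue score with an ARD squared-exponential kernel can be sketched with scikit-learn as below; note that this sketch tunes the hyperparameters with the library's built-in marginal-likelihood optimizer rather than the adaptive flower pollination algorithm described next.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def fit_fatigue_gp(X2, y2):
    """Gaussian-process model of the mental fatigue score (sketch).

    X2 is an (n, d) array of first physiological feature vectors and y2 the
    known mental fatigue scores. An ARD squared-exponential kernel is used,
    one length scale per input dimension.
    """
    d = X2.shape[1]
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(d))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X2, y2)
    return gp   # gp.predict(X_new) yields the predicted fatigue score
```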
The flower pollination algorithm (FPA) is a swarm intelligence optimization algorithm based on the pollination mechanism of plants. Self-pollination occurs over short physical distances and thus corresponds to a local search process. Cross-pollination is, in most cases, long-distance pollination carried out by pollinators and thus corresponds to a global search process. The actual pollination process of plants is quite complex; to keep the FPA simple and easy to implement, it is assumed that each plant has only one flower and each flower releases only one pollen gamete, where each pollen gamete represents one candidate solution to the problem. Based on these characteristics of flower pollination, the algorithm is assumed to satisfy the following idealized rules:
1) when cross pollination is carried out, pollen is spread by pollinators through levy flight, and the process is mapped to be a global search process.
2) Self-pollination maps to a local search process.
3) The constancy of a flower is considered the probability of reproduction, which correlates with the similarity of flowers during pollination.
The switch between pollination modes is controlled by a switching probability p ∈ [0, 1]: when the random number r < p, self-pollination is performed; otherwise, cross-pollination is performed.
In the cross-pollination process, pollinators follow the Levy flight rule and pollinate over relatively long flight paths; this process ensures pollination and reproduction of the fittest, denoted g*. The cross-pollination process can be expressed mathematically as:
x_i^(t+1) = x_i^t + L·(g* − x_i^t)

where x_i^t is the solution at the t-th iteration and g* is the current optimal solution found among all solutions of the current iteration. The parameter L is the step size; L follows a Levy distribution with exponent λ = 1.5, a constant.
The self-pollination process can be expressed as:

x_i^(t+1) = x_i^t + ε·(x_j^t − x_k^t)

where x_j^t and x_k^t are different solutions in the same iteration. If x_j^t and x_k^t come from the same population and ε is drawn from a uniform distribution on [0, 1], the process becomes a local random walk. p = 0.8 is selected as the switching probability between the global search and the local search.
The flower pollination algorithm (FPA) performs well, but inevitably suffers from a large amount of computation and a long convergence time. The key steps of the conventional FPA are the global search and the local search; an adaptive approach is therefore proposed to make the search process more robust.
For global search, the key step is the setting of a Levy step size L, which is defined as a function of λ. In the conventional algorithm, λ is generally regarded as a constant, and is optimally set to 1.5. However, this way of fixing the parameters is not an optimal setting for all problems, and therefore, an adaptive Levy step size can be used to improve the overall performance of the FPA, and it is proposed to use an adaptive Levy step size factor as follows:
[The adaptive Levy step-size factor is given by a formula shown only as an image in the original; it depends on the distance between the current solution and the optimal solution.]

where x_i^t is the solution currently being corrected and g* is the optimal solution in the current iteration. Since the 2-norm term is unbounded and may produce a very large Levy step size, a projection matrix A is used to map the result into an acceptable range.
In this method, the adaptive Levy step size is related to the distance between the current solution and the optimal solution: a long distance leads to a large step size for global search, while a short distance allows a more precise, accurate search. For the local search, the traditional FPA relies on local pollination rather than global pollination; here another Levy flight strategy is proposed for the local search, as shown below:
[The local-search update formula is shown only as an image in the original.]

where x_i^(t+1) is the corrected solution, g* is the current optimal solution, γ is the local search step size (restricted to a small range), α is a constant, and L is the Levy step size.
The adaptive flower pollination process comprises the following steps:

[The step-by-step procedure is shown only as an image in the original document.]
in the embodiment, the self-adaptive pollination algorithm (AFPA) is used for optimizing the hyper-parameter applied to the Gaussian model, so that the accuracy of the mental fatigue recognition model can be improved.
And step S6033, inputting the vector value of the second physiological characteristic vector into a preset brain load identification model, and obtaining the score of the control player on the brain load evaluation index.
The brain load identification model can reflect the mapping relation between the second physiological characteristic vector and the grade of the brain load evaluation index.
In one embodiment of the present disclosure, the method may further include steps S6035 to S6036 as follows:
and step S6035, determining a vector value of the depth characteristic vector according to the control behavior data and the physiological information data based on a preset depth belief network.
In this embodiment, the control behavior data and the physiological information data obtained in step S610 may be directly input into a depth belief network trained in advance, and the output of the depth belief network may be used as a vector value of the depth feature vector.
Step S6036, determine a vector value of the stitching feature vector according to the vector value of the second physiological feature vector and the vector value of the depth feature vector.
And the splicing characteristic vector is obtained by splicing the second physiological characteristic vector and the depth characteristic vector.
Step S6033, inputting the vector value of the second physiological feature vector into a preset brain burden recognition model, and obtaining the score of the control player for the brain burden evaluation index may further include:
and inputting the vector value of the spliced feature vector into the brain load recognition model to obtain the score of the control player on the brain load evaluation index.
In one embodiment of the present disclosure, the method may further include the step of obtaining a brain burden recognition model, including steps S530 to S540 as follows:
in step S530, a second training sample is obtained.
And step S540, training the Gaussian kernel vector machine according to the second training sample to obtain a brain load identification model.
One second training sample corresponds to one tester, and each second training sample reflects the mapping relation between the vector value of the splicing feature vector corresponding to the tester and the score of the known brain load evaluation index.
In this embodiment, the vector value of the splicing feature vector of the second training sample may be obtained according to the control behavior data and the physiological information data generated when the corresponding tester executes the corresponding target task.
In this embodiment, in order to reduce the influence of the proficiency of the test personnel on the brain load assessment result, the test personnel needs to execute the corresponding experimental tasks before the formal experiment until the performance of the experimental tasks is stable. In order to reduce the influence of the individual capability difference of the testers on the brain load evaluation result, a titration process can be adopted to determine the task difficulty parameter setting range in the process of executing the experiment task by the individuals.
In one example, the difficulty of the target task of the 8-shaped orbit flight is standardized using the n-back task as the experimental task. Specifically, the target task is standardized according to physiological information data generated in the n-back experiment task process, and standardized parameter settings under different difficulty experiment conditions are determined.
The n-back task is a standardized working-memory and attention task with n incremental difficulty levels. The tester is asked to continuously monitor the stimuli (single letters) appearing on the screen and to press a button when the target stimulus appears. The setting of n is used to gradually change the workload. Under the 0-back condition, the tester responds with the dominant hand to a single target stimulus (e.g., the letter 'X') by pressing the button. Under the 1-back condition, a target is defined as any letter identical to the letter immediately before it (i.e., 1 trial back). Under the 2-back and 3-back conditions, a target is defined as any letter that is the same as the one presented 2 or 3 trials earlier, and so on.
Each tester completes at least one hour of training tasks (n-back task and 8-shaped orbit flight task) every day, and the training time is 5 days. And when the accuracy rate of the n-back task reaches 80%, performing the next difficult training. And when the task score of the flight task at the current difficulty reaches 80% of the total score, performing next difficulty training.
The titration process is used to calibrate the tester's task difficulty parameters. The tester performs the n-back task, and n is gradually increased until only 30% of the current task can be completed correctly; the task difficulty n at that point is recorded as N. The tester then performs the flight task; during the flight task, the task difficulty is changed by changing the wind in the surrounding environment until the tester's score on the flight task reaches only 30% of the full score, and the task parameter (the parameter characterizing the ambient wind) is recorded as lm. The task parameters and task score of each experiment are recorded, and the final task parameter is obtained by averaging.
Standardized parameter setting: acquiring a vector value of a second physiological characteristic vector of physiological information data in the N-back experimental process, establishing a linear model, fitting the vector value of the second physiological characteristic vector of the physiological information data generated in the flight task by using the model parameter, standardizing the parameter of the flight task, and determining the difficulty parameter of the unmanned aerial vehicle flight task equivalent to the standard N-back task difficulty (0-N).
Specifically, a first-order polynomial regression model may be respectively established for each tester, and a parameter of the normalized model is estimated by using a vector value of a second physiological feature vector obtained from physiological information data in an n-back experiment process, where the normalized model may be represented as:
Y3_i = β_0 + β_1·X3_i

where X3_i is the vector value of the second physiological feature vector, Y3_i is the standard output (0, 1, 2, …, N), and β_0 and β_1 are the parameters of the normalized model. Specifically, the parameters of the normalized model can be solved by minimizing the total squared error

Σ_i (Y3_i − β_0 − β_1·X3_i)²

To minimize the total error, β_0 and β_1 should satisfy the following conditions:

∂/∂β_0 Σ_i (Y3_i − β_0 − β_1·X3_i)² = 0,  ∂/∂β_1 Σ_i (Y3_i − β_0 − β_1·X3_i)² = 0

The trained model parameters are then used to fit the electrophysiological data of the flight training task, yielding the standard output corresponding to each task parameter lm. The task parameters lm whose standard outputs are integers (0, 1, 2, …) are selected as the difficulty-level parameters of the flight task.
In order to improve the accuracy of the brain load recognition model while retaining some interpretability, a deep belief network is used to extract depth features from the raw sensor data in addition to the multi-modal information extracted by CCA; the two sets of features are jointly used as the features of the brain load recognition model, and a Gaussian kernel support vector machine (FGSVM) is used as the classifier for recognizing the brain load condition.
The deep belief network is formed by stacking a Restricted Boltzmann Machine (RBM) and a Sigmoid belief network.
The DBN contains 3 stacked RBMs, with 3 hidden layers h_1, h_2, h_3 and an input vector X4 = h_0. RBM1 is trained with the contrastive divergence algorithm. For the second-layer network, the weight w_1 is frozen and RBM2 is trained. For the third-layer network, the weights w_1 and w_2 are frozen and the third-layer network RBM3 is trained. The mathematical model of the DBN is as follows:
P(X4, h_1, h_2, …, h_n) = P(X4 | h_1)·P(h_1 | h_2) … P(h_(n-2) | h_(n-1))·P(h_(n-1) | h_n)

where P(h_(n-1) | h_n) can be determined by the RBM through the following two formulas:

P(h_j = 1 | X4) = sigmoid(b_j + Σ_i W_ij·X4_i)

P(X4_i = 1 | h) = sigmoid(a_i + Σ_j W_ij·h_j)
and training the RBM of the DBN by adopting a greedy training method. The RBM can construct features and reconstruct input. Therefore, we train the RBM using the contrastive divergence algorithm. The contrast divergence method based on Gibbs sampling is as follows:
1) physiological information data is input into the RBM 1.
2) The activation probability of the hidden layer is determined using the following equation:

P(h_j = 1 | X4) = sigmoid(b_j + Σ_i W_ij·X4_i)

3) The activation probability of the input layer is determined using the following equation:

P(X4_i = 1 | h) = sigmoid(a_i + Σ_j W_ij·h_j)

4) The edge weights are updated using the following equation:

W_ij = W_ij + α·(P(h_j = 1 | X4) − P(X4_i = 1 | h))
where α is the learning rate. After the first-layer RBM is trained, the first-layer weights are frozen, and the second- and third-layer RBMs are trained with the same contrastive divergence algorithm; the output of the previous layer is used as the input of the next layer's RBM. After the RBMs of all layers are trained, the depth feature vector is extracted from the top layer.
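For illustration, one CD-1 update of a single RBM layer can be sketched as below; this uses the standard data-minus-reconstruction gradient, which differs slightly from the simplified weight update written above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_h, b_v, lr=0.1, rng=np.random.default_rng(0)):
    """One contrastive-divergence (CD-1) update of an RBM (sketch).

    v0 is a batch of visible vectors (the physiological data for RBM1, or
    the previous layer's output for RBM2/RBM3).
    """
    p_h0 = sigmoid(v0 @ W + b_h)                        # P(h = 1 | v0)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden units
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # P(v = 1 | h0), reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)

    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)   # weight update
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    return W, b_h, b_v
```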
In an embodiment of the present disclosure, training the gaussian kernel vector machine according to the second training sample to obtain the brain load recognition model may include:
determining a brain load score prediction expression of the second training sample by taking a second network parameter of the Gaussian kernel vector machine as a variable according to the vector value of the splicing feature vector of the second training sample;
constructing a second loss function according to the brain load score prediction expression of the second training sample and the score of the brain load evaluation index corresponding to the second training sample;
and determining a second network parameter according to the second loss function to obtain a brain load identification model.
Determining a second network parameter according to the second loss function, and obtaining a brain load recognition model comprises:
and determining a second network parameter according to a second loss function based on a Lagrange multiplier method to obtain a brain load identification model.
The support vector machine finds a separating hyperplane that optimally classifies data points into positive and negative classes. The separating hyperplane is given by:
W^T·X5 + w_0 = 0

where W is the coefficient vector, X5 is the vector value of the splicing feature vector (a data point), and w_0 is the offset.
The discriminant function g is defined as:

g(X5) = W^T·X5 + w_0

Finding the hyperplane that separates the data points with the maximum margin is then an optimization problem:

min (1/2)·||W||²   subject to   y5_t·(W^T·X5_t + w_0) ≥ 1, t = 1, …, n

where y5_t ∈ {+1, −1} is the class label of the t-th data point. The Lagrange multiplier method is used to solve the above problem, which leads to the dual form:

max_α  Σ_t α_t − (1/2)·Σ_t Σ_s α_t·α_s·y5_t·y5_s·<X5_t, X5_s>   subject to   α_t ≥ 0 and Σ_t α_t·y5_t = 0
where α_t are the Lagrange multipliers and <X5, X5_t> is a scalar product. The following Gaussian kernel function can be used in place of the scalar product:

K(X5, X5_t) = exp(−||X5 − X5_t||² / (2σ²))

where σ is the kernel width (its definition is given in the original by a formula shown only as an image) and P is the number of predictors.
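A minimal sketch of the Gaussian-kernel SVM classifier, assuming scikit-learn; the feature standardization step and gamma='scale' are assumptions standing in for the kernel-width formula shown only as an image.

```python
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_brainload_svm(X5, y5):
    """Gaussian-kernel SVM for brain load recognition (sketch).

    X5 is an (n, P) array of splicing feature vectors (CCA features
    concatenated with DBN depth features) and y5 the brain load labels.
    """
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
    model.fit(X5, y5)
    return model
```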
In step S620, a manipulation score of the manipulation player is obtained based on the score of the manipulation player for the brain load evaluation index and the score for the mental fatigue evaluation index obtained in step S610, together with the score for the emotion evaluation index obtained in step S430.
In one embodiment of the present disclosure, obtaining a manipulation score of the manipulation player according to the score of the manipulation player for the mental fatigue evaluation index, the score for the brain load evaluation index, and the score for the emotion evaluation index includes:
and inputting the scores of the control players for the mental fatigue evaluation indexes, the scores of the brain load evaluation indexes and the scores of the emotion evaluation indexes into a preset structural equation model to obtain the control scores of the control players.
In one embodiment of the present disclosure, at least one mental fatigue evaluation index, at least one brain load evaluation index, and at least one emotion evaluation index may be preset. The score for each evaluation index may be obtained according to the corresponding embodiment described above.
For example, the mental fatigue evaluation indexes may be a preset mental fatigue endogenous evaluation index, mental fatigue exogenous evaluation index and mental fatigue subjective evaluation index: a score f1 of the mental fatigue endogenous evaluation index is obtained from physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardiograph acquisition device; a score f2 of the mental fatigue exogenous evaluation index is obtained from physiological information data collected by the eye movement tracking device, the video acquisition device and the voice acquisition device; and a score f3 of the mental fatigue subjective evaluation index is obtained from the control player's subjective evaluation of the mental fatigue state in the control behavior data.

For example, the brain load evaluation indexes may be a preset brain load endogenous evaluation index, brain load exogenous evaluation index and brain load subjective evaluation index: a score m1 of the brain load endogenous evaluation index is obtained from physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardiograph acquisition device; a score m2 of the brain load exogenous evaluation index is obtained from physiological information data collected by the eye movement tracking device, the video acquisition device and the voice acquisition device; and a score m3 of the brain load subjective evaluation index is obtained from the control player's subjective evaluation of the brain load state in the control behavior data.

For another example, the emotion evaluation indexes may be a preset emotion endogenous evaluation index, emotion exogenous evaluation index and emotion subjective evaluation index: a score e1 of the emotion endogenous evaluation index is obtained from physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardiograph acquisition device; a score e2 of the emotion exogenous evaluation index is obtained from physiological information data collected by the eye movement tracking device, the video acquisition device and the voice acquisition device; and a score e3 of the emotion subjective evaluation index is obtained from the control player's subjective evaluation of the emotional state in the control behavior data.
The scores of the control players for each mental fatigue evaluation index, the scores of each brain load evaluation index and the scores of each emotion evaluation index are input into a preset structural equation model, so that the control scores of the control players can be obtained.
In one embodiment of the present disclosure, the method further includes a step of obtaining a structural equation model, including steps S570 to S580:
step S570, a fourth training sample is obtained.
And one fourth training sample corresponds to one tester, and one fourth training sample reflects the mapping relation between the scores of the corresponding tester for the mental fatigue evaluation indexes, the scores for the brain load evaluation indexes and the scores for the emotion evaluation indexes and the actual control score.
In this embodiment, for any one fourth training sample, the score of the corresponding tester for the mental fatigue evaluation index, the score for the brain load evaluation index and the score for the emotion evaluation index may be obtained according to the control behavior data and the physiological information data generated when the corresponding tester executes the corresponding target task; and the actual control score may be obtained according to the control result data generated when the corresponding tester executes the corresponding target task.
In one example, the actual control score Y6 may be determined according to the task duration t and the task score s in the control result data.
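The disclosure does not specify how the task duration t and the task score s are combined; purely as a hypothetical example, the actual control score Y6 could weight the task score against a normalized time bonus, as in the sketch below. The constants t_max, w_s and w_t are assumptions, not values from the disclosure.

def actual_control_score(t, s, t_max=600.0, w_s=0.7, w_t=0.3):
    # Hypothetical combination: reward a high task score s (0-100) and a short
    # task duration t (seconds); t_max caps the duration that still earns credit.
    time_score = max(0.0, 1.0 - t / t_max) * 100.0
    return w_s * s + w_t * time_score

print(actual_control_score(t=240.0, s=85.0))   # 0.7*85 + 0.3*60 = 77.5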
And step S580, performing machine learning training according to the fourth training sample to obtain a structural equation model.
The structural equation model construction may be as shown in fig. 5. As shown in fig. 5, the structural equation model contains 5 hidden variables: a mental fatigue score ζ1, a brain load score ζ2, an emotion score ζ3, a cognitive state X6, and a control score Y6. The analytical expressions of the measurement equation set and the structural equation set can be written according to the structural equation model as follows:
f1 = w11·ζ1 + ε11
f2 = w21·ζ1 + ε21
f3 = w31·ζ1 + ε31
m1 = w41·ζ2 + ε41
m2 = w51·ζ2 + ε51
m3 = w61·ζ2 + ε61
e1 = w71·ζ3 + ε71
e2 = w81·ζ3 + ε81
e3 = w91·ζ3 + ε91
t = β1·Y6 + δ1
s = β2·Y6 + δ2
ζ1 = w12·X6 + ε12
ζ2 = w22·X6 + ε22
ζ3 = w32·X6 + ε32
Y6 = α·X6 + ε
The structural equation model is trained on the fourth training samples, and the parameters of the structural equation model are estimated by the generalized least squares method. In this way the weight on each edge of the structural equation model can be determined, and the degree to which each cognitive state influences the control score is thereby quantified.
In step S620, the score of the control player for the mental fatigue evaluation index, the score for the brain load evaluation index and the score for the emotion evaluation index are input into the structural equation model, and the control score of the control player can be obtained.
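Full generalized-least-squares estimation of the structural equation model is not reproduced here. As a simplified, illustrative stand-in only, the sketch below approximates each latent score by the mean of its indicators and fits the structural path weights by ordinary least squares, then predicts a control score for a new set of indicator scores. All data and weights are synthetic assumptions, not values from the disclosure.

import numpy as np

rng = np.random.default_rng(1)
n = 300
F = rng.normal(size=(n, 3))   # synthetic mental fatigue indicator scores f1, f2, f3
M = rng.normal(size=(n, 3))   # synthetic brain load indicator scores m1, m2, m3
E = rng.normal(size=(n, 3))   # synthetic emotion indicator scores e1, e2, e3
Y6 = 0.5 * F.mean(1) - 0.8 * M.mean(1) + 0.3 * E.mean(1) + 0.1 * rng.normal(size=n)

# Latent proxies zeta1, zeta2, zeta3 approximated by the indicator means.
Z = np.column_stack([F.mean(1), M.mean(1), E.mean(1)])

# Fit Y6 = Z @ w + b by ordinary least squares; w plays the role of the path weights.
A = np.column_stack([Z, np.ones(n)])
w, *_ = np.linalg.lstsq(A, Y6, rcond=None)
print("estimated path weights:", w[:3], "intercept:", w[3])

# Predict a control score for a new control player's nine indicator scores.
new = np.array([0.6, 0.5, 0.7, 0.2, 0.1, 0.3, 0.4, 0.5, 0.6])
z_new = np.array([new[:3].mean(), new[3:6].mean(), new[6:].mean()])
print("predicted control score:", z_new @ w[:3] + w[3])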
The present invention may be a system, method and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
While embodiments of the present invention have been described above, the above description is illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (7)

1. A method for emotion-based control ergonomics analysis, comprising:
acquiring physiological information data generated by controlling a target object to execute a target task by a player;
inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player on an emotion evaluation index; the emotion recognition model reflects a mapping relation between the control behavior data and the physiological information data and scores of emotion evaluation indexes;
obtaining a control score of the control player according to the score of the control player on the emotion evaluation index;
executing a set operation according to the control score,
wherein the physiological information data comprises a facial video signal;
the step of inputting the physiological information data into a preset emotion recognition model and obtaining the score of the control player on the emotion evaluation index comprises the following steps:
acquiring a current video sampling interval;
sampling the face video signal based on the current video sampling interval to obtain a current frame video image;
determining the expression similarity between the current frame video image and the corresponding previous frame video image; the previous frame of video image is a frame of video image obtained by sampling the face video signal at the previous time;
determining the emotion recognition result of the current frame video image according to the expression similarity;
obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the video image obtained by sampling the face video signal,
wherein the method further comprises:
determining a next video sampling interval for sampling the face video signal next time according to the expression similarity when the expression similarity is less than or equal to a similarity threshold, wherein the smaller the expression similarity of two adjacent frames of images is, the more similar the two adjacent frames of images are,
wherein the current video sampling interval Num_skip is determined by the following formula:
Figure FDA0003731093560000011
wherein sim_ff represents the expression similarity of the first two frames of video images, Λ is the upper limit of a preset sampling interval, λ is the lower limit of the preset sampling interval, and θ_ff is the similarity threshold,
the method further comprises the step of determining the similarity threshold, comprising:
acquiring a reference face video signal of the control player;
determining the expression similarity of every two adjacent video images in the reference face video signal;
determining the similarity threshold according to the expression similarity of every two adjacent video images;
randomly generating a next video sampling interval for sampling the face video signal next time under the condition that the expression similarity is greater than the similarity threshold; and the next video sampling interval is less than or equal to a preset maximum sampling interval and greater than or equal to a minimum sampling interval.
2. The method of claim 1, wherein the physiological information data comprises brain electrical signals;
the step of inputting the physiological information data into a preset emotion recognition model and obtaining the score of the control player on the emotion evaluation index comprises the following steps:
performing wavelet packet transformation processing on the electroencephalogram signals to obtain electroencephalogram time-frequency characteristics;
acquiring a vector value of a brain electricity emotion feature vector from the brain electricity time-frequency feature based on a preset first depth convolution neural network;
and based on a preset first classifier, obtaining the score of the control player on the emotion evaluation index according to the vector value of the electroencephalogram emotion feature vector.
3. The method of claim 1, wherein said determining an expression similarity between the current frame video image and a corresponding previous frame video image comprises:
acquiring a vector value of an expression feature vector of the current frame video image;
and based on a preset convolution network, determining the expression similarity between the current frame video image and the previous frame video image according to the vector value of the expression feature vector of the current frame video image and the vector value of the pre-stored expression feature vector of the previous frame video image.
4. The method of claim 1, wherein the determining the emotion recognition result of the current frame video image according to the expression similarity comprises:
taking the emotion recognition result of the previous frame of video image as the emotion recognition result of the current frame of video image under the condition that the expression similarity is smaller than or equal to a similarity threshold;
under the condition that the expression similarity is larger than the similarity threshold, acquiring a vector value of a face emotion feature vector of the current frame video image based on a preset second depth convolution neural network; and based on a preset second classifier, obtaining an emotion recognition result of the current frame video image according to the vector value of the face emotion feature vector of the current frame video image.
5. The method of claim 1, wherein obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the video image obtained by sampling the face video signal comprises:
determining an emotion recognition result of the face video signal according to an emotion recognition result of a frame video image obtained by sampling the face video signal based on a voting method;
and obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the face video signal.
6. An emotion-based control ergonomics analysis device comprising at least one computing device and at least one storage device, wherein,
the at least one storage device is to store instructions to control the at least one computing device to perform the method of any of claims 1 to 5.
7. A mood-based control ergonomics analysis system, wherein the system comprises a task execution device, physiological information collection devices, and the control ergonomics analysis device of claim 6, wherein the task execution device and the physiological information collection devices are in communication connection with the control ergonomics analysis device.
CN202011023966.9A 2020-09-25 2020-09-25 Emotion-based control work efficiency analysis method, equipment and system Active CN112256124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011023966.9A CN112256124B (en) 2020-09-25 2020-09-25 Emotion-based control work efficiency analysis method, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011023966.9A CN112256124B (en) 2020-09-25 2020-09-25 Emotion-based control work efficiency analysis method, equipment and system

Publications (2)

Publication Number Publication Date
CN112256124A CN112256124A (en) 2021-01-22
CN112256124B true CN112256124B (en) 2022-08-19

Family

ID=74234980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011023966.9A Active CN112256124B (en) 2020-09-25 2020-09-25 Emotion-based control work efficiency analysis method, equipment and system

Country Status (1)

Country Link
CN (1) CN112256124B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158925A (en) * 2021-04-27 2021-07-23 中国民用航空飞行学院 Method and system for predicting reading work efficiency of composite material maintenance manual
CN113771859B (en) * 2021-08-31 2024-01-26 智新控制***有限公司 Intelligent driving intervention method, device, equipment and computer readable storage medium
CN113657555B (en) * 2021-09-03 2024-05-07 燕山大学 Improved semi-supervised clustering-based ice and snow environment driving experience evaluation method
CN116935480B (en) * 2023-09-18 2023-12-29 四川天地宏华导航设备有限公司 Emotion recognition method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019040669A1 (en) * 2017-08-22 2019-02-28 Silicon Algebra, Inc. Method for detecting facial expressions and emotions of users
CN109784277B (en) * 2019-01-17 2023-04-28 南京大学 Emotion recognition method based on intelligent glasses
CN111062250B (en) * 2019-11-12 2023-05-23 西安理工大学 Multi-subject motor imagery electroencephalogram signal identification method based on deep feature learning
CN111598451B (en) * 2020-05-15 2021-10-08 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on task execution capacity
CN111544015B (en) * 2020-05-15 2021-06-25 北京师范大学 Cognitive power-based control work efficiency analysis method, device and system
CN111553618B (en) * 2020-05-15 2021-06-25 北京师范大学 Operation and control work efficiency analysis method, device and system
CN111598453B (en) * 2020-05-15 2021-08-24 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on execution force in virtual scene
CN111553617B (en) * 2020-05-15 2021-12-21 北京师范大学 Control work efficiency analysis method, device and system based on cognitive power in virtual scene

Also Published As

Publication number Publication date
CN112256124A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112256123B (en) Brain load-based control work efficiency analysis method, equipment and system
CN112256124B (en) Emotion-based control work efficiency analysis method, equipment and system
CN112256122B (en) Control work efficiency analysis method, device and system based on mental fatigue
Zhang et al. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review
Vinola et al. A survey on human emotion recognition approaches, databases and applications
Abiyev et al. Brain‐Computer Interface for Control of Wheelchair Using Fuzzy Neural Networks
Coyle et al. A time-series prediction approach for feature extraction in a brain-computer interface
KR102281590B1 (en) System nad method of unsupervised training with weight sharing for the improvement in speech recognition and recording medium for performing the method
Al Osman et al. Multimodal affect recognition: Current approaches and challenges
KR102476675B1 (en) Method and server for smart home control based on interactive brain-computer interface
CN111553618B (en) Operation and control work efficiency analysis method, device and system
CN108703824B (en) Bionic hand control system and control method based on myoelectricity bracelet
CN112200025B (en) Operation and control work efficiency analysis method, device and system
CN111553617B (en) Control work efficiency analysis method, device and system based on cognitive power in virtual scene
CN111598453B (en) Control work efficiency analysis method, device and system based on execution force in virtual scene
KR102206181B1 (en) Terminla and operating method thereof
CN108175426B (en) Lie detection method based on deep recursion type conditional restricted Boltzmann machine
Bhamare et al. Deep neural networks for lie detection with attention on bio-signals
Leite et al. Adaptive gaussian fuzzy classifier for real-time emotion recognition in computer games
Ogino et al. Semi-supervised learning for auditory event-related potential-based brain–computer interface
Xu et al. Accelerating reinforcement learning agent with eeg-based implicit human feedback
Tayarani et al. What an “ehm” leaks about you: mapping fillers into personality traits with quantum evolutionary feature selection algorithms
US12033042B2 (en) Apparatus for bias eliminated performance determination
Rodriguez-Bermudez et al. Testing Brain—Computer Interfaces with Airplane Pilots under New Motor Imagery Tasks
Imah et al. A Comparative Analysis of Machine Learning Methods for Joint Attention Classification in Autism Spectrum Disorder Using Electroencephalography Brain Computer Interface.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant