WO2018215575A1 - System or device allowing emotion recognition with actuator response induction useful in training and psychotherapy - Google Patents


Info

Publication number
WO2018215575A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
emotional
detection unit
anyone
stress
Prior art date
Application number
PCT/EP2018/063593
Other languages
French (fr)
Inventor
Bernard Martin MAARSINGH
Original Assignee
Jamzone B.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jamzone B.V.
Publication of WO2018215575A1


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety

Definitions

  • the invention relates to emotion recognition devices, systems and methods useful in the field of therapy and health care, the fields of psychological therapy and aspects of psychiatric therapy, herein commonly defined as psychotherapy.
  • In psychotherapy research, therapeutic empathy requires that a psychotherapist can recognize both the quality and intensity of a patient's emotional experience. Psychotherapists must be able to identify objectively the emotions expressed in psychotherapy samples in order to determine accurately the role that emotional expression plays in psychotherapeutic improvement and change in many of the psychological conditions that face their clients or patients.
  • ASD: autism-spectrum disorders
  • ADHD: attention-deficit hyperactivity disorder
  • Parkinson's disease, depression and dementia
  • Apathy has been consistently reported to be associated with executive dysfunction.
  • Borderline personality disorder is a serious mental illness marked by unstable moods, behavior, and relationships. Because some people with severe BPD have brief psychotic episodes, experts originally thought of this illness as atypical, or borderline, versions of other mental disorders. While mental health experts now generally agree that the name "borderline personality disorder" is misleading, a more accurate term does not exist yet. However, most people who have BPD suffer from problems with expressing and regulating emotions and thoughts that often translate into impulsive and reckless behavior and unstable relationships with other people. People with this disorder also have high rates of co-occurring disorders such as depression, anxiety disorders, substance abuse, and eating disorders, along with self-harm, suicidal behaviors, and completed suicides. No medications have been approved by the U.S. Food and Drug Administration specifically to treat BPD.
  • the overriding aim is that emotion detection occurs before an emotion has become overwhelming, while some sort of emotional steering or regulation is still possible; often that is a daunting task for a psychotherapist.
  • a method that is directly relevant to affective computing as applied to autism is the Mindreading DVD. This comprises educational software that was designed to be an interactive, systematic guide to emotions (Baron-Cohen S., Golan O., Wheelwright S., Hill J. J. Mind Reading: the interactive guide to emotions. London, UK: Jessica Kingsley Limited; 2004). It was developed to help people with ASD learn to recognize both basic and complex emotions and mental states from video clips of facial expressions and audio recordings of vocal expressions.
  • Mental states include thoughts and emotions, thoughts being traditionally fractionated into beliefs, desires, intentions, goals and perceptions. Emotions are traditionally fractionated into seven 'basic' emotions (joy, sadness, anger, fear, contempt, disgust and surprise) and numerous 'complex' emotions. Complex emotions involve attributing a cognitive state as well as an emotion and are more context and culture dependent. The basic emotions are held to be so because they may be universally recognized and expressed in the same way. This distinction, however, is not without its critics; since it may be that more emotions are universally recognized and expressed than these seven but have been overlooked, as research into complex emotions (usually towards developing taxonomies) has been mostly language and culture specific.
  • Emotional intelligence, the "accurate appraisal and expression of emotions in oneself and others and the regulation of emotion in a way that enhances living", encompasses a set of interrelated skills and processes. Because the face is the primary canvas used to express distinct emotions non-verbally (Ekman, Perspectives on Psychological Science 2016, Vol. 11(1) 31-34), the ability to read facial expressions is particularly vital, and thus a crucial component of emotional intelligence. Facial expressions are privileged relative to other nonverbal "channels" of communication, such as vocal inflections and body movements. Facial expressions appear to be the most subject to conscious control. Individuals focus more attention on projecting their own facial expressions and perceiving others' facial expressions than they do on other non-verbal channels, and often more than they focus on verbal communication.
  • Training psychotherapists to recognize and respond to patient emotions has focused mainly on the accuracy of emotional recognition and of empathic responding, which may be increased by teaching therapists and counsellors to attend to non-verbalized information. Although such specific and focused training has proven to increase the accuracy with which therapists can respond to patients' emotional states, its relevance to conventional training of psychotherapists is uncertain. Moreover, much of the research on this topic has been confined to analogue patients and therapy sessions, calling into question the justification of generalizations to clinical material. Even when research on emotional recognition does include professionals who are conventionally trained and experienced it fails to compare their accuracy to individuals who are inexperienced and untrained. But even if the effects of training were adequately addressed, the question of generalization would not be solved.
  • cues used to convey emotional states in such training are provided by actors who present pre-set verbal and non-verbal messages and are not real-client based. Paradoxically, this methodology has a built-in bias against recognizing authentic emotional expressions; a fatal one if it is indeed true that deception is conveyed by subtle non-verbal cues. Such actor-based practices may yield results that do not represent the authentic display of conflicted emotions in naturalistic settings and psychotherapy practice.
  • Several studies (e.g., Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979) suggest that clinicians are more sensitive to non-verbal communication cues than teachers and business executives but, surprisingly, are somewhat less accurate than graduate students and actors. Indeed, a comparison of M.A.
  • Automated emotion recognition is the process of identifying human emotion, most typically from facial expressions, by computer-assisted means. Many computational methodologies have been developed for this purpose (Neural Networks 18 (2005) 389-405). Assembling an automatic emotion recognition system or device based on knowledge of emotions, such as stemming from the modern neurosciences, is now entirely feasible.
  • SE smart environments
  • the invention provides methods and means, computer-based hardware and software system or device, in particular for use in a health environment, so called smart health environments.
  • a health environment or health facility
  • the term usually includes hospitals, clinics, outpatient care centres, and specialized care centres, such as birthing and psychological or psychiatric care centres.
  • the proper home of a person suffering from some kind of disease should also be considered a health environment.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response controlling an actuator unit, herein also called the actuator response.
  • the system sends the selected commands to the actuators that control the actuator response.
  • a change in a perception of motion is then performed by an actuator for driving the subject to that perception.
  • Actuators are a type of tool used to put something into automatic action. Actuators are driven by a wide variety of sources, from humans putting something into action to computers starting up a
  • the actuator unit provides a movement or change in movement capable of being detected (perceived or notified) in reality, augmented reality or virtual reality by said subject.
  • the invention provides emotion recognition devices, systems and methods useful in the field of therapy and health care, the fields of psychological therapy and aspects of psychiatric therapy, herein commonly defined as psychotherapy.
  • the invention provides a novel tool that helps therapists accurately and rapidly identify another person's emotional state when diagnosing and treating mental disorders, to better study, understand, respond to and empathize with a patient or client, and that allows clients and patients (herein generally called the subject) to be trained in, understand and reflect on their behavior and communicative skills, for example to improve on these skills.
  • Lie detection is an obvious example of such situations.
  • Another example is clinical studies and therapy of schizophrenia and particularly the diagnosis of flattened affect that so far relies on the psychiatrists' subjective judgment of subjects' emotionality based on various physiological clues.
  • An automatic emotion-sensitive-special-effect-response-actuator system or device as provided herein helps augment these judgments, so minimizing the dependence of the diagnostic procedure on individual psychiatrists' perception of emotionality. More generally along those lines, automatic emotion detection, classification and responding with an effect can be used in a wide range of psychological and neurophysiological studies of human emotional expression that so far rely on subjects' self-report of their emotional state, which often proves problematic.
  • subjects diagnosed with ASD, ADHD, Parkinson's disease, dementias, borderline personality disorders, bipolar disorders and the like may benefit from the invention; but also couples that are engaged in relationship therapy, subjects that need to handle or be trained in handling difficult or conflicted discussions and, more in general, subjects that would benefit from training their socio-communicative skills, may all benefit from the invention.
  • the invention provides a computer-based hardware and software system or device (see also figure 2) comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response controlling an actuator unit.
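  By way of a non-limiting illustration, the detection unit / integration unit / effecting unit pipeline described above can be sketched in software as follows. The class names, the stress-score heuristic and the command strings are hypothetical, chosen only to show how the three units pass data to one another; they are not prescribed by the invention:

```python
from dataclasses import dataclass

@dataclass
class EmotionalCue:
    """A single detected cue variable, e.g. heart rate or skin conductance."""
    name: str
    value: float  # normalised to the 0..1 range for this sketch

class DetectionUnit:
    """Collects raw sensor samples and exposes them as emotional cue variables."""
    def detect(self, samples: dict) -> list:
        return [EmotionalCue(name, value) for name, value in samples.items()]

class IntegrationUnit:
    """Processes cue variables into output data (here: one combined score)."""
    def integrate(self, cues: list) -> float:
        # Simplistic placeholder: mean of the normalised cue values.
        return sum(c.value for c in cues) / len(cues) if cues else 0.0

class EffectingUnit:
    """Turns the integration output into a command for the actuator unit."""
    def effect(self, score: float) -> str:
        return "move_away" if score > 0.5 else "hold_position"

# Wiring the three units together:
detection, integration, effecting = DetectionUnit(), IntegrationUnit(), EffectingUnit()
cues = detection.detect({"heart_rate": 0.8, "skin_conductance": 0.6})
command = effecting.effect(integration.integrate(cues))
print(command)  # the high combined score (0.7) yields "move_away"
```

  In an actual embodiment the placeholder averaging would be replaced by trained emotion-classification models, and the command would drive a physical or virtual actuator rather than being printed.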
  • the actuator unit provides a movement or change in movement capable of being detected by said subject.
  • the actuator responds by converting energy into mechanical motion or movement or change in movement capable of being detected by said subject.
  • the actuator responds by generating a field of view projected in a virtual or augmented reality device projecting a movement or change in movement capable of being detected by said subject.
  • detection by said subject is greatly facilitated by several of the special effects that are attributed to emotional cue detection with a system or device of the invention.
  • These special effects may be generated on the basis of distinct algorithms in the software that reflect known psychotherapy strategies, such as those provided by Gottman or another psychotherapist known in the field.
  • the system or device may be equipped with self-learning algorithms whereby apparently successful responses are integrated in the software-memory.
  • By automatically notifying the subject (client or patient) with one or more special, moving effects based on or related to the occurrence or manifestation of an emotional cue of said subject detected by the system or device provided here, the subject will learn that such cues occur and may put the occurrence of emotional cues in a rather harmless perspective of an artificial detect-effect relationship.
  • the therapist can now rely on an automated system or device that helps him or her in recognizing emotional cues and therewith can stay focused on other aspects of the therapeutic or training process.
  • the system or device according to the invention is provided with an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response that controls an actuator unit such as a movable subject support structure (carrier) propelling or effecting a change in motion of said subject supported or carried by said structure.
  • said support structure comprises a movable base adapted to ride over a substructure, for example a support surface, rails or track.
  • the invention provides a system or device provided with an electric actuator for moving the support structure, such as an electric cylinder, an electric motor or a stepper motor, or any other electric drive suitable for moving the support structure.
  • the system or device according to the invention is provided with an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response that controls an actuator unit such as an augmented or virtual reality device capable of propelling or effecting a change in detection of motion of said subject.
  • the system or device here provided generates emotion-to-(perceived)motion effects that provide profound learning experiences to the subject(s) participating with the system or device.
  • said effecting unit provides (under guidance of the emotional cue or cues detected) a system or device response by controlling an actuator that moves (or stops moving) the furniture at which or wherein said subject is seated.
  • said movement may be directed at moving the subject(s), preferably by moving the furniture wherein or whereon the subject is seated, away from one or more other subjects, preferably that are also participating with the system or device.
  • said movement may be directed at moving the subject(s) towards one or more other subjects participating with the system or device.
  • an actuator may be used with which the intensity or speed of the movement is changed, with which the movement is directed upward or downward, or with which the furniture provides a vibrating or shaking sensation of which the frequency is changed by an actuator response under guidance of the emotional cue or cues detected by a system or device according to the invention.
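  As a sketch of one possible modulation scheme, the mapping from a detected arousal level to a vibration frequency and a movement direction could look as follows. The frequency range, the direction threshold and the function name are illustrative assumptions and not part of the invention as such:

```python
def actuator_command(arousal: float,
                     min_hz: float = 0.5, max_hz: float = 8.0) -> dict:
    """Map a normalised arousal level (0..1) to a vibration frequency
    and a movement direction, as one possible modulation scheme."""
    arousal = max(0.0, min(1.0, arousal))            # clamp to the valid range
    frequency = min_hz + arousal * (max_hz - min_hz)  # linear interpolation
    direction = "upward" if arousal >= 0.5 else "downward"
    return {"frequency_hz": round(frequency, 2), "direction": direction}

print(actuator_command(0.0))  # {'frequency_hz': 0.5, 'direction': 'downward'}
print(actuator_command(1.0))  # {'frequency_hz': 8.0, 'direction': 'upward'}
```

  A real effecting unit could equally map valence to direction and arousal to intensity; the linear interpolation above is merely the simplest choice.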
  • An actuator is the mechanism by which a control system or device acts upon an environment.
  • the control system or device can be simple (a fixed mechanical or electronic system or device), software-based (e.g. a printer driver, robot control system or device), a human, or any other input.
  • the support structure preferably comprises a movable base adapted to ride over a substructure, for example a support surface, rails or track.
  • the invention provides a system or device provided with an electric actuator for moving the support structure, such as an electric cylinder, an electric motor or a stepper motor, or any other electric drive suitable for moving the support structure or subject carrier.
  • the moving system or device preferably comprises an electric actuator such as electric motor for propelling or moving the subject carrier, and a power supply to power the electric actuator.
  • the power supply may also comprise an electrical storage element to store electrical energy.
  • the effecting unit is arranged to control the power supply, optionally such as to operate the power supply to charge the electrical storage element from a power source; and primarily to operate the power supply to power the actuator from the electrical energy stored in the electrical storage element, to thereby propel the subject carrier.
  • a linear motor may be provided to accelerate the subject carrier to a certain speed, the carrier thereby e.g. being unable to travel a remainder of the trajectory of the device on its own.
  • the actuator may for example be comprised in the carrier and be provided with electrical power via sliding contacts.
  • the invention also provides a computer-based hardware and software system or device according to the invention having a subject carrier and a propelling system or device for propelling the subject carrier, the propelling system or device comprising an electric actuator to propel the subject carrier and a power supply to power the electric actuator, the power supply comprising an electrical storage element to store electrical energy, and a control unit which is arranged to control operation of the power supply, the control unit being arranged to operate the power supply to charge the electrical storage element from a power source, and to operate the power supply to power the electric actuator from the electrical energy stored in the electrical storage element, to thereby propel the subject carrier.
  • the subject carrier may optionally be provided with one or more seating parts, the term seating part used herein is understood to mean that part of the carrier which can accommodate one person or several persons in a sitting, standing or recumbent position.
  • the invention provides a computer-based hardware and software system or device, the system or device optionally comprising a database that may be cloud-based.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional valence variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional valence of the cue detected.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional arousal variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional arousal of the cue detected.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional valence variables and emotional arousal variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional valence and arousal of the cue detected.
  • a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional valence variables and emotional arousal variables of said cue
  • an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data
  • an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends
  • At least four different outputs may be generated that respectively relate to high valence with high arousal, low valence with low arousal, high valence with low arousal, and low valence with high arousal (see also figure 1).
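  The four outputs above correspond to the quadrants of a valence/arousal plane. A minimal sketch of such a quadrant classifier, assuming (as a simplification) cue variables normalised to the 0..1 range with a hypothetical midpoint threshold, could be:

```python
def classify_quadrant(valence: float, arousal: float,
                      threshold: float = 0.5) -> str:
    """Map normalised valence and arousal (0..1) onto one of the four
    output classes named in the text."""
    v = "high" if valence >= threshold else "low"
    a = "high" if arousal >= threshold else "low"
    return f"{v} valence / {a} arousal"

# Each quadrant yields a distinct output, which the effecting unit can
# translate into a distinct actuator response:
assert classify_quadrant(0.9, 0.8) == "high valence / high arousal"
assert classify_quadrant(0.1, 0.2) == "low valence / low arousal"
assert classify_quadrant(0.9, 0.2) == "high valence / low arousal"
assert classify_quadrant(0.1, 0.8) == "low valence / high arousal"
```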
  • the invention provides a set of computerized devices (a system or device) helping the therapist assess emotional states of humans and improving his or her ability to modulate emotional states of the subject in therapy.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in real-time.
  • the invention provides real-time or near real time devices, system or devices and methods to detect an emotion and provide a response to modulate behavior.
  • RTC real-time computing
  • reactive computing describes hardware and software systems or devices subject to a "real-time constraint" or "near-real-time constraint", for example from detection of an event to system or device response.
  • Near-real-time and real-time programs must both guarantee response within specified time constraints, often referred to as “deadlines”. The distinction between “near real-time” and “real-time” varies, and the delay is dependent on the type and speed of the transmission.
  • the delay in near real-time as provided by the invention herein is typically of the order of several seconds to several minutes.
  • Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds; near-real-time responses often incorporate a deliberate lag phase, preferably of up to 20 seconds, more preferably up to 10 seconds, more preferably up to 5 seconds, most preferably up to 3 seconds.
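  The deliberate lag phase can be implemented by withholding the actuator response until the chosen delay after the detected event has elapsed. The following sketch (hypothetical function name, with an injectable clock so the behavior can be demonstrated deterministically) illustrates one way of doing so:

```python
import time

def respond_with_lag(event_time, respond, lag_s=3.0,
                     now=time.monotonic, sleep=time.sleep):
    """Fire respond() no earlier than lag_s seconds after the detected
    event, implementing the deliberate near-real-time lag phase."""
    remaining = event_time + lag_s - now()
    if remaining > 0:
        sleep(remaining)  # wait out the rest of the lag window
    respond()             # event is older than the lag: respond at once

# Deterministic demonstration with a fake clock: the event happened at
# t=100.0, it is now t=101.0, so 2.0 s of the 3.0 s lag remain.
calls = []
respond_with_lag(event_time=100.0,
                 respond=lambda: calls.append("actuate"),
                 lag_s=3.0,
                 now=lambda: 101.0,
                 sleep=lambda s: calls.append(s))
print(calls)  # [2.0, 'actuate']
```

  In production the default clock (`time.monotonic`) and `time.sleep` would be used, with `lag_s` chosen in the preferred 3 to 20 second window described above.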
  • This lag phase is selected in line with research from humans and animals showing that somewhere between 2-5 seconds is the longest window of what we can perceive as an independent event - a moment. Anything longer, or separated by a longer window, is perceived by us as a separate occurrence - close in time, but distinct.
  • our autonomic nervous system prepares us about 3 seconds ahead of time, which makes sense because that is about how long our vagal nervous system takes to alter our heart rate and breathing.
  • a near-real-time system or device as herein described is one, which "controls an
  • near-real-time herein is also used to mean “without significant delay”.
  • near real-time or “nearly real-time” refers to the time delay introduced, by automated data processing or network transmission, between the occurrence of an event and the use of the processed data, such as for display or feedback and control purposes. For example, a near-real-time display depicts an event or situation, as it existed at the current time minus the processing time, as nearly the time of the live event. Both terms “near real time” and “real time” imply that there are no significant delays. In many cases, processing described as “real-time” would be more accurately described as "near real-time”.
  • Near real-time also refers to delayed real-time transmission of voice and video. It allows playing or projecting video images, in approximately real-time, without having to wait for an entire large video file to download.
  • Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis so that they can sync/share common data in "near real-time" with each other.
  • the devices, systems or devices and methods of the invention as provided herein are useful during psychotherapy sessions, relationship therapy sessions and during socio-communicative interactions and discussions wherein an emotional cue given out by a subject is automatically responded to with a special effect detectable by said subject, preferably executed in near-real-time, more preferably in real time.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electro-cardiography, electro-cardiography variability, photoplethysmography, photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability,
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement,
  • said emotional cue or cues are preferably detected by a detection unit comprising an electrode, in another embodiment of the invention, said detection unit comprises an optical sensor, in yet another embodiment, said detection unit comprises a camera, in yet another said detection unit comprises a microphone. In a further preferred embodiment of the invention, the detection unit comprises an electrode and a camera and/or an optical sensor and/or a microphone. In a particularly preferred embodiment, the detection unit comprises an electrode and a camera and a microphone.
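  Where the detection unit comprises several sensors, such as an electrode, a camera and a microphone, their cue estimates must be combined. A simple weighted-fusion sketch, with purely illustrative weights and assuming each sensor yields a normalised 0..1 arousal estimate, could be:

```python
# Illustrative weights only; a real system would calibrate or learn them.
SENSOR_WEIGHTS = {"electrode": 0.5, "camera": 0.3, "microphone": 0.2}

def fuse_cues(readings: dict) -> float:
    """Weighted average of per-sensor arousal estimates (0..1),
    renormalised over the sensors actually present in this sample."""
    present = {s: w for s, w in SENSOR_WEIGHTS.items() if s in readings}
    total = sum(present.values())
    return sum(readings[s] * w for s, w in present.items()) / total

# 0.8*0.5 + 0.6*0.3 + 0.4*0.2 = 0.66
print(round(fuse_cues({"electrode": 0.8, "camera": 0.6, "microphone": 0.4}), 2))
```

  Renormalising over the sensors that are present lets the fusion degrade gracefully when, for example, only the electrode delivers a usable signal.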
  • the invention provides a computer-based hardware and software system or device with a detection unit, an integration unit and an effecting unit wherein said integration unit is provided with software capable of providing a measure of the emotional experience of said subject. It is preferred that said emotional experience is classifiable as any one selected from the group of joy, anger, surprise, fear, sadness, disgust or contempt.
  • the invention provides a system or device wherein said integration unit is provided with software capable of providing a measure of the facial expression of said subject.
  • said facial expression is classifiable as any one selected from the group of attention, brow furrow, brow raise, inner brow raise, eye closure, nose crinkle, upper lip raise, lip suck, lip pucker, lip press, mouth open, lip corner depressor, chin raise, smirk or smile.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said subject is also provided with a system or device for augmented or virtual reality detection.
  • Alternative embodiment wherein stress detection by a virtual reality system or device worn by a subject moves or adapts the augmented or virtual reality perceived by said subject.
  • Virtual reality (VR) system or devices can present fields of view to a user via a display to provide the user with the perception of being in an environment other than reality.
  • a field of view presents to the user scenes from a virtual reality world.
  • Virtual reality system or devices can use an opaque background for the displayed fields of view or use a transparent background so that the field of view is overlaid on the user's view of the real world.
  • Virtual reality system or devices can also acquire a video stream of the real world and superimpose objects and people on the video stream representing the real world. These latter two schemes can be called augmented reality.
  • Examples of augmented virtual reality systems or devices providing perception of motion include car racing simulators, flight simulators, video games and video conferencing system or devices.
  • Virtual reality system or devices can permit a user to simulate driving vehicles, flying airplanes, exploring alien worlds or being at a simulated meeting with participants from different parts of the world without any of the participants leaving home, for example.
  • the fields of view that comprise the virtual reality world can be arranged to provide the user with the perception of being in a virtual world.
  • the fields of view can change according to the simulated physical dynamics of the world being simulated. For example, in a driving or flying system or device, the fields of view will change according to the simulated motion of the vehicle or airplane. Fields of view can also be changed by the user interacting with a controller, for example. Many video games are controlled by a handheld controller that includes buttons and switches that can change the point of view of the user in the virtual world and hence the fields of view displayed.
  • the display of some virtual reality system or devices include a virtual reality headset, for example.
  • Accelerometers can be used in a virtual reality headset to detect the location and attitude of the headset and thereby control the field of view to track the user's head motions and arrange the field of view accordingly.
  • Virtual reality system or devices can include other types of displays such as a stationary screen in front of the user not worn on a headset, multiple stationary screens surrounding the user, screens placed on lenses worn on the user's eyes, or hologram images projected around the user. None of these ways of controlling field-of-view selection can display fields of view that reflect the stress of the user. In real life, a person affected by stress is often more alert to cues in the immediate real-world environment that can indicate whether the stress was justified or not.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject
  • said subject is also provided with a system or device for providing fields of view allowing augmented or virtual reality detection by a subject (a virtual reality system or device) comprising a detection unit capable of detecting at least one stress variable of a subject (a stress sensor) and an integration unit capable of collecting input data from said detection unit and processing said input relating to said stress variable into output data, with an effecting unit capable of collecting and processing output data from said integration unit and providing output data effecting a change in field of view (a controller).
  • said subject is also provided with a virtual reality computing device or system or device wherein
  • the invention provides a virtual reality computing device or system or device that tracks emotion or stress parameters such as heart rate variation or respiratory rate variations.
  • stress detection is in one embodiment provided by a stress sensor that is located in a breast band or glove or other fixative element relative to a portion of a user's skin, in another embodiment such a stress sensor may be incorporated into the VR headset relative to a portion of a user's skin, or both.
  • the system or device includes a controller configured to identify differences between various stress levels, such as heart rate variations or respiratory rate variations and to determine output related to changes in fields of view configured to reflect changes in stress level of the user and back feed these to this user, or to (an)other user of the game. In this way, the augmented or virtual reality may be adjusted to the stress level(s) of its user(s).
  • various stress levels such as heart rate variations or respiratory rate variations
  • output related to changes in fields of view configured to reflect changes in stress level of the user and back feed these to this user, or to (an)other user of the game.
  • the invention provides a virtual reality computing device or system or device comprising a heart rate variation (HRV) sensor with a breast band coupled to a virtual reality console, configured to capture a plurality of stress levels, and a controller configured to identify differences between some of the stress levels, the differences corresponding to differences in overall stress state of a user of the device or system or device, and to determine a field-of-view response based in part on the identified differences in stress level.
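The HRV-based control loop described above can be sketched as follows. This is a minimal illustration, not part of the claimed system: the RMSSD metric is one common way of quantifying heart rate variation, and the threshold values, function names and scene labels are all assumptions.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences, a common HRV metric."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_level(rr_intervals_ms, low_hrv_ms=20.0, high_hrv_ms=60.0):
    """Map RMSSD to a 0..1 stress estimate: low HRV -> high stress.
    The two threshold values are illustrative, not clinically validated."""
    value = rmssd(rr_intervals_ms)
    clamped = max(low_hrv_ms, min(high_hrv_ms, value))
    return 1.0 - (clamped - low_hrv_ms) / (high_hrv_ms - low_hrv_ms)

def field_of_view_response(previous_stress, current_stress, threshold=0.15):
    """Coarse controller decision when the stress level changes noticeably."""
    delta = current_stress - previous_stress
    if delta > threshold:
        return "calm-scene"      # stress rose: soften the virtual environment
    if delta < -threshold:
        return "normal-scene"    # stress fell: restore the default field of view
    return "no-change"
```

In this sketch a very regular heartbeat (low variability) yields a stress estimate near 1.0, and the controller only reacts when the change in stress exceeds a dead-band threshold, avoiding constant scene switching.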
  • HRV heart rate variation
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response including a physical notification of said subject.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that moves the furniture at which or wherein said subject is seated.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that includes a change of lighting.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a sound.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a change of temperature.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that includes a gust of air.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a smell.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes projection of an image.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time wherein said detection unit is capable of detecting at least one emotional cue variable of at least two subjects.
  • the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time wherein said detection unit is capable of detecting at least one emotional cue variable of at least three subjects.
  • the invention also provides a machine-readable medium storing the software capable of performing the detecting, collecting and processing functions of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time.
  • the invention also provides a computer (or server) provided with the software of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time.
  • the invention also provides use of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time in psychotherapy.
  • the invention also provides use of a machine-readable medium storing the software capable of performing the detecting, collecting and processing functions of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time in psychotherapy.
  • the invention also provides use of a computer (or server) provided with the software of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time in psychotherapy.
  • a computer or server
  • a detection unit capable of detecting at least one emotional cue variable of a subject
  • an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data
  • an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time in psychotherapy.
  • the invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of said subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data, collecting output data from said integration unit and processing said output in an effecting unit into an actuator response capable of being detected by said subject. It is preferred that said response is provided in near-real-time or in real-time.
  • the invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of said subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data, collecting output data from said integration unit and processing said output in an effecting unit, providing output data to an actuator unit providing fields of view allowing augmented or virtual reality detection by a subject, affecting a detection of a change in motion by said subject by said actuator unit, preferably wherein said change in motion is provided in near-real-time or real-time.
  • said detection unit is capable of detecting at least one emotional cue variable selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electro-cardiography, electro-cardiography variability, photoplethysmography, photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability, body movement, body movement variability, eye movement, eye movement variability, pupil size, pupil size variability, hand movement, hand movement variability, facial expression, facial expression variability, speech, speech variability, sound and sound variability.
  • the invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement, pupil size, facial expression, and sound.
  • the invention also provides a method of doing business comprising the steps of teaching a group of at least two, preferably at least three subjects socio-communicative skills and charging a fee for said teaching, said method further comprising detecting at least one emotional cue variable of at least one subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data, collecting output data from said integration unit and processing said output in an effecting unit into an actuator response capable of being detected by said subject. It is preferred that said response is provided in near-real-time or in real-time.
  • said detection unit is capable of detecting at least one emotional cue variable selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electro-cardiography, electro-cardiography variability, photoplethysmography, photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability, body movement, body movement variability, eye movement, eye movement variability, pupil size, pupil size variability, hand movement, hand movement variability, facial expression, facial expression variability, speech, speech variability, sound and sound variability.
  • the invention also provides a method of doing business comprising the steps of teaching a group of at least two, preferably at least three subjects socio-communicative skills and charging a fee for said teaching, said method further comprising detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement, pupil size, facial expression, and sound.
  • Figure 1 One of the most common frameworks in the emotions field proposes that two main dimensions best characterize emotional cues: arousal and valence.
  • the dimension of valence ranges from highly positive to highly negative, whereas the dimension of arousal ranges from calming or soothing (low) to exciting or agitating (high).
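The two-dimensional model of figure 1 lends itself to a simple classification: given valence and arousal scores, assumed here to lie in the -100 to 100 range used elsewhere in this document, an emotional state can be assigned to one of four quadrants. The function below is an illustrative sketch; the quadrant labels are assumptions.

```python
def emotional_quadrant(valence, arousal):
    """Classify a (valence, arousal) pair, each in -100..100, into one of the
    four quadrants of the dimensional emotion model (labels are illustrative)."""
    if arousal >= 0:
        return "positive-high" if valence >= 0 else "negative-high"
    return "positive-low" if valence >= 0 else "negative-low"
```

For example, high arousal with negative valence (anger, fear) maps to "negative-high", the state several embodiments below respond to with a calming effect.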
  • Figure 2 A diagram of the architecture of a system or device according to the invention, comprising a detection unit equipped with one or more cameras, one or more sensors and one or more microphones; an integration unit for processing data from subject(s); an effector unit for providing effects and control; and an actuator unit for moving subject(s).
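The detection-integration-effecting pipeline of figure 2 can be sketched as a minimal processing loop. All class names, sample readings and effect labels below are illustrative stubs, not part of the claimed system or device.

```python
class DetectionUnit:
    """Stub detection unit; a real one would read cameras, sensors and microphones."""
    def read_cues(self):
        return {"heart_rate": 92, "valence": -40, "arousal": 55}  # illustrative sample

class IntegrationUnit:
    """Turns raw cue readings into an output record describing the emotional state."""
    def process(self, cues):
        state = "negative-high" if cues["valence"] < 0 and cues["arousal"] >= 0 else "other"
        return {"state": state, "cues": cues}

class EffectingUnit:
    """Maps the integrated state to an actuator response (light, sound, movement, ...)."""
    def respond(self, output):
        if output["state"] == "negative-high":
            return "dim-lights"
        return "no-effect"

def run_once(detector, integrator, effector):
    """One pass of the near-real-time loop sketched in figure 2."""
    return effector.respond(integrator.process(detector.read_cues()))
```

In an actual deployment this loop would run continuously, with each pass completing quickly enough for the actuator response to be perceived as near-real-time.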
  • Figure 3 A sketch of a table useful in a type 3 multiple person system or device allowing moving subjects in or out of groups depending on emotional cue detection.
  • Facial expression recognition has been a highly researched topic in the field of biometrics for decades. It has been used with the intent both of identifying specific individuals and of understanding human relations and communication. A potential case study with this type of software is in medicine and geriatric care. Patients in these situations may not always be able to communicate their state of being with a care provider. Humans have been studying themselves for a long time, and the description of facial features is no exception. The measurement, collection, and systematic analysis of facial expression has been a focus of study since the initial publication by Paul Ekman and Wallace V. Friesen in 1976, almost half a century ago. The specific method and deliberate analysis of such features are commonly known as the Facial Action Coding System (FACS), originally created by P. Ekman.
  • FACS Facial Action Coding System
  • Facial expressions are a gateway into the human mind, emotion, and identity. They are a way for us to relate with each other, share understanding, and compassion. They are also a way for us to express pain, grief, remorse, and lack of understanding. These characteristics can be crucial to understand when working with patients, especially patients who are unable to communicate in other ways. Such patients include post-stroke patients and those suffering from dementia or Alzheimer's disease.
  • iMotions A useful biometric research platform software system or device called “iMotions” is used herein, supplemented with customized software (iMotions, Copenhagen, Denmark).
  • the software can combine detection of emotional cues such as "eye tracking, facial expression analysis, EEG, GSR, EMG, ECG and Surveys"
  • the platform is generally used for various types of academic and business-related research.
  • GSR Module of iMotions is a plug & play integration with GSR devices that delivers real-time sensor output (emotional reactions) in the user interface and in synchronized raw data export.
  • iMotions also provides an open application programming interface (API) to allow integration of other sensors or detection means to forward data into the software, thereby allowing multi-modal-cue and/or multi- modal-subject detection.
  • API application programming interface
  • Multi-modal integration with other sensors like EEG, EMG, ECG.
  • GSR solution allows analysing the different emotions of various people and responding with various effects.
  • Another useful biometric research platform software system or device is provided by Noldus Information Technology bv, Wageningen, The Netherlands. FaceReader automatically analyses 6 basic facial expressions, as well as neutral and contempt. It also calculates gaze direction, head orientation, and person characteristics. The Project Analysis Module is ideal for advanced analysis and reporting: you quickly gain insight into the effects of different stimuli. Analysis of Action Units is available. Yet another useful biometric research platform software system or device called “Crowdsight” is used herein, supplemented with customized software. The CrowdSight Software Development Kit (SDK) is a flexible and easy-to-integrate crowd face analysis software package which allows gathering real-time, anonymous information about people while they behave spontaneously in different life environments. It detects emotional reactions and engagement. CrowdSight works offline as well as online on the most popular desktop and mobile platforms (Windows, Mac, Linux, iOS, Android).
  • Galvanic Skin Response is another biophysical measure, recorded by a sensor that determines human skin resistance under different psychological conditions.
  • the GSR is an older term for electrodermal activity (EDA), which is simply the electrical conductance of the skin.
  • EDA electrodermal activity
  • These sensors also detect an increase in physical attributes marking a state of being, including heart rate and sweat measurements. Sweat glands are controlled by the sympathetic nervous system. Increased sympathetic nervous system activity produces a change in the electrical resistance of the skin, which is a physiochemical response to emotional stimulation.
  • GSR is a method of measuring the electrical resistance of the skin, which varies with its moisture level. Combined with other sensors, these devices can help determine wellness and emotional responses to external stimuli. Typical emotional cues that allow detection by Affectiva Facial Expression Emotion Analysis with the iMotions Core License are listed below and can be extended.
  • Valence A measure of the positive or negative nature of the recorded person's emotional experience; the range of values for the metric is arbitrarily set from -100 to 100. Arousal A measure of the excitation nature of the recorded person's emotional experience; the range of values for the metric is arbitrarily set from -100 to 100.
  • the range of values is between 0 and 100.
  • Emotion metrics scores indicate when users express a specific emotion, along with the degree of confidence.
  • the metrics can be thought of as detectors: as the emotion occurs, the score rises from 0 (no expression) to 100 (expression fully present).
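Treating each emotion metric as a detector, a simple thresholding step can turn the 0 to 100 confidence scores into a list of detected emotions. The cutoff value below is an assumption for illustration, not a value prescribed by any of the platforms mentioned.

```python
def detected_emotions(metrics, threshold=50.0):
    """Return the emotions whose 0..100 confidence score reaches the threshold.
    The 50.0 cutoff is an illustrative assumption."""
    return sorted(name for name, score in metrics.items() if score >= threshold)
```

A downstream effecting unit could then trigger an actuator response only when, for example, "anger" or "fear" appears in the returned list.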
  • Facial Expressions Attention, Brow Furrow, Brow Raise, Inner Brow Raise, Eye Closure, Nose Wrinkle, Upper Lip Raise, Lip Suck, Lip Pucker, Lip Press, Mouth Open, Lip Corner Depressor, Chin Raise, Smirk, Smile.
  • Expression metrics, also known as Action Units (AUs) in the FACS methodology: scores indicate when users make a specific expression (e.g., a smile), along with the degree of confidence.
  • the metrics can be thought of as detectors: as the facial expression occurs and becomes more apparent, the score rises from 0 (no expression) to 100.
  • Interocular Distance Distance between the two outer eye corners.
  • Head Orientation Estimation of the head position in a 3-D space in Euler angles (pitch, yaw, roll).
  • GSR Galvanic Skin Response
  • ECG electrocardiography
  • PPG photoplethysmography
  • EEG electroencephalography
  • EMG electromyography
  • Philips HUE Light effect geared to respond to negative low state, optionally accommodating four different light effects each relating to one of the four different valency and arousal states of figure 2.
  • the type 1 computer-based hardware and software emotion-effect system or device provided herein is specifically developed for one-to-one psychotherapy sessions of a therapist with a single subject (client or patient).
  • Such subjects are typically selected from patients diagnosed with ASD, ADHD, Parkinson's disease, dementias, borderline personality disorders, bipolar disorders, schizophrenias and the like.
  • Emotional cues detectable by Facial Emotion Recognition Software using a camera directed at each subject's face, optionally provided with Galvanic Skin Resistance detection of each of the subjects in therapy.
  • Philips HUE Light effect geared to respond to negative low state, optionally accommodating four different light effects each relating to one of the four different valency and arousal states of figure 2.
  • the type 2 computer-based hardware and software emotion-effect system or device provided herein is specifically developed for couple-related psychotherapy sessions of a therapist with two subjects (clients or patients) that may benefit from learning about the emotional cues they each project. Such subjects are typically selected from clients wishing to engage in relationship therapy or couple therapy with help of a therapist.
  • a smartphone device or webcam may be used.
  • Processing determines emotional valence and arousal levels of both subjects and applies Gottman Algorithm Software, wherein the subjects learn that a negative low emotional valence/arousal state is to be met by at least 1, preferably at least 3, more preferably at least 5 positive high emotional valence/arousal state responses, based on learning from effects generated by the system or device in reaction to emotional cues displayed by each of the subjects.
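The learning rule above, in which each negative low state is to be met by several positive high responses, can be illustrated with a simple counter. The counting scheme below is a simplified sketch of the 5:1 idea, not the actual Gottman Algorithm Software.

```python
def positives_still_needed(events, ratio=5):
    """Given a chronological list of 'positive' / 'negative' state labels, return
    how many further positive responses are needed so that every negative event
    is matched by `ratio` positives (a simplified, illustrative counting scheme)."""
    negatives = events.count("negative")
    positives = events.count("positive")
    return max(0, negatives * ratio - positives)
```

The system or device could surface this count to the subjects (for example via a light effect) as feedback on how far they are from the preferred ratio.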
  • Such subjects are typically selected from clients wishing to practice relationship therapy or couple therapy in a private setting such as their home.
  • Philips HUE Light effects geared to respond to negative low state, optionally accommodating four different light effects each relating to one of the four different valence and arousal states of figure 2.
  • Alternative outputs may be generated in the type 2 Privateweb version via smartphone connection or computer/webcam connection.
  • the type 2 Privateweb computer-based hardware and software emotion-effect system or device provided herein is specifically developed for couple-related sessions without a therapist, with persons that may benefit from learning from each other about the emotional cues they each project, wherein the subjects and the system or device interact via internet communication and webcam detection.
  • Database facilities may be provided in the Cloud.
  • Multiple-person system or device (2 or more subjects - optionally at least one therapist).
  • the type 3 computer-based hardware and software emotion-effect system or device provided herein is specifically developed to be used in group sessions with or without a therapist, wherein at least two subjects (clients or patients) may benefit from learning about the emotional cues they each utter to improve their socio-communicative skills.
  • Philips HUE Light effect geared to respond to negative low state, optionally accommodating four different light effects each relating to one of the four different valence and arousal states of figure 2.
  • Lights respond to impulses from the effecting unit by going dark or changing color (group effect or personal effect).
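The four-quadrant light mapping referred to above (cf. figure 2) can be sketched schematically as follows. The effect names and the zero thresholds are illustrative assumptions only; an actual embodiment would map each quadrant to a configured Philips HUE scene.

```python
# Illustrative mapping from the four valence/arousal quadrants of figure 2
# to four distinct light effects; names and thresholds are assumptions.

def light_effect(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1] to one of four light effects."""
    if valence >= 0 and arousal >= 0:
        return "bright warm"   # positive high state
    if valence >= 0:
        return "soft warm"     # positive low state
    if arousal >= 0:
        return "bright cool"   # negative high state
    return "dim cool"          # negative low state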
  • Vibrating wristbands or other on body devices respond to impulses from the effecting unit.
  • wristbands or other devices neutralize or change sounds in the room; volume up and down.
  • the table projection shows a preferred breathing rate, for example 5 seconds in and 5 seconds out.
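The projected breathing pacer mentioned above can be sketched as a simple phase function: a value that rises during the inhale and falls during the exhale, which the table projection could render as an expanding and contracting shape. The function and parameter names are illustrative assumptions, not part of the claimed system.

```python
# Minimal sketch of a 5-seconds-in / 5-seconds-out breathing pacer.

def breathing_phase(t, inhale=5.0, exhale=5.0):
    """Return pacer amplitude in [0, 1] at time t (seconds)."""
    cycle = inhale + exhale
    t = t % cycle                       # position within the current breath
    if t < inhale:
        return t / inhale               # ramp up while breathing in
    return 1.0 - (t - inhale) / exhale  # ramp down while breathing out
```

A renderer would sample this function each frame and scale the projected shape by the returned amplitude.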
  • Virtual reality (VR) system or devices can present fields of view to a user via a display to provide the user with the perception of being in an environment other than reality.
  • a field of view presents to the user scenes from a virtual reality world.
  • Virtual reality system or devices can use an opaque background for the displayed fields of view or use a transparent background so that the field of view is overlaid on the user's view of the real world.
  • Virtual reality system or devices can also acquire a video stream of the real world and superimpose objects and people on the video stream representing the real world. These latter two schemes can be called augmented reality.
  • Examples of virtual reality system or devices include car racing simulators, flight simulators, video games and video conferencing system or devices.
  • Virtual reality system or devices can permit a user to simulate driving vehicles, flying airplanes, exploring alien worlds or being at a simulated meeting with participants from different parts of the world without any of the participants leaving home, for example.
  • the fields of view that comprise the virtual reality world can be arranged to provide the user with the perception of being in a virtual world.
  • the fields of view can change according to the simulated physical dynamics of the world being simulated. For example, in a driving or flying system or device, the fields of view will change according to the simulated motion of the vehicle or airplane. Fields of view can also be changed by the user interacting with a controller, for example. Many video games are controlled by a handheld controller that includes buttons and switches that can change the point of view of the user in the virtual world and hence the fields of view displayed.
  • the displays of some virtual reality system or devices include a virtual reality headset, for example.
  • Accelerometers can be used in a virtual reality headset to detect the location and attitude of the headset and thereby control the field of view to track the user's head motions and arrange the field of view accordingly.
  • Virtual reality system or devices can include other types of displays such as a stationary screen in front of the user not worn on a headset, multiple stationary screens surrounding the user, screens placed on lenses worn on the user's eyes, or hologram images projected around the user. None of these ways of controlling field-of-view selection can display fields of view that reflect the stress of the user. In real life, a person affected by stress is often more alert to cues in the immediate real-world environment that can indicate whether the stress was justified or not.
  • a computer-based hardware and software system or device for providing fields of view allowing augmented or virtual reality detection by a subject, comprising a detection unit capable of detecting at least one stress variable of a subject (a stress sensor), an integration unit capable of collecting input data from said detection unit and processing said input relating to said stress variable into output data, and an effecting unit (a controller) capable of collecting and processing output data from said integration unit and providing output data effecting a change in field of view.
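The three-unit arrangement described above (detection unit, integration unit, effecting unit) can be sketched purely schematically as follows. The sensor field, baseline value and effect labels are illustrative assumptions for the sketch, not limitations of the claimed device.

```python
# Schematic sketch of the detection -> integration -> effecting pipeline.

def detection_unit(sample):
    """Stress sensor: extract a stress variable (here heart rate) from a raw sample."""
    return sample["heart_rate"]

def integration_unit(stress_values, baseline=60.0):
    """Collect sensor input and reduce it to a normalized stress level >= 0."""
    mean = sum(stress_values) / len(stress_values)
    return max(0.0, (mean - baseline) / baseline)

def effecting_unit(stress_level):
    """Controller: turn the stress level into a field-of-view change."""
    if stress_level > 0.5:
        return "narrow field of view"  # high stress
    if stress_level > 0.2:
        return "dim periphery"         # moderate stress
    return "no change"
```

For example, feeding two samples with heart rates of 90 and 110 against a baseline of 60 yields a stress level of about 0.67 and therefore a "narrow field of view" output.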
  • the invention provides a virtual reality computing device or system or device wherein stress-detection by the virtual reality system or device that is worn by a subject moves or adapts the augmented or virtual reality fields of view perceived by said subject.
  • the invention provides a virtual reality computing device or system or device that tracks stress parameters such as heart rate variation or respiratory rate variations.
  • stress detection is in one embodiment provided by a stress sensor that is located in a breast band or glove or other fixative element relative to a portion of a user's skin, in another embodiment such a stress sensor may be
  • the system or device includes a controller configured to identify differences between various stress levels, such as heart rate variations or respiratory rate variations and to determine output related to changes in fields of view configured to reflect changes in stress level of the user and back feed these to this user, or to another user of the game. In this way, the augmented or virtual reality may be adjusted to the stress level(s) of its user(s).
  • the invention provides a virtual reality computing device or system or device comprising a heart rate variation (HRV) sensor with a breast band coupled to a virtual reality console, configured to capture a plurality of stress levels, and a controller configured to identify differences between some of the stress levels, the differences corresponding to differences in the overall stress state of a user of the device or system or device, and to determine a fields-of-view response based in part on the identified differences in stress level.
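One common way to quantify heart rate variation from a chest-band sensor is the RMSSD statistic over successive RR intervals, where a drop in RMSSD is commonly associated with rising stress. The sketch below uses this as an assumed stand-in for the HRV measure; the 20% threshold and the response labels are illustrative only and not specified by the claims.

```python
import math

# Hedged sketch: RMSSD over RR intervals as an HRV measure, with a drop in
# HRV (rising stress) triggering a field-of-view response.

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (milliseconds)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def fov_response(previous_rmssd, current_rmssd):
    """Respond when HRV drops markedly between two measurement windows."""
    if current_rmssd < 0.8 * previous_rmssd:   # assumed 20% drop threshold
        return "stress rising: adapt field of view"
    return "no adjustment"
```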
  • HRV heart rate variation
  • a computing device in one example can be connected to a stress-detecting device equipped to detect stress levels or parameters of the subject using the virtual reality system or device; additionally, it can include an internal configuration of hardware including a processor such as a central processing unit (CPU) and digital data storage exemplified by memory.
  • CPU can be a controller for controlling the operations of the computing device, and may be a microprocessor, digital signal processor, field-programmable gate array, or the like.
  • CPU can be connected to memory by a memory bus, wires, cables, wireless connection, or any other connection, for example.
  • Memory may be or include read-only memory (ROM), random access memory (RAM), optical storage, magnetic storage such as disk or tape, non-volatile memory cards, cloud storage or any other manner or combination of suitable digital data storage device or devices.
  • ROM read-only memory
  • RAM random access memory
  • Memory can store data and program instructions that are used by CPU. Program instructions may be altered when stress levels alter.
  • Other suitable implementations of computing device are possible. For example, the processing of computing device can be distributed among multiple devices communicating over multiple networks.
  • a virtual reality computing and stress detecting device can include a virtual reality (VR) headset, which can be worn by a user to facilitate experiencing the virtual reality system or device.
  • Virtual reality computing device can also include a computer, a mobile device, a server, or any combination thereof.
  • a VR headset can constitute a display of the virtual reality system or device, wherein the display outputs data indicative of a field of view according to the user's stress.
  • a VR headset can use video display technology to create displays or fields of view that effectively cover the user's visual field. When wearing a VR headset, a user's entire visual perceptional field can be supplied as successive fields of view by the virtual reality system or device, thereby producing the effect of viewing scenes from a virtual world.
  • a VR headset can also be equipped with accelerometers, for example, that can measure the location and attitude of the VR headset and thereby the location and attitude of the user's head.
  • all or a portion of implementations of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium.
  • a computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor.
  • the medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.
  • Stressjam is a native Virtual Reality Serious Game, provided with a stress sensor to detect, for example, heart rate variation or respiratory rate variation as measures of stress, and written for the HTC VIVE VR Headset.
  • the VIVE set-up is provided with a headset.
  • HTC Vive is a virtual reality headset developed by HTC and Valve Corporation, released on 5 April 2016. The headset is designed to utilise "room scale" technology to turn a room into 3D space via sensors, with the virtual world allowing the user to navigate naturally, with the ability to walk around and use motion tracked handheld controllers to vividly manipulate objects, interact with precision, communicate and experience immersive environments.
  • the Vive has a refresh rate of 90 Hz.
  • the device uses two screens, one per eye, each having a display resolution of 1080x1200.
  • the device uses more than 70 sensors including a MEMS (microelectromechanical systems) gyroscope, accelerometer and laser position sensors, and is said to operate in a 15-by-15-foot (4.6 by 4.6 m) tracking space if used with both "Lighthouse" base stations that track the user's movement with sub-millimetre precision.
  • the Lighthouse system or device uses simple photo sensors on any object that needs to be captured; to avoid occlusion problems this is combined with two lighthouse stations that sweep structured light lasers within a space.
  • the front-facing camera allows the software to identify any moving or static objects in a room; this functionality can be used as part of a "Chaperone" safety system or device, which will automatically display a feed from the camera to the user to safely guide users from obstacles.
  • Binaural or 3D audio can be used by app and game developers to tap into VR headsets' head-tracking technology to take advantage of this and give the wearer the sense that sound is coming from behind, to the side of them or in the distance.
  • the Zephyr™ BioPatch™ HP Monitoring Device measures and transmits live physiological data on heart rate variation through protocols like ECHO, Bluetooth or USB to the HTC VIVE VR Headset.
  • in Stressjam, Bluetooth is used to acquire the live raw data from the Zephyr™ HRV sensor and run it through smart algorithms of the computing device within the HTC VIVE VR Headset to create an additional variable that can be fed to the game.
  • the game is designed to adjust itself, based on this real-time personal data feedback regarding stress levels detected. In this way, the VR game is adjusted to the stress levels of the user. In some cases, you'll need to calm yourself to be able to play a certain part of the game but in some cases, you'll need to trigger your stress response to be able to overcome a part of the gameplay.
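The gating behavior described above, where some parts of the game require a calm state and others require an activated stress response, can be sketched as follows. The level names, modes and thresholds are illustrative assumptions and are not taken from Stressjam itself.

```python
# Illustrative sketch of stress-gated gameplay: each level requires the
# player's normalized stress level (0..1) to sit on one side of a threshold.

LEVEL_REQUIREMENTS = {
    "calm_garden": ("calm", 0.35),    # must stay below this stress level
    "volcano_run": ("aroused", 0.6),  # must push above this stress level
}

def can_progress(level, stress_level):
    """Return True if the player's current stress level meets the level's requirement."""
    mode, threshold = LEVEL_REQUIREMENTS[level]
    if mode == "calm":
        return stress_level < threshold
    return stress_level > threshold
```

A game loop would evaluate `can_progress()` against the live biofeedback variable each frame and unlock the next section only when it returns True.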
  • the game is built around training levels which will be expanded in future development. Training levels need to be completed before entering the next level, and you'll be able to collect energy points along the way.
  • the training levels are based on ground-breaking research by Harvard University and Stanford University on mindset and stress.
  • Having people practice under stress is a key training tool for athletes, emergency responders, professional musicians, artists, astronauts and others who have to deliver under stress, because of the rewiring and stress-inoculation effect this produces.
  • the Olympic skaters of a famous Dutch speedskating coach, Jillert Anema, are training themselves with a specific stress-training tool to reach for the top. Stress is no longer seen as the enemy, but as an important friend on the road to success.
  • Stressjam is an award-winning innovative health tech solution that uniquely combines virtual reality, biofeedback technology and applied games to provide a fully personalized digital coach that trains players to regulate their own stress system and to develop a new stress mindset in which stress can also be healthy.
  • the player undergoes a lifelike, interactive virtual reality experience on a tropical island. This experience is fully personalized by using a biosensor on the chest. The player has only one superpower: his/her own stress system.
  • a truly engaging game that leads to a high level of personal involvement.
  • a training tool that scores an A on usability and learnability.
  • a training tool with 'duration of game-play' as a distinctive feature is provided.
  • Another relevant direction is to study if it is possible, by playing Stressjam for a longer period of time, to stimulate the vagal afferent system to have positive effects on disorders of negative affectivity and physiological health.
  • the Stress Mindset Measure contains a scale that ranges from 0 (negative attitude) to 4 (positive attitude)
  • the Stress Mindset Measure contains a scale that ranges from 0 (negative attitude towards stress) to 4 (positive attitude towards stress)

Abstract

The invention provides emotion recognition and augmented or virtual reality devices, systems and methods useful in the field of therapy, the fields of psychological therapy and aspects of psychiatric therapy, herein commonly defined as psychotherapy. Therewith, the invention provides a novel tool by which therapists are helped to accurately and rapidly identify another person's emotional state in diagnosing and treating mental disorders, to better study, understand, respond to and empathize with a patient or client, and by which client(s) and patient(s) (herein generally called subject) are trained to understand and reflect on their behavior and communicative skills, for example to improve on these skills.

Description

Title: System or device allowing emotion recognition with actuator response induction useful in training and psychotherapy.
Field of the invention.
The invention relates to emotion recognition devices, systems and methods useful in the field of therapy and health care, the fields of psychological therapy and aspects of psychiatric therapy, herein commonly defined as psychotherapy.
Background.
The ability to accurately perceive other people's emotional states is especially important in psychotherapy and psychotherapy research (Journal of Clinical Psychology, Vol. 55(1), 39-57 (1999)). Accurately and rapidly identifying another person's emotional state is an ability that may be necessary for a psychotherapist or other mental health professional experienced in diagnosing and treating mental disorders to study, understand, respond to and empathize with a patient or client, and for clients and patients to be trained to understand and reflect on their behavior and communicative skills, for example to improve on these skills. Although it is generally agreed that perceiving another person's emotional state enhances the effectiveness of communication, the ability to timely and accurately perceive other people's emotional states and act on them with an accurate and therapeutically relevant response is especially important in psychotherapy and psychotherapy research. Therapeutic empathy requires that a psychotherapist can recognize both the quality and intensity of a patient's emotional experience, and psychotherapists must be able to identify objectively the emotions expressed in psychotherapy samples to accurately determine the role that emotional expression plays in psychotherapeutic improvement and change in many of the psychological conditions that face their clients or patients.
For example, impairment in social-communicative functioning is a defining feature of autism-spectrum disorders (ASD), exacerbated by significant problems in reading emotions in the facial and vocal expressions of others. To date, attempts to improve the emotion recognition skills of children with ASD, and thus their social functioning and symptoms, have been largely unsuccessful. Also, lack of recognition of emotional cues, apparent emotional apathy and lack of communicative skills are frequent features of attention deficit disorder (ADD or ADHD), Parkinson's disease, depression and dementia, with a prevalence ranging from 5% to 51% of patients. An even higher prevalence has been reported in patient samples including other neuropsychiatric disturbances, such as bipolar disorders or schizophrenias. Although apathy and depression frequently coexist, they can develop separately. Apathy has been consistently reported to be associated with executive dysfunction. Nevertheless, disruption of emotional-affective functional circuits also seems to be present from the early stages of the disease, and may play an additional role in the development of apathy in patients with disorders such as Parkinson's disease (PD), depression or dementia who otherwise have no apparent cognitive deficits.
Similarly, borderline personality disorder (BPD) is a serious mental illness marked by unstable moods, behavior, and relationships. Because some people with severe BPD have brief psychotic episodes, experts originally thought of this illness as an atypical, or borderline, version of other mental disorders. While mental health experts now generally agree that the name "borderline personality disorder" is misleading, a more accurate term does not exist yet. However, most people who have BPD suffer from problems with expressing and regulating emotions and thoughts that often translate into impulsive and reckless behavior and unstable relationships with other people. People with this disorder also have high rates of co-occurring disorders such as depression, anxiety disorders, substance abuse, and eating disorders, along with self-harm, suicidal behaviors, and completed suicides. No medications have been approved by the U.S. Food and Drug Administration to treat BPD, and only a few studies suggest that medications are necessary or effective for people with this illness; nevertheless, most people with BPD are treated with medications in addition to psychotherapy. Types of psychotherapy used to treat BPD include efforts to control intense emotions, which may reduce self-destructive behaviors and improve relationships. Early detection of emotional states will prevent impulsive and reckless behavior, and detection of the specific emotions that trigger those overwhelming emotional states will help patients to make progress in their therapy.
However, the overriding aim is that emotion detection occurs before an emotion has become overwhelming, while some sort of emotional steering or regulation is still possible, and often that is a daunting task for a psychotherapist.
In another setting, developing communicative skills and improving on effective communication is critical to developing successful relationships, between couples as well as among larger groups. Researchers and therapists have found at least nine skills that can help couples learn to talk effectively about important issues. How we interact about issues such as time spent together/apart, money, health, gender differences, children, family, friends, commitment, trust, and intimacy affects our ability to develop and maintain a lasting relationship for the couple involved. If learned well, these skills can help put relationships between couples on a positive trajectory for success. Surprisingly, most, if not all, couple- or group-directed therapies pay little attention to learning about emotional cues and to applying non-verbal emotional state recognition to steer communicative skills to help improve relationships.
Similarly, effective communication skills that take emotional cues into account when necessary are fundamental to success in many other aspects of life. Many jobs and non-job functions require strong within-group communication skills, and people with good communication skills paired with good emotional intelligence (EQ) usually enjoy better interpersonal and intergroup relationships with colleagues, friends and family. As affective communication is a key interpersonal skill, learning how we can improve our within-group communication and EQ by recognizing and acting on various emotional cues has many benefits.
Some psychotherapists use computer-based treatment to increase the emotion recognition skills of children with ASD. These efforts however have resulted in narrow and limited gains that have not translated to more significant improvements in skills and symptoms. A method that is directly relevant to affective computing as applied to autism is the Mindreading DVD. This comprises educational software that was designed to be an interactive, systematic guide to emotions (Baron-Cohen S., Golan O., Wheelwright S., Hill J. J. Mind Reading: the interactive guide to emotions. London, UK: Jessica Kingsley Limited; 2004). It was developed to help people with ASD learn to recognize both basic and complex emotions and mental states from video clips of facial expressions and audio recordings of vocal expressions. It covers 412 distinct emotions and mental states, which are organized developmentally and classified taxonomically to be attractive to a mind that learns through systemizing. The principle behind this was that individuals with autism may not learn to recognize emotional expressions in real time during live social situations because emotions are fleeting and do not repeat in an exact fashion, which may reduce the number of opportunities to systematically learn from repetition. Putting emotions into a computer-based learning environment enables emotions to be played and replayed repeatedly in an identical fashion, such that the learner can have control over their speed and the number of exposures they need to analyze and memorize the features of each emotion.
Mental states include thoughts and emotions, thoughts being traditionally fractionated into beliefs, desires, intentions, goals and perceptions. Emotions are traditionally fractionated into seven 'basic' emotions (joy, sadness, anger, fear, contempt, disgust and surprise) and numerous 'complex' emotions. Complex emotions involve attributing a cognitive state as well as an emotion and are more context and culture dependent. The basic emotions are held to be so because they may be universally recognized and expressed in the same way. This distinction, however, is not without its critics, since it may be that more emotions are universally recognized and expressed than these seven but have been overlooked, as research into complex emotions (usually towards developing taxonomies) has been mostly language and culture specific. For example, in the English language, there are at least 412 distinct emotions and related mental states (each with their own descriptor that is not just a synonym for another emotion) that are recognizable by independent judges within the UK. Although emotional cues like facial emotion expressions, hand and eye gestures, and so on, allowing recognition of emotional states, all seem to be important in social interaction and development of communicative skills, training these skills very much relies on reflection on behavior after the emotional cues have long gone. Also, there seems to be no specific training available that applies recognition of emotional cues into real-time responses, nor are there sufficient real-time methods available for psychotherapists to help improve or modulate clients' communicative skills that depend on emotional cue detection. Emotional intelligence (EQ), the "accurate appraisal and expression of emotions in oneself and others and the regulation of emotion in a way that enhances living", encompasses a set of interrelated skills and processes.
Because the face is the primary canvas used to express distinct emotions non-verbally (Ekman, Perspectives on Psychological Science 2016, Vol. 11(1) 31-34), the ability to read facial expressions is particularly vital, and thus a crucial component of emotional intelligence. Facial expressions are privileged relative to other nonverbal "channels" of communication, such as vocal inflections and body movements. Facial expressions appear to be the most subject to conscious control. Individuals focus more attention on projecting their own facial expressions and perceiving others' facial expressions than they do on other non-verbal channels, and often more than they focus on verbal communication.
Traditionally, most measures of patient experience in psychotherapy ignore the role of nonverbal expressions. Others fail to distinguish among the widely varied emotional qualities that characterize people's interpersonal communication. For example, the widely used Experiencing Scale has proven to be helpful in identifying the level of emotional intensity, but it does not differentiate among the types of emotions that may be represented in socio-communicative interactions. Similarly, the Strength of Feeling Scale, another measure that was designed to assess emotional experience, also omitted ratings of particular emotions. In contrast, the Affect Rating Scale defined the nature of positive emotions but is limited because of its focus on children and their parents in behavior therapy. The Emotional Arousal Scale is an exception to the rule of non-specificity. However, as originally conceptualized, it was designed to solely assess the presence of anger and its role in psychotherapy. Accordingly, only the presence and intensity of anger is rated. It however may apply equally well to verbal and non-verbal material and offers a basis for expanding the focus to include six other primary emotions and their intensities.
Considerable research has been devoted to identifying the factors that enhance and inhibit accurate emotional recognition in automated systems or devices. Much of this research has taken place in contexts other than psychotherapy, however, and frequently the dimensions identified lack relevance to this domain of communication. Following a long tradition going back to Descartes and Darwin that supports the existence of a small, fixed number of discrete (basic) emotions, Silvan Tomkins proposed in 1962 that there exist nine basic affective states (two are positive, one is neutral and six are negative), each indicated by a specific configuration of facial features. This assumption has been perpetuated by many researchers who followed (Ekman, 1972; Izard, 1971; Oatley & Johnson-Laird, 1987), with each researcher producing their own list of basic emotions that differs in the number and type of basic emotions from those on the others' lists. This disparity is, to say the least, confusing when trying to understand the characteristics of the internal representations of the various emotional states considered to be most crucial for the development of an automatic emotion recognition system or device.
Research confirms both that non-language vocalization and postural cues account for the preponderance of communication accuracy and that self-reports are frequently inaccurate indices of emotions about which one has conflict (Burgoon & Ruffner, 1978). It indicates, for example, that denied or hidden emotions often reveal themselves to be present through non-verbal behaviors, such as heart rate, galvanic skin resistance or temperature, breathing rates, etcetera, over which individuals may have relatively less control than they do over verbal reports. Emotional recognition is dependent on information received through both verbal and non-verbal channels. Individuals seem to be able to recognize emotions with a fair level of accuracy, but this accuracy may decline when information is limited to either verbal or non-verbal channels of communication, such as facial expressions and vocal cues alone or when different channels provide contradictory information. Not only does this suggest the possibility of teaching psychotherapists to attend to non-verbal behaviors, but also it calls into question the procedure of using either written transcripts of psychotherapy or patient self-reports as measures of emotional states in psychotherapy research.
Training psychotherapists to recognize and respond to patient emotions has focused mainly on the accuracy of emotional recognition and of empathic responding, which may be increased by teaching therapists and counsellors to attend to non-verbalized information. Although such specific and focused training has proven to increase the accuracy with which therapists can respond to patients' emotional states, its relevance to conventional training of psychotherapists is uncertain. Moreover, much of the research on this topic has been confined to analogue patients and therapy sessions, calling into question the justification of generalizations to clinical material. Even when research on emotional recognition does include professionals who are conventionally trained and experienced it fails to compare their accuracy to individuals who are inexperienced and untrained. But even if the effects of training were adequately addressed, the question of generalization would not be solved. Typically, cues used to convey emotional states in such training are provided by actors who present pre-set verbal and non-verbal messages and are not real-client based. Paradoxically, this methodology has a built-in bias against recognizing authentic emotional expressions; a fatal one if it is indeed true that deception is conveyed by subtle non-verbal cues. Such actor-based practices may yield results that do not represent the authentic display of conflicted emotions in naturalistic settings and psychotherapy practice. Several studies (e.g., Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979) suggest that clinicians are more sensitive to non-verbal communication cues than teachers and business executives but, surprisingly, are somewhat less accurate than graduate students and actors. Indeed, a comparison of M.A. candidates, Ph.D. students, Ph.D. candidates, and clinical psychology faculty revealed a negative relationship between academic achievement and indicators of emotional sensitivity. 
It is possible that insensitivity to emotional cues among formally trained individuals accounts for the low relationship between clinical experience and training, on the one hand, and clinical effectiveness, on the other. This may be compounded by the fact that a lack of sensitivity to one's own emotions is detrimental to understanding the variations in both the accuracy of identifying others' emotions and clinical effectiveness.
Automated emotion recognition is the process of identifying human emotion, most typically from facial expressions, by computer-assisted means. For this, many computational methodologies have been developed (Neural Networks 18 (2005) 389-405). Building an automatic emotion recognition system or device based on knowledge of emotions, such as that stemming from the modern neurosciences, is now well possible.
Indeed, recent technological advances have allowed us to probe the human brain and particularly the physico-emotional circuitry that is involved in recognizing emotions, which is yielding a more detailed understanding of the function and structure of emotion recognition in the brain. At the same time, technological advances have significantly improved the signal processing techniques applied to the analysis of the physical correlates of emotions (such as facial, vocal and physiological features), thus allowing efficient multimodal emotion recognition interfaces to be built. These systems are useful in so-called smart environments (SE) that evolve from ubiquitous computing, following the vision of "a physical world that is richly and invisibly interwoven with sensors, actuators, displays, and computational elements, embedded seamlessly in the everyday objects of our lives, and connected through a continuous network". SE are composed of several heterogeneous sensors placed throughout the environment, thus providing great amounts of data. Scalable and flexible platforms integrate such devices and provide applications with the necessary interfaces to interoperate with the information coming from the available resources.
Therefore, current architectures have to adapt to the number of devices dynamically available in SE and to the high heterogeneity of data. An important feature expected from a backbone connecting an SE is reliable communication to ensure lossless and low-delay data transmission through the network. Another aspect is monitoring the status of the smart devices, handling failure cases and reallocating resources to maintain overall system performance.
The invention
The invention provides methods and means, a computer-based hardware and software system or device, in particular for use in a health environment, a so-called smart health environment. A health environment (or health facility) is, in general, any location where healthcare is provided. The term usually includes hospitals, clinics, outpatient care centres, and specialized care centres, such as birthing and psychological or psychiatric care centres. The home of a person suffering from some kind of disease should also be considered a health environment.
The invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response controlling an actuator unit, herein also called the actuator response. By using the emotional cue variables, the system sends the selected commands to the actuators that control the actuator response. A change in a perception of motion is then performed by an actuator for driving the subject to that perception. Actuators are tools used to put something into automatic action; they are set in motion by a wide variety of sources, from humans putting something into action to computers starting up a program. The actuator unit provides a movement or change in movement capable of being detected (perceived or noticed) in reality, augmented reality or virtual reality by said subject. The invention provides emotion recognition devices, systems and methods useful in the field of therapy and health care, and in the fields of psychological therapy and aspects of psychiatric therapy, herein commonly defined as psychotherapy. Therewith, the invention provides a novel tool by which therapists are helped to accurately and rapidly identify another person's emotional state in diagnosing and treating mental disorders, to better study, understand, respond to and empathize with a patient or client, and by which clients and patients (herein generally called subjects) can be trained in, understand and reflect on their behavior and communicative skills, for example to improve on these skills.
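Purely by way of illustration, the detection-integration-effecting pipeline described above can be sketched as follows; all class names, cue thresholds and the resulting actuator command are hypothetical assumptions for illustration, not part of the disclosed system or device:

```python
# Illustrative sketch (not the patented implementation) of the three-unit
# pipeline: detection unit -> integration unit -> effecting unit -> actuator.

class DetectionUnit:
    """Samples one or more emotional cue variables of the subject."""
    def read_cues(self):
        # A real system would poll sensors (electrode, camera, microphone);
        # here a fixed sample is returned for illustration.
        return {"heart_rate": 96, "skin_conductance": 6.2}

class IntegrationUnit:
    """Processes raw cue input data into output data (an emotion estimate)."""
    def process(self, cues):
        # Toy rule: elevated heart rate and skin conductance -> high arousal.
        high = cues["heart_rate"] > 90 and cues["skin_conductance"] > 5.0
        return {"arousal": "high" if high else "low"}

class EffectingUnit:
    """Turns the integration output into an actuator command."""
    def respond(self, output):
        # Toy policy: high arousal moves the subject's chair backwards.
        return "move_chair_back" if output["arousal"] == "high" else "hold_position"

detection, integration, effecting = DetectionUnit(), IntegrationUnit(), EffectingUnit()
command = effecting.respond(integration.process(detection.read_cues()))
```

Such a pipeline keeps the units loosely coupled, so that sensors, emotion models and actuators can be swapped independently.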
As with many of the mental conditions described herein above, the development or presence of lasting relationships, built on good communicative skills and appropriate recognition of emotional cues, is a strong predictor of psychological health; helping psychotherapists and their clients learn to react in a timely manner to the emotional cues that configure communication is therefore a major object of the invention. The applications of a system or device as provided herein that can detect and assess a human emotional state of a subject (participant, client, patient) and then directly reflect that emotional state by responding with a special effect and actuator movement that is noticed or noticeable by the same subject (and optionally by others in the same environment with him or her) are numerous. One of the uses of such a system or device is to enhance human judgment of emotion in situations where objectivity and accuracy are required. Lie detection is an obvious example of such situations. Another example is clinical studies and therapy of schizophrenia, and particularly the diagnosis of flattened affect, which so far relies on the psychiatrist's subjective judgment of a subject's emotionality based on various physiological clues. An automatic emotion-sensitive special-effect-response actuator system or device as provided herein helps augment these judgments, minimizing the dependence of the diagnostic procedure on an individual psychiatrist's perception of emotionality. More generally along those lines, automatic emotion detection, classification and responding with an effect can be used in a wide range of psychological and neurophysiological studies of human emotional expression that so far rely on subjects' self-reports of their emotional state, which often prove problematic.
Typically, subjects diagnosed with ASD, ADHD, Parkinson's disease, dementias, borderline personality disorders, bipolar disorders and the like may benefit from the invention; but also couples that are engaged in relationship therapy, subjects that need to handle or be trained in handling difficult or conflictual discussions, and, more in general, subjects that would benefit from training their socio-communicative skills, may all benefit from the invention.
In a first embodiment, the invention provides a computer-based hardware and software system or device (see also figure 2) comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response controlling an actuator unit. When the control signal is received, the actuator unit provides a movement or change in movement capable of being detected by said subject. In one embodiment of the invention the actuator responds by converting energy into mechanical motion or movement or change in movement capable of being detected by said subject. In another embodiment of the invention the actuator responds by generating a field of view projected in a virtual or augmented reality device projecting a movement or change in movement capable of being detected by said subject. Such detection by said subject is greatly facilitated by several of the special effects that are attributed to emotional cue detection with a system or device of the invention. These special effects may be generated on the basis of distinct algorithms in the software that reflect known psychotherapy strategies such as those provided by Gottman or another psychotherapist known in the field. Also, the system or device may be equipped with self-learning algorithms whereby apparently successful responses are integrated into the software memory.
By automatically notifying the subject (client or patient) with one or more special, moving, effects based on or related to the occurrence or manifestation of an emotional cue of said subject detected by the system or device provided here, the subject will learn that such cues occur and may put the occurrence of emotional cues in a rather harmless perspective of an artificial detect-effect relationship. The therapist can now rely on an automated system or device that helps him or her in recognizing emotional cues and therewith can stay focused on other aspects of the therapeutic or training process. In a preferred embodiment, the system or device according to the invention is provided with an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response that controls an actuator unit such as a movable subject support structure (carrier) propelling or affecting a change in motion of said subject supported by said structure or carrier. In this way, emotional cues of a subject or subjects measured are translated into physical motion of (parts of) said subject(s). In a preferred embodiment, said support structure comprises a movable base adapted to ride over a substructure, for example a support surface, rails or track. In a preferred embodiment, the invention provides a system or device provided with an electric actuator for moving the support structure, such as an electric cylinder, an electric motor or a stepper motor, or any other electric drive suitable for moving the support structure.
In another preferred embodiment, the system or device according to the invention is provided with an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response that controls an actuator unit such as an augmented or virtual reality device capable of propelling or affecting a change in the detection of motion of said subject. In this alternative way, emotional cues of a subject or subjects measured are translated into a perceived change of motion of (parts of) said subject(s).
The system or device here provided generates emotion-to-(perceived)motion effects that provide profound learning experiences to the subject(s) participating with the system or device. In one embodiment, said effecting unit provides (under guidance of the emotional cue or cues detected) a system or device response by controlling an actuator that moves (or stops moving) the furniture at which or wherein said subject is seated. In another embodiment, said movement may be directed at moving the subject(s), preferably by moving the furniture wherein or whereon the subject is seated, away from one or more other subjects, preferably subjects that are also participating with the system or device. In yet another embodiment, said movement may be directed at moving the subject(s) towards one or more other subjects participating with the system or device. It is herein also provided that an actuator is used with which the intensity or speed of moving may be changed, that moving is upwardly or downwardly directed, or that the furniture provides a vibrating or shaking sensation of which the frequency is changed by an actuator response under guidance of the emotional cue or cues detected by a system or device according to the invention. An actuator is the mechanism by which a control system or device acts upon an environment. The control system or device can be simple (a fixed mechanical or electronic system or device), software-based (e.g. a printer driver, robot control system or device), a human, or any other input. The support structure preferably comprises a movable base adapted to ride over a substructure, for example a support surface, rails or track. In a preferred embodiment, the invention provides a system or device provided with an electric actuator for moving the support structure, such as an electric cylinder, an electric motor or a stepper motor, or any other electric drive suitable for moving the support structure or subject carrier.
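As a hedged sketch of such emotion-to-motion responses, the mapping from a valence/arousal estimate to a motion direction and vibration frequency might look as follows; the function name, the [0, 1] scales and the frequency band are illustrative assumptions only, not values taken from the disclosure:

```python
# Illustrative emotion-to-motion mapping: positive valence draws the
# subject's carrier toward the other participant, negative valence moves
# it away, and the vibration frequency scales with arousal.

def actuator_command(valence, arousal):
    """Map a valence/arousal estimate (each assumed in [0, 1]) to a
    motion direction and a vibration frequency for the furniture actuator."""
    direction = "toward_partner" if valence >= 0.5 else "away_from_partner"
    # Scale vibration frequency within an assumed safe 1-5 Hz band.
    vibration_hz = round(1.0 + 4.0 * arousal, 1)
    return {"direction": direction, "vibration_hz": vibration_hz}
```

A real effecting unit would additionally rate-limit such commands and enforce mechanical safety bounds before driving the carrier.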
The moving system or device preferably comprises an electric actuator such as an electric motor for propelling or moving the subject carrier, and a power supply to power the electric actuator. The power supply may also comprise an electrical storage element to store electrical energy. The effecting unit is arranged to control the power supply, optionally such as to operate the power supply to charge the electrical storage element from a power source; and primarily to operate the power supply to power the actuator from the electrical energy stored in the electrical storage element, to thereby propel the subject carrier. As a further example, a linear motor may be provided to accelerate the subject carrier to a certain speed, the carrier thereby e.g. being unable to travel a remainder of the trajectory of the device on its own. Many other configurations are possible: the actuator may for example be comprised in the carrier and be provided with electrical power via sliding contacts. The invention also provides a computer-based hardware and software system or device according to the invention having a subject carrier and a propelling system or device for propelling the subject carrier, the propelling system or device comprising an electric actuator to propel the subject carrier, a power supply to power the electric actuator, the power supply comprising an electrical storage element to store electrical energy, and a control unit which is arranged to control operation of the power supply, the control unit being arranged to operate the power supply to charge the electrical storage element from a power source; and to operate the power supply to power the electric actuator from the electrical energy stored in the electrical storage element, to thereby propel the subject carrier.
The subject carrier may optionally be provided with one or more seating parts, the term seating part used herein is understood to mean that part of the carrier which can accommodate one person or several persons in a sitting, standing or recumbent position.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device, the system or device optionally comprising a database that may be cloud-based. In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional valence variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional valence of the cue detected.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional arousal variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional arousal of the cue detected.
Preferably, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, in particular capable of detecting emotional valence variables and emotional arousal variables of said cue, and further comprising an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, preferably wherein said response depends on the emotional valence and arousal of the cue detected. In this way, at least four different outputs may be generated that respectively relate to high valence/high arousal, low valence/low arousal, high valence/low arousal, and low valence/high arousal (see also figure 1).
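A minimal sketch of this four-quadrant classification, assuming valence and arousal are each normalized to [0, 1] with a midpoint threshold (both assumptions for illustration, not part of the disclosure):

```python
# Classify a detected cue into one of the four valence/arousal quadrants,
# each of which may be given its own actuator response.

def emotion_quadrant(valence, arousal, threshold=0.5):
    """Return the valence/arousal quadrant for one emotional cue sample."""
    v = "high_valence" if valence >= threshold else "low_valence"
    a = "high_arousal" if arousal >= threshold else "low_arousal"
    return (v, a)
```

The effecting unit could then hold a lookup table from quadrant to actuator response, which keeps the detection model and the response policy separable.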
In this way, the invention provides a set of computerized devices (a system or device) helping the therapist assess emotional states of humans and improves his or her ability to modulate emotional states of the subject in therapy. In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time.
In another preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in real-time. The invention provides real-time or near real time devices, system or devices and methods to detect an emotion and provide a response to modulate behavior. In computer science, real-time computing (RTC), or reactive computing describes hardware and software system or devices subject to a "real-time constraint" or "near-real-time constraint", for example from detection of event to system or device response. Near-real-time and real-time programs must both guarantee response within specified time constraints, often referred to as "deadlines". The distinction between "near real-time" and "real-time" varies, and the delay is dependent on the type and speed of the transmission.
The delay in near real-time as provided by the invention herein is typically of the order of several seconds to several minutes. Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds; near-real-time responses often incorporate a deliberate lag phase, preferably of up to 20 seconds, more preferably up to 10 seconds, more preferably up to 5 seconds, most preferably up to 3 seconds. This lag phase is selected in line with research on humans and animals showing that somewhere between 2 and 5 seconds is the longest window of what we can perceive as an independent event - a moment. Anything longer, or separated by a longer window, is perceived by us as a separate occurrence - close in time, but distinct. When we know something is coming, our autonomic nervous system prepares us about 3 seconds ahead of time, which makes sense because that is about how long our vagal nervous system takes to alter our heart rate and breathing.
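The deliberate lag phase can be sketched as a simple scheduling rule; the function name and the clamping of the lag to the preferred 20-second upper bound are illustrative assumptions, not the disclosed implementation:

```python
# Schedule the actuator response at a deliberate lag after cue detection,
# so that the subject still perceives cue and response as one "moment"
# (default ~3 s, never beyond an assumed 20 s upper bound).

def schedule_response(event_time, lag_seconds=3.0, max_lag=20.0):
    """Return the time (same units as event_time, e.g. seconds) at which
    the actuator response to an emotional cue event should fire."""
    lag = min(max(lag_seconds, 0.0), max_lag)
    return event_time + lag
```

In practice the lag would be tuned per subject and per therapy protocol rather than fixed at a single default.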
A near-real-time system or device as herein described is one which controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the subject at that time. The term "near-real-time" herein is also used to mean "without significant delay". The term "near real-time" or "nearly real-time" refers to the time delay introduced, by automated data processing or network transmission, between the occurrence of an event and the use of the processed data, such as for display or feedback and control purposes. For example, a near-real-time display depicts an event or situation as it existed at the current time minus the processing time, i.e. nearly at the time of the live event. Both terms "near real time" and "real time" imply that there are no significant delays. In many cases, processing described as "real-time" would be more accurately described as "near real-time". Near real-time also refers to delayed real-time transmission of voice and video. It allows playing or projecting video images in approximately real time, without having to wait for an entire large video file to download. Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis, so that they can sync/share common data in "near real-time" with each other.
The devices, systems and methods of the invention as provided herein are useful during psychotherapy sessions, relationship therapy sessions and during socio-communicative interactions and discussions wherein an emotional cue given off by a subject is automatically responded to with a special effect detectable by said subject, preferably executed in near-real-time, more preferably in real time. In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electro-cardiography, electro-cardiography variability, photoplethysmography, photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability, body movement, body movement variability, eye movement, eye movement variability, pupil size, pupil size variability, hand movement, hand movement variability, facial expression, facial expression variability, speech, speech variability, sound and sound variability.
In a further preferred embodiment the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement,
pupil size, facial expression, and sound.
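As an illustration of how three such cue variables might be combined for the integration unit, the following sketch normalizes heart rate, galvanic skin response and pupil size into a feature vector; the normalization ranges are assumed typical spans for illustration, not values taken from the disclosure:

```python
# Assemble a multimodal feature vector from three of the cue variables
# named above, each normalized into [0, 1] for downstream classification.

def feature_vector(heart_rate_bpm, gsr_microsiemens, pupil_mm):
    """Normalize three emotional cue variables into [0, 1] features.
    Ranges are assumed resting-to-aroused spans, chosen for illustration."""
    def clamp01(x):
        return max(0.0, min(1.0, x))
    return [
        clamp01((heart_rate_bpm - 50.0) / 100.0),  # assumed ~50-150 bpm span
        clamp01(gsr_microsiemens / 20.0),          # assumed ~0-20 uS span
        clamp01((pupil_mm - 2.0) / 6.0),           # assumed ~2-8 mm span
    ]
```

Fusing cues after per-channel normalization lets a single integration model weigh heterogeneous sensors on a common scale.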
In one embodiment of the invention, said emotional cue or cues are preferably detected by a detection unit comprising an electrode, in another embodiment of the invention, said detection unit comprises an optical sensor, in yet another embodiment, said detection unit comprises a camera, in yet another said detection unit comprises a microphone. In a further preferred embodiment of the invention, the detection unit comprises an electrode and a camera and/or an optical sensor and/or a microphone. In a particularly preferred embodiment, the detection unit comprises an electrode and a camera and a microphone.
In another embodiment, the invention provides a computer-based hardware and software system or device with a detection unit, an integration unit and an effecting unit wherein said integration unit is provided with software capable of providing a measure of the emotional experience of said subject. It is preferred that said emotional experience is classifiable as any one selected from the group of joy, anger, surprise, fear, sadness, disgust or contempt. In yet a further embodiment, the invention provides a system or device wherein said integration unit is provided with software capable of providing a measure of the facial expression of said subject. It is preferred that said facial expression is classifiable as any one selected from the group of attention, brow furrow, brow raise, inner brow raise, eye closure, nose crinkle, upper lip raise, lip suck, lip pucker, lip press, mouth open, lip corner depressor, chin raise, smirk or smile.
In a further preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, wherein said subject is also provided with a system or device for augmented or virtual reality detection. In an alternative embodiment, stress detection by a virtual reality system or device worn by a subject moves or adapts the augmented or virtual reality perceived by said subject. Virtual reality (VR) systems or devices can present fields of view to a user via a display to provide the user with the perception of being in an environment other than reality. A field of view presents to the user scenes from a virtual reality world. Virtual reality systems or devices can use an opaque background for the displayed fields of view or use a transparent background so that the field of view is overlaid on the user's view of the real world. Virtual reality systems or devices can also acquire a video stream of the real world and superimpose objects and people on the video stream representing the real world. These latter two schemes can be called augmented reality. Examples of augmented virtual reality systems or devices providing a perception of motion include car racing simulators, flight simulators, video games and video conferencing systems or devices. Virtual reality systems or devices can permit a user to simulate driving vehicles, flying airplanes, exploring alien worlds or being at a simulated meeting with participants from different parts of the world without any of the participants leaving home, for example.
The fields of view that comprise the virtual reality world can be arranged to provide the user with the perception of being in a virtual world. The fields of view can change according to the simulated physical dynamics of the world being simulated. For example, in a driving or flying system or device, the fields of view will change according to the simulated motion of the vehicle or airplane. Fields of view can also be changed by the user interacting with a controller, for example. Many video games are controlled by a handheld controller that includes buttons and switches that can change the point of view of the user in the virtual world and hence the fields of view displayed. The display of some virtual reality systems or devices includes a virtual reality headset, for example.
Accelerometers can be used in a virtual reality headset to detect the location and attitude of the headset and thereby control the field of view to track the user's head motions and arrange the field of view accordingly. Virtual reality systems or devices can include other types of displays such as a stationary screen in front of the user not worn on a headset, multiple stationary screens surrounding the user, screens placed on lenses worn on the user's eyes, or hologram images projected around the user. None of these ways to control the field of view selection can display fields of view to the user that reflect the stress of the user. In real life, if a person is affected by stress, the person is often more alert to cues in the immediate real-world environment that can indicate whether the stress was justified or not. The invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject, wherein said subject is also provided with a system or device for providing fields of view allowing augmented or virtual reality detection by a subject (a virtual reality system or device) comprising a detection unit capable of detecting at least one stress variable of a subject (a stress sensor), an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, and an effecting unit capable of collecting and processing output data from said integration unit and providing output data affecting a change in field of view (a controller).
In a preferred embodiment, said subject is also provided with a virtual reality computing device or system or device wherein emotion detection, such as stress detection, by the virtual reality system or device worn by a subject moves or adapts the augmented or virtual reality fields of view perceived by said subject. The invention provides a virtual reality computing device or system or device that tracks emotion or stress parameters such as heart rate variations or respiratory rate variations. Such stress detection is in one embodiment provided by a stress sensor that is located in a breast band or glove or other element fixed relative to a portion of a user's skin; in another embodiment such a stress sensor may be incorporated into the VR headset relative to a portion of a user's skin, or both. The system or device includes a controller configured to identify differences between various stress levels, such as heart rate variations or respiratory rate variations, and to determine output related to changes in fields of view configured to reflect changes in the stress level of the user, and to feed these back to this user, or to (an)other user(s) of the game. In this way, the augmented or virtual reality may be adjusted to the stress level(s) of its user(s). In a preferred embodiment, the invention provides a virtual reality computing device or system or device comprising a heart rate variation (HRV) sensor with a breast band coupled to a virtual reality console, configured to capture a plurality of stress levels, and a controller configured to identify differences between some of the stress levels, the differences corresponding to differences in the overall stress state of a user of the device or system or device, and to determine a fields-of-view response based in part on the identified differences in stress level.
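One conventional way to derive a stress index from heart rate variation is the RMSSD statistic over successive R-R intervals; the sketch below, including the stress threshold and the mapping to a field-of-view angle, is an illustrative assumption and not the disclosed controller:

```python
# Illustrative HRV-driven controller: low HRV (low RMSSD) is treated as
# high stress and narrows the rendered field of view; calm users keep the
# wide view. Thresholds and angles are assumed values for illustration.

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def field_of_view_degrees(rr_intervals_ms, calm_fov=110.0,
                          stressed_fov=70.0, threshold_ms=30.0):
    """Narrow the VR field of view when HRV falls below a stress threshold."""
    return calm_fov if rmssd(rr_intervals_ms) >= threshold_ms else stressed_fov
```

A production controller would smooth the index over a sliding window and ramp the field of view gradually, since abrupt changes can themselves induce discomfort in VR.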
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response including a physical notification of said subject.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that moves the furniture at which or wherein said subject is seated.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that includes a change of lighting.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a sound.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a change of temperature.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides an actuator response that includes a gust of air.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes a smell.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, wherein said effecting unit provides a system or device response that includes projection of an image. In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time wherein said detection unit is capable of detecting at least one emotional cue variable of at least two subjects.
In a preferred embodiment, the invention provides a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into a system or device response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time wherein said detection unit is capable of detecting at least one emotional cue variable of at least three subjects.
The invention also provides a machine-readable medium storing the software that is capable of performing the detecting, collecting and processing functions of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time.
The invention also provides a computer (or server) provided with the software of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time.
The invention also provides use of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time.
The invention also provides use of a machine-readable medium storing the software that is capable of performing the detecting, collecting and processing functions of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, in psychotherapy.
The invention also provides use of a computer (or server) provided with the software of a computer-based hardware and software system or device comprising a detection unit capable of detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time in psychotherapy.
The invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of said subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data in an integration unit, collecting output data from said integration unit and processing said output in an effecting unit into an actuator response capable of being detected by said subject. It is preferred that said response is provided in near-real-time or in real-time. The invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of said subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data in an integration unit, collecting output data from said integration unit and processing said output in an effecting unit, providing output data to an actuator unit providing fields of view allowing augmented or virtual reality detection by a subject, and affecting a detection of a change in motion by said subject by said actuator unit, preferably wherein said change in motion is provided in near-real-time or real-time.
It is furthermore preferred that said detection unit is capable of detecting at least one emotional cue variable selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electro-cardiography, electro-cardiography variability, photoplethysmography, photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability, body movement, body movement variability, eye movement, eye movement variability, pupil size, pupil size variability, hand movement, hand movement variability, facial expression, facial expression variability, speech, speech variability, sound and sound variability. In a further preferred embodiment the invention also provides a method for providing a subject with psychotherapy comprising detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement, pupil size, facial expression, and sound.
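As an illustration of how a detection unit might combine several of the cue variables listed above into the valence and arousal dimensions of figure 1, the following Python sketch normalizes heart rate and galvanic skin response against assumed resting baselines for arousal, and derives valence from facial-expression detector scores. All baselines, scale factors and names are illustrative assumptions, not part of the claimed method.

```python
def arousal_from_cues(heart_rate_bpm, skin_conductance_us,
                      rest_hr=60.0, rest_sc=2.0):
    """Fuse two physiological cue variables into a rough 0..1 arousal score,
    scaling each against an assumed resting baseline before averaging."""
    hr_part = max(0.0, min(1.0, (heart_rate_bpm - rest_hr) / 60.0))
    sc_part = max(0.0, min(1.0, (skin_conductance_us - rest_sc) / 8.0))
    return (hr_part + sc_part) / 2.0

def valence_from_expression(smile_score, frown_score):
    """Valence in -1..1 from two facial-expression detector scores (0..100)."""
    return (smile_score - frown_score) / 100.0
```

A production integration unit would fuse many more cue variables, with per-subject calibration instead of fixed baselines.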
The invention also provides a method of doing business comprising the steps of teaching a group of at least two, preferably at least three subjects socio-communicative skills and charging a fee for said teaching, said method further comprising detecting at least one emotional cue variable of at least one subject with a detection unit, collecting input data from said detection unit and processing said input relating to said cue variable into output data, collecting output data from said integration unit and processing said output in an effecting unit into an actuator response capable of being detected by said subject. It is preferred that said response is provided in near-real-time or in real-time. It is furthermore preferred said detection unit is capable of detecting at least one emotional cue variable selected from the group of heart rate, heart rate variability, galvanic skin response, galvanic skin response variability, breathing rate, breathing rate variability, core temperature, skin temperature, skin temperature variability, electro-myography, electro-myography variability, electro-encephalography, electro-encephalography variability, electrocardiography, electro-cardiography variability, photoplethysmography,
photoplethysmography variability, goose bumps, goose bumps variability, posture, posture variability, body movement, body movement variability, eye movement, eye movement variability, pupil size, pupil size variability, hand movement, hand movement variability, facial expression, facial expression variability, speech, speech variability, sound and sound variability.
In a further preferred embodiment the invention also provides a method of doing business comprising the steps of teaching a group of at least two, preferably at least three subjects socio-communicative skills and charging a fee for said teaching, said method further comprising detecting at least one emotional cue variable of a subject, an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data, an effecting unit capable of collecting output data from said integration unit and processing said output into an actuator response capable of being detected by said subject wherein said response is provided in near-real-time or provided in real-time, and wherein said detection unit is capable of detecting at least one, preferably at least two, more preferably at least three emotional cue variables selected from the group of heart rate, galvanic skin response, electro-cardiography, photoplethysmography, posture, eye movement, pupil size, facial expression, and sound.
Legend with figures
Figure 1 One of the most common frameworks in the emotions field proposes that two main dimensions best characterize emotional cues: arousal and valence. The dimension of valence ranges from highly positive to highly negative, whereas the dimension of arousal ranges from calming or soothing (low) to exciting or agitating (high).
Figure 2 A diagram of the architecture of a system or device according to the invention, equipped with one or more cameras, one or more sensors and one or more microphones: a detection unit
equipped for detection of emotional cues of subject(s), an integration unit for processing data from subject(s), an effecting unit for providing effects and control, and an actuator unit for moving subject(s).
Figure 3 A sketch of a table useful in a type 3 multiple-person system or device, allowing moving subjects in or out of groups depending on emotional cue detection.
Figure 4. Course of Stress Mindset Measures among psychiatric patients A preliminary analysis of the Stress Mindset Measures over time suggests that scores of psychiatric patients become more positive after repeated sessions. In contrast to healthy individuals, psychiatric patients might need more time to achieve a 'healthy' mindset towards stress.
Figure 5. SUS Scores Grading Form
Detailed specification with examples
Table 1 Detailed specification of specific emotional cue detection
(Table 1 is reproduced as an image in the original publication and is not rendered here; among the detection methods it lists are wavelet-based analyses.)
Facial expression recognition has been a highly researched topic in the field of biometrics for decades. It has been used both to identify specific individuals and to understand human relations and communication. A potential case study for this type of software is in medicine and geriatric care, where patients may not always be able to communicate their state of being to a care provider. Humans have been studying themselves for a long time, and the description of facial features is no exception. The measurement, collection, and systematic analysis of facial expression has been a focus of study since the initial publication by Paul Ekman and Wallace V. Friesen in 1976, almost half a century ago. The specific method and deliberate analysis of such features is commonly known as the Facial Action Coding System (FACS), originally created by P. Ekman. Many studies have been performed regarding the collection of facial recognition metrics, and this space is well explored: topics range from understanding the effect of input quality, for example varying lighting conditions, to determining which facial expressions represent reflexes to external stimuli as opposed to true human emotion. Facial expressions are a gateway into the human mind, emotion, and identity. They are a way for us to relate to each other and to share understanding and compassion. They are also a way for us to express pain, sorrow, remorse, and lack of understanding. These characteristics can be crucial to understand when working with patients, especially patients who are unable to communicate in other ways, including post-stroke patients and those suffering from dementia or Alzheimer's disease. For these patients, it can be helpful to identify a state of being, or an emotional response to external stimuli, including pain, through their facial expressions.
A useful biometric research platform software system or device called "iMotions" is used herein, supplemented with customized software (iMotions, Copenhagen, Denmark). The software can combine detection of emotional cues such as eye tracking, facial expression analysis, EEG, GSR, EMG, ECG and surveys. The platform is generally used for various types of academic and business-related research. The GSR Module of iMotions is a plug-and-play integration with GSR devices that delivers real-time sensor output (emotional reactions) in the user interface and in synchronized raw data export. iMotions also provides an open application programming interface (API) to allow integration of other sensors or detection means to forward data into the software, thereby allowing multi-modal-cue and/or multi-modal-subject detection. Multi-modal integration with other sensors such as EEG, EMG and ECG, combined with the GSR solution, allows analysing the different emotions of various people and responding with various effects.
Another useful biometric research platform software system or device (FaceReader) is provided by Noldus Information Technology bv, Wageningen, The Netherlands. FaceReader automatically analyses six basic facial expressions, as well as neutral and contempt. It also calculates gaze direction, head orientation, and person characteristics. The Project Analysis Module is ideal for advanced analysis and reporting: one quickly gains insight into the effects of different stimuli. Analysis of Action Units is available. Yet another useful biometric research platform software system or device called "CrowdSight" is used herein, supplemented with customized software. The CrowdSight Software Development Kit (SDK) is a flexible and easy-to-integrate crowd face analysis software which allows gathering of real-time, anonymous information about people while they behave spontaneously in different life environments. It detects emotional reactions and engagement. CrowdSight works offline as well as online on the most popular desktop and mobile platforms (Windows, Mac, Linux, iOS, Android).
Galvanic Skin Response (GSR) is another biophysical sensor modality, which determines human skin resistance under different psychological conditions. GSR is an older term for electrodermal activity (EDA), which is simply the electrical conductance of the skin. These sensors also detect an increase in physical attributes marking a state of being, including heart rate and sweat measurements. Sweat glands are controlled by the sympathetic nervous system; a change in the electrical resistance of the skin is a physiological response to emotional stimulation that increases with sympathetic nervous system activity. GSR is a method of measuring the electrical resistance of the skin, which varies with its moisture level. Together with other sensors, these devices can help determine wellness and emotional responses to external stimuli. Typical emotional cues that allow detection by Affectiva Facial Expression Emotion Analysis with the iMotions Core License are listed below; the list can be extended.
Valence: A measure of the positive or negative nature of the recorded person's emotional experience. The range of values for the metric is arbitrarily set between -100 and 100. Arousal: A measure of the excitation nature of the recorded person's emotional experience. The range of values for the metric is arbitrarily set between -100 and 100.
Engagement: A measure of facial muscle activation that illustrates the subject's
expressiveness. The range of values is between 0 and 100.
7 Basic Emotions: Joy, Anger, Surprise, Fear, Sadness, Disgust, Contempt. Emotion metrics scores indicate when users express a specific emotion, along with the degree of confidence. The metrics can be thought of as detectors: as the emotion occurs, the score rises from 0 (no expression) to 100 (expression fully present).
15 Facial Expressions (Action Units): Attention, Brow Furrow, Brow Raise, Inner Brow Raise, Eye Closure, Nose Wrinkle, Upper Lip Raise, Lip Suck, Lip Pucker, Lip Press, Mouth Open, Lip Corner Depressor, Chin Raise, Smirk, Smile. Expression metrics, also known as Action Units (AUs) in the FACS methodology, scores indicate when users make a specific expression (e.g., a smile) along with the degree of confidence. The metrics can be thought of as detectors: as the facial expression occurs and becomes more apparent, the score rises from 0 (no expression) to 100.
Interocular Distance: Distance between the two outer eye corners.
Head Orientation: Estimation of the head position in a 3-D space in Euler angles (pitch, yaw, roll).
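The detector-style metrics described above (scores rising from 0 to 100 together with a degree of confidence) can be consumed as in the following sketch, which picks the dominant basic emotion for a video frame if its score clears a confidence threshold. The threshold value and the sample scores are illustrative assumptions.

```python
def dominant_emotion(scores, threshold=50.0):
    """Return the highest-scoring basic emotion for a frame if its detector
    score clears the confidence threshold, else None. Scores follow the
    0 (absent) .. 100 (fully present) convention described above."""
    name, value = max(scores.items(), key=lambda kv: kv[1])
    return name if value >= threshold else None

# Example frame: scores for the seven basic emotions (illustrative values)
frame_scores = {"Joy": 72.0, "Anger": 3.0, "Surprise": 14.0, "Fear": 1.0,
                "Sadness": 2.0, "Disgust": 0.5, "Contempt": 4.0}
```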
One of the most sensitive markers for emotional arousal is Galvanic Skin Response (GSR). With GSR, the arousal impact of any emotionally packed content can be tested.
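A minimal sketch of GSR processing, under the assumption that raw skin resistance is converted to conductance (microsiemens) and that a sharp rise in conductance marks an arousal event; the rise threshold is an illustrative assumption, and real skin-conductance-response detection involves more careful filtering.

```python
def skin_conductance_microsiemens(resistance_ohms):
    """EDA is usually reported as conductance in microsiemens,
    the reciprocal of the measured skin resistance."""
    return 1e6 / resistance_ohms

def arousal_events(conductance_series, rise_threshold=0.05):
    """Indices where conductance jumps sharply versus the previous sample,
    a simple stand-in for skin-conductance-response detection."""
    return [i for i in range(1, len(conductance_series))
            if conductance_series[i] - conductance_series[i - 1] > rise_threshold]
```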
ECG (electrocardiography) sensors are commonly electrodes that measure the bio-potential generated by electrical signals that control the expansion and contraction of heart chambers. PPG (photoplethysmography) sensors use a light-based technology to sense the rate of blood flow as controlled by the heart's pumping action. EEG (electroencephalography) sensors are commonly electrodes that measure the bio-potential generated by electrical signals that relate to cognitive workload of the brain. EMG (electromyography) sensors are commonly electrodes that measure the bio-potential generated by electrical signals that relate to muscle activity.
Specifications Type 1
One-to-two-person (one subject - if desired with a therapist involved) system or device - Input: Emotional cues detectable by a Facial Emotion Recognition Software using Camera directed at subject's face, optionally provided with Galvanic Skin Resistance detection of the subject in therapy. - Processing: determining emotional valence and arousal levels and applying Gottman Algorithm Software wherein a negative low emotional valence/arousal state is learned to be met by responding with at least 1, preferably at least 3, preferably at least 5 positive high emotional valence/arousal state responses, based on learning by effects generated by the system or device in reaction to emotional cues displayed by the subject.
- Output: Philips HUE Light effect, geared to respond to a negative low state, optionally accommodating four different light effects, each relating to one of the four different valence and arousal states of figure 1. The type 1 computer-based hardware and software emotion-effect system or device provided herein is specifically developed for one-to-one psychotherapy sessions of a therapist with one subject (client or patient) that may benefit from learning about the emotional cues they project. Such subjects are typically selected from patients diagnosed with ASD, ADHD, Parkinson's disease, dementias, borderline personality disorders, bipolar disorders, schizophrenias and the like.
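The Type 1 processing rule above (meeting a negative low valence/arousal state with at least five positive high-state responses) might be encoded as in the following sketch. The quadrant classification, the light-scene names and the hard-coded response count are illustrative assumptions, not the actual Gottman Algorithm Software.

```python
def quadrant(valence, arousal):
    """Classify a (valence, arousal) reading, each in -100..100, into one
    of the four states of the valence/arousal framework (figure 1)."""
    return ("positive" if valence >= 0 else "negative",
            "high" if arousal >= 0 else "low")

LIGHT_EFFECTS = {  # hypothetical HUE scene names for the four states
    ("positive", "high"): "warm_bright",
    ("positive", "low"): "soft_warm",
    ("negative", "high"): "cool_dim",
    ("negative", "low"): "slow_blue_pulse",
}

def plan_responses(valence, arousal, positive_responses=5):
    """On a negative low state, schedule the required number of positive
    high-state effects; otherwise mirror the current state once."""
    state = quadrant(valence, arousal)
    if state == ("negative", "low"):
        return [LIGHT_EFFECTS[("positive", "high")]] * positive_responses
    return [LIGHT_EFFECTS[state]]
```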
Specifications Type 2
Three-person system or device (2 subjects -therapist)
- Input: Emotional cues detectable by a Facial Emotion Recognition Software using Camera directed at each subject's faces, optionally provided with Galvanic Skin Resistance detection of each of the subjects in therapy.
- Processing: determining emotional valence and arousal levels of both subjects and applying Gottman Algorithm Software wherein a negative low emotional valence/arousal state is learned to be met by responding with at least 1, preferably at least 3, preferably at least 5 positive high emotional valence/arousal state responses, based on learning by effects generated by the system or device in reaction to emotional cues displayed by each of the subjects.
- Output: Philips HUE Light effect, geared to respond to a negative low state, optionally accommodating four different light effects, each relating to one of the four different valence and arousal states of figure 1. The type 2 computer-based hardware and software emotion-effect system or device provided herein is specifically developed for couple-related psychotherapy sessions of a therapist with two subjects (clients or patients) that may benefit from learning about the emotional cues they each project. Such subjects are typically selected from clients wishing to engage in relationship therapy or couple therapy with the help of a therapist.
Specifications Type 2Private or type 2Privateweb
Two-person system or device (2 subjects)
- Input: Emotional cues detectable by a Facial Emotion Recognition Software using Camera directed at each subject's faces, optionally provided with Galvanic Skin Resistance detection of each of the subjects in session. For Camera, a smartphone device or webcam may be used.
- Processing: determining emotional valence and arousal levels of both subjects and applying Gottman Algorithm Software wherein a negative low emotional valence/arousal state is learned to be met by responding with at least 1, preferably at least 3, preferably at least 5 positive high emotional valence/arousal state responses, based on learning by effects generated by the system or device in reaction to emotional cues displayed by each of the subjects. Such subjects are typically selected from clients wishing to practice relationship therapy or couple therapy in a private setting such as their home.
- Output: Philips HUE Light effects, geared to respond to a negative low state, optionally accommodating four different light effects, each relating to one of the four different valence and arousal states of figure 1. Alternative outputs may be generated in the type 2Privateweb version via smartphone connection or computer/webcam connection.
The type 2Privateweb computer-based hardware and software emotion-effect system or device provided herein is specifically developed for couple-related sessions without a therapist, with persons that may benefit from learning from each other about the emotional cues they each project, wherein the subjects and the system or device interact via internet communication and webcam detection. Database facilities may be provided in the Cloud.
Specifications Type 3
Multiple-person system or device (2 or more subjects - optionally at least one therapist).
The type 3 computer-based hardware and software emotion-effect system or device provided herein is specifically developed to be used in group sessions, with or without a therapist, wherein at least two subjects (clients or patients) may benefit from learning about the emotional cues they each project, to improve their socio-communicative skills.
- Input: Facial Emotion Recognition Software using Camera and Stress detection with Zephyr BH3 heart rate variability detection optionally provided with Galvanic Skin Resistance detection of each of the subjects.
- Processing: determining emotional valence and arousal levels of each subject and applying Gottman Algorithm Software wherein a negative low emotional valence/arousal state is learned to be met by responding with at least 1, preferably at least 3, preferably at least 5 positive high emotional valence/arousal state responses, based on learning by effects generated by the system or device in reaction to emotional cues displayed by each of the subjects.
- Output: Philips HUE Light effect, geared to respond to a negative low state, optionally accommodating four different light effects, each relating to one of the four different valence and arousal states of figure 1.
- Alternative output: Round table inclusion and/or exclusion effects (see also figure 3).
A) When a person projects a negative emotional cue, which is detected by the system or device, his/her chair will move backwards (or rotate) in response to impulses from the effecting unit, so that he/she is distanced from (or facing away from) the meeting table. In this case, the person who does something 'wrong' must change his or her way of communication to move or rotate the chair back to the table and back into the group.
B) This can be reversed by 'punishing' the whole group for the action of an individual: one person does something 'wrong', and in response all other people at the table are moved backwards from the table instead of the single person.
In example A the frustration will be with the single person. In example B the frustration will be with the other persons at the table.
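The round-table inclusion/exclusion logic of examples A and B can be sketched as follows, assuming each subject's current cue has already been classified by the detection and integration units. The subject names, mode labels and displacement distance are illustrative assumptions.

```python
def chair_moves(cue_by_subject, mode="A", distance_cm=30):
    """Decide which chairs to move back from the round table.
    mode 'A': move the subject(s) who projected a negative cue.
    mode 'B': move everyone else instead (group 'punishment')."""
    negative = {s for s, cue in cue_by_subject.items() if cue == "negative"}
    targets = negative if mode == "A" else set(cue_by_subject) - negative
    return {s: distance_cm for s in targets}
```

The returned mapping would be handed to the actuator unit, which drives the motorized chairs by the given distance.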
Extension modules to Types 1, 2 and 3 generating special effects under control of the effecting unit.
# Moving furniture
Tables and/or chairs respond to impulses from the effecting unit.
# Changing lights
Lights respond to impulses from the effecting unit by going dark or changing color (group effect or personal effect).
# Physical changes
Vibrating wristbands or other on-body devices respond to impulses from the effecting unit.
# Sound
Sounds change in response to impulses from the effecting unit: sounds from wristbands or other devices; neutralizing or changing sounds in the room; volume up and down.
# Temperature
Change the temperature in the room in response to impulses from the effecting unit.
# Smell
Add a scent to the room in response to impulses from the effecting unit.
# Projection on the table
Project heartbeat, colour or imagery on the table, for the entire group or on a personal slice of the table, in response to impulses from the effecting unit. For intervening in the inner state by HRV breathing, the table projection shows a preferred breathing rate, for example 5 seconds in and 5 seconds out.
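The paced-breathing projection (for example 5 seconds in, 5 seconds out) amounts to a simple phase function over elapsed time; the function name and return convention below are assumptions for illustration:

```python
# Illustrative sketch of the paced-breathing projection: given elapsed
# time, compute which phase of an inhale/exhale cycle the table
# projection should currently display.

def breathing_phase(t_seconds: float, inhale_s: float = 5.0,
                    exhale_s: float = 5.0) -> tuple[str, float]:
    """Return the current phase ('inhale' or 'exhale') and the
    fraction [0, 1) completed within that phase."""
    cycle = inhale_s + exhale_s
    t = t_seconds % cycle
    if t < inhale_s:
        return "inhale", t / inhale_s
    return "exhale", (t - inhale_s) / exhale_s

# e.g. at t = 7 s of a 10 s cycle we are 40% into the exhale
print(breathing_phase(7.0))
```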
# Feedback on "horses of the apocalypse"
Four negative, toxic interaction styles trigger specific personalised feedback: criticism, contempt, defensiveness and stonewalling. In couple therapy, detecting these cues and providing a fitting effect is useful.
# Feel sensation
Alternative embodiment wherein stress-detection by virtual reality system or device worn by a subject moves or adapts the augmented or virtual reality perceived by said subject.
Virtual reality (VR) system or devices can present fields of view to a user via a display to provide the user with the perception of being in an environment other than reality. A field of view presents to the user scenes from a virtual reality world. Virtual reality system or devices can use an opaque background for the displayed fields of view or use a transparent background so that the field of view is overlaid on the user's view of the real world. Virtual reality system or devices can also acquire a video stream of the real world and superimpose objects and people on the video stream representing the real world. These latter two schemes can be called augmented reality. Examples of virtual reality system or devices include car racing simulators, flight simulators, video games and video conferencing system or devices. Virtual reality system or devices can permit a user to simulate driving vehicles, flying airplanes, exploring alien worlds or being at a simulated meeting with participants from different parts of the world without any of the participants leaving home, for example. The fields of view that comprise the virtual reality world can be arranged to provide the user with the perception of being in a virtual world. The fields of view can change according to the simulated physical dynamics of the world being simulated. For example, in a driving or flying system or device, the fields of view will change according to the simulated motion of the vehicle or airplane. Fields of view can also be changed by the user interacting with a controller, for example. Many video games are controlled by a handheld controller that includes buttons and switches that can change the point of view of the user in the virtual world and hence the fields of view displayed. The display of some virtual reality system or devices includes a virtual reality headset, for example.
Accelerometers can be used in a virtual reality headset to detect the location and attitude of the headset and thereby control the field of view to track the user's head motions and arrange the field of view accordingly. Virtual reality system or devices can include other types of displays such as a stationary screen in front of the user not worn on a headset, multiple stationary screens surrounding the user, screens placed on lenses worn on the user's eyes, or hologram images projected around the user. None of these ways of controlling field-of-view selection, however, can display fields of view that reflect the stress of the user. In real life, a person affected by stress is often more alert to cues in the immediate real-world environment that can indicate whether the stress was justified. The invention therefore provides a computer-based hardware and software system or device for providing fields of view allowing augmented or virtual reality detection by a subject (a virtual reality system or device), comprising a detection unit capable of detecting at least one stress variable of a subject (a stress sensor), an integration unit capable of collecting input data from said detection unit and processing said input relating to said stress variable into output data, and an effecting unit capable of collecting and processing output data from said integration unit and providing output data affecting a change in field of view (a controller). The invention provides a virtual reality computing device or system or device wherein stress detection by the virtual reality system or device that is worn by a subject moves or adapts the augmented or virtual reality fields of view perceived by said subject. The invention provides a virtual reality computing device or system or device that tracks stress parameters such as heart rate variations or respiratory rate variations.
Such stress detection is in one embodiment provided by a stress sensor that is located in a breast band or glove or other element fixed relative to a portion of a user's skin; in another embodiment such a stress sensor may be incorporated into the VR headset relative to a portion of a user's skin, or both. The system or device includes a controller configured to identify differences between various stress levels, such as heart rate variations or respiratory rate variations, and to determine output related to changes in fields of view configured to reflect changes in the stress level of the user, and to feed these back to this user or to another user of the game. In this way, the augmented or virtual reality may be adjusted to the stress level(s) of its user(s). In a preferred embodiment, the invention provides a virtual reality computing device or system or device comprising a heart rate variation (HRV) sensor with a breast band coupled to a virtual reality console, configured to capture a plurality of stress levels, and a controller configured to identify differences between some of the stress levels, the differences corresponding to differences in the overall stress state of a user of the device or system or device, and to determine a fields-of-view response based in part on the identified differences in stress level.
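A minimal sketch of the controller idea follows, comparing the latest stress reading against a rolling baseline and emitting a bounded field-of-view adjustment. The class name, window length, clamping bounds and the HRV-to-view mapping are assumptions for illustration, not details from the invention (here lower HRV is read as higher stress):

```python
# Illustrative sketch: track recent HRV samples, compare the newest
# sample against the rolling baseline, and return a bounded scale factor
# that a renderer could apply to the field of view.

from collections import deque

class StressFovController:
    def __init__(self, window: int = 30):
        self.history = deque(maxlen=window)  # recent HRV samples (ms)

    def update(self, hrv_ms: float) -> float:
        """Return a field-of-view scale factor in [0.8, 1.2]:
        below-baseline HRV (more stress) narrows the view,
        above-baseline HRV widens it."""
        self.history.append(hrv_ms)
        baseline = sum(self.history) / len(self.history)
        delta = (hrv_ms - baseline) / baseline if baseline else 0.0
        return max(0.8, min(1.2, 1.0 + delta))
```

The clamping keeps the adjustment gentle; an actual system could instead swap scenes, change lighting or alter game events in response to the same signal.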
Below is a description of a virtual reality system or device in which aspects of the invention can be implemented. A computing device, in one example, can be connected to a stress detecting device equipped to detect stress levels or parameters of the subject using the virtual reality system or device; additionally, it can include an internal configuration of hardware including a processor such as a central processing unit (CPU) and digital data storage exemplified by memory. The CPU can be a controller for controlling the operations of the computing device, and may be a microprocessor, digital signal processor, field
programmable gate array, discrete circuit elements laid out in a custom application specific integrated circuit (ASIC), or any other digital data processor, for example. CPU can be connected to memory by a memory bus, wires, cables, wireless connection, or any other connection, for example. Memory may be or include read-only memory (ROM), random access memory (RAM), optical storage, magnetic storage such as disk or tape, non-volatile memory cards, cloud storage or any other manner or combination of suitable digital data storage device or devices. Memory can store data and program instructions that are used by CPU. Program instructions may be altered when stress levels alter. Other suitable implementations of computing device are possible. For example, the processing of computing device can be distributed among multiple devices communicating over multiple networks.
A virtual reality computing and stress detecting device as provided can include a virtual reality (VR) headset, which can be worn by a user to facilitate experiencing the virtual reality system or device. The virtual reality computing device can also include a computer, a mobile device, a server, or any combination thereof. A VR headset can constitute a display of the virtual reality system or device, wherein the display outputs data indicative of a field of view according to the user's stress. A VR headset can use video display technology to create displays or fields of view that effectively cover the user's visual field. When wearing a VR headset, a user's entire visual perceptual field can be supplied as successive fields of view by the virtual reality system or device, thereby producing the effect of viewing scenes from a virtual world. In addition to display capabilities, a VR headset can also be equipped with accelerometers, for example, that can measure the location and attitude of the VR headset and thereby the location and attitude of the user's head. Further, all or a portion of implementations of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available. The above-described implementations have been described to allow easy understanding of the present invention and do not limit the present invention.
On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation to encompass all such modifications and equivalent structure as is permitted under the law.
Example: Stressjam
Stressjam is a native Virtual Reality Serious Game, provided with a stress sensor to detect for example heart rate variation or respiratory rate variation as measures of stress, and written for the HTC VIVE VR Headset. The VIVE set-up is provided with a headset. HTC Vive is a virtual reality headset developed by HTC and Valve Corporation, released on 5 April 2016. The headset is designed to utilise "room scale" technology to turn a room into 3D space via sensors, with the virtual world allowing the user to navigate naturally, with the ability to walk around and use motion-tracked handheld controllers to vividly manipulate objects, interact with precision, communicate and experience immersive environments.
The Vive has a refresh rate of 90 Hz. The device uses two screens, one per eye, each having a display resolution of 1080x1200. The device uses more than 70 sensors, including a MEMS (microelectromechanical systems) gyroscope, accelerometer and laser position sensors, and is said to operate in a 15-by-15-foot (4.6 by 4.6 m) tracking space if used with both "Lighthouse" base stations, which track the user's movement with sub-millimetre precision. The Lighthouse system uses simple photo sensors on any object that needs to be captured; to avoid occlusion problems this is combined with two lighthouse stations that sweep structured-light lasers within a space.
The front-facing camera allows the software to identify any moving or static objects in a room; this functionality can be used as part of a "Chaperone" safety system, which will automatically display a feed from the camera to the user to safely guide users away from obstacles.
Controllers
Wireless controllers in each hand, combined with precise SteamVR™ Tracking, mean you can freely explore and interact with virtual objects, characters and environments. VIVE's controllers are designed specifically for VR, with intuitive controls and realistic haptic feedback.
Sound
Headphones can be used to increase the sense of immersion. Binaural or 3D audio can be used by app and game developers to tap into VR headsets' head-tracking technology to take advantage of this and give the wearer the sense that sound is coming from behind, to the side of them or in the distance.
Stressjam is developed, from the ground up, as a native VR game which utilises all the available VR techniques to create an experience for users of the game which takes them into another world. When you enter this virtual world, you'll be experiencing things through visuals, sounds and incentives. You'll be able to move through the landscape by using the controllers, with a 360-degree all-around view.
While the experience with these factors in place is great, we've added another vital layer to be able to create unique possibilities in VR Healthcare. When you're playing StressJam, we'll be monitoring and processing your stress levels via detection of heart rate variation by using Zephyr™ BioPatch™ HP Monitoring Device.
The Zephyr™ BioPatch™ HP Monitoring Device measures and transmits live physiological data on heart rate variation through protocols like ECHO, Bluetooth or USB to the HTC VIVE VR Headset. In Stressjam we're using Bluetooth to acquire the live raw data from the Zephyr™ HRV sensors and run it through smart algorithms of the computing device within the HTC VIVE VR Headset to create an additional variable that we can feed to the game. The game is designed to adjust itself based on this real-time personal data feedback regarding the stress levels detected. In this way, the VR game is adjusted to the stress levels of the user. In some cases, you'll need to calm yourself to be able to play a certain part of the game, but in other cases, you'll need to trigger your stress response to be able to overcome a part of the gameplay.
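As a hedged sketch (this is not Zephyr's actual API; it assumes the sensor delivers raw RR intervals in milliseconds), such a stream can be condensed into a single game variable using RMSSD, a standard time-domain HRV measure in which lower values are commonly read as higher stress:

```python
# Illustrative sketch: condense a stream of RR intervals (ms) from an
# HRV sensor into one stress variable for the game via RMSSD, the root
# mean square of successive RR-interval differences.

import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# a steady heart (small successive differences) yields a low RMSSD
print(rmssd([800, 810, 790, 805]))
```

The game would poll this value periodically and compare it against the player's own baseline, since absolute HRV varies strongly between individuals.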
The game is built around training levels which will be expanded in future development. Training levels need to be completed before entering the next level, and you'll be able to collect energy points along the way. The training levels are based on ground-breaking research by Harvard University and Stanford University on mindset and stress.
One study tracked 30,000 adults in the United States for eight years, and they started by asking people, "How much stress have you experienced in the last year?" They also asked, "Do you believe that stress is harmful for your health?" And then they used public death records to find out who died.
People who experienced a lot of stress in the previous year had a 43 percent increased risk of dying. But that was only true for the people who also believed that stress is harmful for your health.
People who experienced a lot of stress but did not view stress as harmful were no more likely to die. In fact, they had the lowest risk of dying of anyone in the study, including people who had relatively little stress.
The researchers estimated that over the eight years they were tracking deaths, 182,000 Americans died prematurely, not from stress, but from the belief that stress is bad for you. Can changing how you think about stress make you healthier? And here the science says yes. When you change your mind about stress, you can change your body's response to stress. In certain stressful situations, your heart might be pounding, you might be breathing faster, maybe breaking out into a sweat. And normally, we interpret these physical changes as anxiety or signs that we aren't coping very well with the pressure.
But what if you viewed them instead as signs that your body was energized, was preparing you to meet this challenge? Now that is exactly what participants were told in a study conducted at Harvard University. Before they went through a social stress test, they were taught to rethink their stress response as helpful. That pounding heart is preparing you for action. If you're breathing faster, it's no problem. It's getting more oxygen to your brain. And participants who learned to view the stress response as helpful for their performance, well, they were less stressed out, less anxious, more confident, but the most fascinating finding was how their physical stress response changed. In a typical stress response, your heart rate goes up, and your blood vessels constrict. And this is one of the reasons that chronic stress is sometimes associated with cardiovascular disease. It's not healthy to be in this state all the time. But in the study, when participants viewed their stress response as helpful, their blood vessels stayed relaxed. Their heart was still pounding, but this is a much healthier cardiovascular profile. It looks a lot like what happens in moments of joy and courage. Over a lifetime of stressful experiences, this one biological change could be the difference between a stress-induced heart attack at age 50 and living well into your 90s. And this is really what the new science of stress reveals, that how you think about stress matters.
This research is at the core of Stressjam gameplay, and by playing Stressjam regularly you'll train your mindset to respond differently, and more positively, to stressful experiences.
Example: Application of Stressjam
Work-related stress is the health epidemic of the 21st century. According to the American Institute of Stress job stress is far and away the major source of stress for American adults.
Stress is also one of the complaints that predicts depression and generates high costs worldwide. Depression is the third leading cause of disease burden globally, and the WHO predicts that by 2020 depression will be the second leading contributor to the global burden of diseases across all ages.
It is long believed that stress itself is debilitating and there is an enormous amount of research that shows that stress can have negative consequences. Crum and Lyddy (2014) describe that:
Stress has been tied to many emotional, behavioral, cognitive and physical impairments and diseases. Stress is specifically linked to six leading causes of death: heart disease, accidents, cancer, liver disease, lung ailments, suicide (Sapolsky, 1996; Schneiderman, Ironson, & Siegel 2005); absenteeism from work, increased medical expenses, and loss of productivity
(Atkinson, 2004; Schneiderman et al., 2005); cognitive impairment, depression, and other mental illness (e.g., Hammen, 2005; McEwen & Seeman, 1999; Schwabe & Wolf, 2010;
Wang, 2005); and aggression and relational conflict (e.g., Bodenmann, Meuwly, Bradbury, Gmelch, & Ledermann, 2010) (p. 949).
However, a range of recent studies (Crum et al., 2013, 2014, 2017; Akinola, Fridman, Mor, Morris, & Crum, 2016; Park et al., 2017) suggests that one's overall mindset about stress is more important to health than stress itself. Stress mindset is the overarching belief that stress is either enhancing or debilitating. Research has shown that a stress-is-enhancing mindset results in a healthier response to stress (Crum et al., 2013). The emerging science shows us that this attitude towards stress can be changed by training.
One of the main tricks to change one's attitude towards stress is to learn to look at stress as an indication of something meaningful happening. One of the main conclusions of a large-scale study in 2013 was that 'people with meaningful lives worry more and have more stress than people with less meaningful lives' (Baumeister, Vohs, Aaker, & Garbinsky, 2013). For those who live healthy lives despite high stress levels, stress is a barometer for how engaged they are in activities that are personally meaningful, rather than a sign that something is wrong.
Rewiring the Brain
The ability to learn from stress is a deep biological response. For several hours after a stress response the brain is rewiring itself to remember and learn from the experience. Stress leaves an imprint on the brain in order to prepare us for the same sort of situation somewhere in the future. If you are stressed, alert, engaged, the brain releases the neurochemicals that enable neuroplasticity (Merzenich, 2014). And our stress response does not have to be good or tuned to become better at that specific situation, but our brain needs to be in the mood for it (Merzenich, 2014). Everyone who started to ski at an older age knows that simply trying is enough to become better at a specific skill. But this is not only true for a skill like skiing; it is also true for adequate bodily reactions like hormone responses or responses of the immune system. Scientists call this process stress inoculation: going through a stressful experience gives our brain and body a stress vaccine (McGonigal, 2015; Kashani et al., 2015). The brain learns to deal with stressful events successfully and with a minimum of upset.
Blooming under Stress
Having people practice under stress is a key training tool for athletes, emergency responders, professional musicians, artists, astronauts and others who have to deliver under stress. And that is because of that rewiring and stress inoculation effect. The Olympic skaters of a famous Dutch speed-skating coach, Jillert Anema, train themselves with a specific stress-training tool to reach for the top. Stress is no longer seen as the enemy, but as an important friend on the road to success.
Stressjam
Because of the huge impact of the mindset towards stress, we developed Stressjam: an award-winning, innovative health tech solution that uniquely combines virtual reality, biofeedback technology and applied games to provide a fully personalized digital coach that trains players to regulate their own stress system and to develop a new stress mindset in which stress can also be healthy.
How Stressjam Works
The player undergoes a lifelike, interactive virtual reality experience on a tropical island. This experience is fully personalized by using a biosensor on the chest. In the game, the player has only one superpower: his/her own stress system.
There is a Stressjam game coach to support and guide the player in the journey.
Nevertheless, you can play on your own.
There are many fun and challenging tasks on the island:
E.g. the player will have to displace a glass ball; the ball will only stick to the handles if the player has the adequate stress level, otherwise it will fall in the sand. In other moments flies will attack the player, and the insects can only be calmed if the player relaxes. Another mission is to climb some vines and cross a ravine. There are red (stress) and green (relax) vines, so the player can pick the most appropriate one depending on his/her stress level.
Therefore, the player has to explore, find and apply effective mechanisms in his own body to generate stress or be calm. As in any game, Stressjam has different levels and the player gets only a few lives to achieve the goals.
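The task logic described above can be sketched as a target stress band per task; the task names and bands below are invented for illustration and are not taken from the actual game:

```python
# Illustrative sketch: each task defines a target stress band, and the
# player's live stress level (normalised 0..1 from the biosensor)
# decides whether the task succeeds.

TASKS = {
    "glass_ball": (0.0, 0.4),       # ball only sticks when calm enough
    "attacking_flies": (0.0, 0.3),  # insects calm only if the player relaxes
    "red_vine": (0.6, 1.0),         # stress vine: needs an activated state
    "green_vine": (0.0, 0.4),       # relax vine: needs a calm state
}

def task_succeeds(task: str, stress_level: float) -> bool:
    """Succeed when the player's stress level lies within the task band."""
    low, high = TASKS[task]
    return low <= stress_level <= high

print(task_succeeds("glass_ball", 0.25))  # True: calm enough, ball sticks
print(task_succeeds("red_vine", 0.25))    # False: not activated enough
```

In this way some tasks reward calming down while others reward deliberately triggering the stress response, which is the mechanic the mindset training builds on.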
Aim of the Pilot Study
Unlike most of the emotional, behavioral and mental health programs available to treat stress, we embrace the evidence-based theory that stress can be enhancing. Instead of diminishing stressful experiences, we train Stressjam players to learn to use their stress system to be healthier, happier and more productive. We tailor Stressjam by applying biofeedback, to provide each user a meaningful, personalized, predictive and precise just-in-time program. The game is suitable for various target groups and situations, because research shows that only a minority of people make use of the benefits of stress. A pilot study was conducted.
Methods and Measurements
A pre-test-post-test design was used. Employees and patients (patients with common mental health disorders) filled out questionnaires assessing personal involvement (PII), usability and learnability (SUS), and stress mindset (SMM). Additional questions about Stressjam feasibility were asked.
The study was conducted with two groups of participants: 1. employees (N=55) and 2. psychiatric patients (N=27).
The 55 employees played Stressjam for 40 minutes. They were tested on all 3 questionnaires: once on the PII and the SUS (post-test), and twice on the SMM (pre-test and post-test) (N=55).
The 27 psychiatric patients played Stressjam for as long as they wanted, and they were measured on the SMM alone, pre-test and after every single 40-minute session (N=27).
Results
Stress Mindset Measure (SMM)
Both groups improved significantly on the SMM after playing Stressjam (see Appendix Table 3, Table 4 and Figure 4). For the group of healthy employees, one Stressjam session of 40 minutes already led to a slight (effect size .349) but significant improvement on the SMM (see Appendix Table 3).
For the group of psychiatric patients, SMM scores at the end of a Stressjam training (an average of 3 hours and 43 minutes of game-play) were significantly higher than at the start of the training.
The group of psychiatric patients played themselves towards the same SMM score as the score reached in the first session of the employees group: from 1.56 at the start to 2.09 at the end of the training. They played themselves to a 'normal', healthier mindset towards stress (effect size 1.073) (see Appendix Table 4).
A preliminary analysis of the repeatedly measured SMM scores indicates that the duration of game-play seems to be a distinctive feature: the more time people play, the more positive their mindset becomes.
Personal Involvement Inventory (PII)
The group we tested was group 1, the 55 employees without stress problems. We found an overall score of 58; a score above 40 indicates a high level of personal involvement. We found very high scores on the parameters 'fascinating', 'interesting', 'valuable' and 'appealing', but the scores on all the other parameters were high as well.
Conclusion: Stressjam is an applied game that triggers high levels of personal involvement, both in a cognitive and an emotional sense!
System Usability Scale (SUS)
We found an overall score of 81.7 on the SUS (Group 1). This means that the tested employees were enthusiastic about the usability and learnability of Stressjam. We tested Stressjam in a blended setup, with a technical coach who puts the VR experience to work. We didn't test Stressjam as a self-help game.
Conclusion
There is a strong indication that Stressjam proves to be:
- A truly engaging game that leads to a high level of personal involvement.
- A training tool that scores an A on usability and learnability.
- Helpful in familiarizing people with a 'stress is enhancing' mindset in a way that changes their attitude towards stress.
- A training tool with 'duration of game-play' as a distinctive feature.
- A tool that helps a broad range of people.
Future Directions:
Future research should focus on the (long-term) effects of Stressjam on players, specifically patients with emotional, behavioral and mental health problems, and should explore the applicability of Stressjam in different situations with different subjects, especially people with low socioeconomic status.
Another relevant direction is to study whether it is possible, by playing Stressjam for a longer period of time, to stimulate the vagal afferent system so as to have positive effects on disorders of negative affectivity and on physiological health.
References
Akinola, M., Fridman, I., Mor, S., Morris, M. W., & Crum, A. J. (2016). Adaptive Appraisals of Anxiety Moderate the Association between Cortisol Reactivity and Performance in Salary Negotiations. PLoS ONE, 11(12): e0167977.
Baumeister, R. F., Vohs, K. D., Aaker, J. L., & Garbinsky, E. N. (2013). Some key differences between a happy life and a meaningful life. The Journal of Positive Psychology, 8(6), 505-516. doi: 10.1080/17439760.2013.830764
Crum, A. J., Salovey, P., & Achor, S. (2013). Rethinking stress: The role of mindsets in determining the stress response. Journal of Personality and Social Psychology, 104(4), 716-733. doi: 10.1037/a0031201
Crum, A., & Lyddy, C. (2014). Destressing stress: The power of mindsets and the art of stressing mindfully. In A. N. Ie, C. T. Ngnoumen, & E. J. Langer (Eds.), The Wiley Blackwell handbook of mindfulness (1st ed., pp. 948-963). Hoboken, NJ: John Wiley & Sons.
Crum, A., Akinola, M., Martin, A., & Fath, S. (2017). The Role of Stress Mindset in Shaping Cognitive, Emotional, & Physiological Responses to Challenging & Threatening Stress. Anxiety, Stress, & Coping, 30(4), 379-395. doi: 10.1080/10615806.2016.127558-
Kashani, F., Kashani, P., Moghimian, M., & Shakour, M. (2015). Effect of Stress Inoculation Training on the Levels of Stress, Anxiety, and Depression in Cancer Patients. Iranian Journal of Nursing and Midwifery Research, 20(3), 359-364.
Merzenich, M. M., Van Vleet, T. M., & Nahum, M. (2014). Brain Plasticity-based Therapeutics. Frontiers in Human Neuroscience, 8:385. doi: 10.3389/fnhum.2014.00385
Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7(1), 105-125. https://doi.org/10.1037/1082-989X.7.1.105
Park, D., Yu, A., Metz, S. E., Tsukayama, E., Crum, A. J., & Duckworth, A. L. (2017). Beliefs About Stress Attenuate the Relation Among Adverse Life Events, Perceived Distress, and Self-Control. Child Development. doi: 10.1111/cdev.12946
Appendix
Difference in Baseline Measures of Stress Mindset in Psychiatric Patients and Employees
An independent-samples t-test was conducted to compare the baseline measures of the Stress Mindset Measure of the psychiatric and healthy focus groups (see Table 2). There was a significant difference in the scores for the psychiatric and the healthy focus groups, t(63.19) = -3.91, p < .001, CI [-.80, -.26], d = .854. Thus, on average, focus groups of employees demonstrate a more positive mindset with regard to stress as compared to focus groups of psychiatric patients.
Table 2
Means (M) and Standard Deviations (SD) of Stress Mindset Measure
Group n M SD
Psychiatric Patients 27 1.56 0.53
Employees 55 2.09 0.66
Note: The Stress Mindset Measure contains a scale that ranges from 0 (negative attitude) to 4 (positive attitude)
Effects of Short Term Sessions in Healthy Employees
Among healthy employees, a paired-samples t-test was conducted to compare the Stress Mindset Measure at the start and at the end of a 40-minute Stressjam session (see Table 3). There was a significant difference in the scores at the start and at the end of the session; t(54) = -2.57, p = .013, CI [-.27, -.033], d = .349*. Thus, the focus group of healthy employees showed a slightly more positive mindset towards stress after playing Stressjam for 40 minutes.
Table 3
Means (M) and Standard Deviations (SD) of Stress Mindset Measure
Group n M SD
Start of StressJam Hour 55 2.09 0.66
End of StressJam Hour 55 2.24 0.71
Note: The Stress Mindset Measure contains a scale that ranges from 0 (negative attitude) to 4 (positive attitude)
*Effect size based on Morris & DeShon's (2002, p. 111) procedure to estimate the effect size for pretest-posttest designs by taking the correlation between the pre- and posttest into account.
Effects of Long Term Sessions in Psychiatric Patients
Among psychiatric patients, a paired-samples t-test was conducted to compare the Stress Mindset Measure at the start and at the end of their Stressjam course (the length of this course varied among patients; on average patients played for 3 hours and 43 minutes). There was a significant difference in the scores at the start and at the end of the Stressjam course; t(26) = -4.65, p < .001, CI [-.75, -.29], d = 1.073* (see Table 4). Thus, on average, the focus group of psychiatric patients evidently showed a more positive mindset towards stress after their Stressjam courses. It was even observed that the mean stress mindset end score among psychiatric patients was the same as the mean stress mindset start score of the group of healthy individuals.
Table 4
Means (M) and Standard Deviations (SD) of Stress Mindset Measure
Group n M SD
Start of StressJam Hour 27 1.56 0.53
End of StressJam Hour 27 2.09 0.69
Note: The Stress Mindset Measure contains a scale that ranges from 0 (negative attitude towards stress) to 4 (positive attitude towards stress)
*Effect size based on Morris & DeShon's (2002, p. 111) procedure to estimate the effect size for pretest-posttest designs by taking the correlation between the pre- and posttest into account.
A preliminary analysis of the Stress Mindset Measures over time suggests that scores of psychiatric patients become more positive after repeated sessions. In contrast to healthy individuals, psychiatric patients might need more time to achieve a 'healthy' mindset towards stress.

Claims

1 A computer-based hardware and software system or device comprising:
a detection unit capable of detecting at least one emotional cue variable of a subject;
an integration unit capable of collecting input data from said detection unit and processing said input relating to said cue variable into output data;
an effecting unit capable of collecting and processing output data from said integration unit and providing output data to an actuator unit; and
an actuator unit capable of processing said output into an actuator response capable of being detected by said subject.
2 A system or device according to claim 1 wherein said actuator unit is capable of affecting a detection of a change in motion by said subject.
3 A system or device according to claim 2, wherein said actuator unit is capable of affecting a detection of a change in motion of said subject in near-real-time or real-time.
4 A system or device according to any one of claims 1 to 3, wherein said detection unit can detect emotional valence and/or emotional arousal of said cue.
5 A system or device according to any one of claims 1 to 4, wherein said response is selected from a group of at least four different outputs that respectively relate to high valence and high arousal, low valence and low arousal, high valence and low arousal, and low valence and high arousal.
6 A system or device according to any one of claims 1 to 5, wherein said detection unit comprises an electrode.
7 A system or device according to any one of claims 1 to 6, wherein said detection unit comprises an optical sensor.
8 A system or device according to any one of claims 1 to 6, wherein said detection unit comprises a camera.
9 A system or device according to any one of claims 1 to 8, wherein said detection unit comprises a microphone.
10 A system or device according to any one of claims 1 to 9, wherein said integration unit is provided with software capable of providing a measure of the emotional experience of said subject.
11 A system or device according to claim 10, wherein said emotional experience is classifiable as any one of joy, anger, surprise, fear, sadness, disgust or contempt.
12 A system or device according to any one of claims 1 to 11, wherein said integration unit is provided with software capable of providing a measure of the facial expression of said subject.
13 A system or device according to any one of claims 1 to 12, wherein said subject is also provided with a system or device for augmented or virtual reality detection.
14 A system or device according to any one of claims 1 to 13, wherein said actuator unit provides an actuator response including a physical notification of said subject.
15 A system or device according to any one of claims 1 to 14, wherein said effecting unit provides an actuator response that moves the furniture at which or wherein said subject is seated.
16 A system or device according to any one of claims 1 to 15, wherein said detection unit is capable of detecting at least one emotional cue variable of at least two subjects.
17 A system or device according to any one of claims 1 to 16, wherein said detection unit is capable of detecting at least one emotional cue variable of at least three subjects.
18 A machine-readable medium storing software capable of performing the detecting, collecting and processing functions claimed in any one of claims 1 to 17.
19 A computer (or server) provided with the software of claim 18.
20 A system or device according to any one of claims 1 to 17, a medium of claim 18 or a computer of claim 19, for use in psychotherapy.
21 A method for providing a subject with psychotherapy comprising:
detecting at least one emotional cue variable of said subject with a detection unit;
collecting input data from said detection unit in an integration unit and processing said input relating to said cue variable into output data;
collecting output data from said integration unit and processing said output in an effecting unit providing output data to an actuator unit; and
affecting a detection of a change in motion by said subject by said actuator unit, preferably wherein said change in motion is provided in near-real-time or real-time.
22 A method according to claim 21, wherein said detection unit is capable of detecting emotional valence and/or emotional arousal of said cue.
23 A computer-based hardware and software system or device for providing fields of view allowing augmented or virtual reality detection of motion by a subject, comprising:
a detection unit capable of detecting at least one stress variable of a subject;
an integration unit capable of collecting input data from said detection unit and processing said input relating to said stress variable into output data; and
an effecting unit capable of collecting and processing output data from said integration unit and providing output data effecting a change in motion in said field of view.
24 A system or device for augmented or virtual reality detection according to claim 23, wherein said detection unit can detect heart rate variation and/or respiratory rate variation.
25 A system or device according to claim 23 or 24, wherein said detection unit is comprised in a chest band capable of being worn by said subject.
26 A system or device according to any one of claims 23 to 25, wherein said effecting unit provides a change in the field of view of a subject of which at least one stress variable is detected.
27 A system or device according to any one of claims 23 to 26, wherein said detection unit is capable of detecting at least one stress variable of at least two subjects.
28 A system or device according to any one of claims 23 to 27, wherein said detection unit is capable of detecting at least one stress variable of at least three subjects.
29 A machine-readable medium storing software capable of performing the detecting, collecting and processing functions claimed in any one of claims 23 to 28.
30 A computer (or server) provided with the software of claim 29.
31 A system or device according to any one of claims 23 to 28, a medium of claim 29 or a computer of claim 30, for use in psychotherapy.
32 A method for providing a subject with psychotherapy comprising:
detecting at least one emotional cue variable of said subject with a detection unit;
collecting input data from said detection unit in an integration unit and processing said input relating to said cue variable into output data;
collecting output data from said integration unit and processing said output in an effecting unit;
providing output data to an actuator unit providing fields of view allowing augmented or virtual reality detection of motion by a subject; and
affecting a detection of a change in motion by said subject by said actuator unit, preferably wherein said change in motion is provided in near-real-time or real-time.
33 A method according to claim 32 wherein said detection unit is capable of detecting emotional valence and/or emotional arousal of said cue.
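The four-quadrant response selection recited in claims 4, 5, 22 and 33 — choosing among at least four outputs from detected emotional valence and arousal — can be sketched as follows. This is an illustrative sketch only and not part of the claims; the 0–1 normalisation, the 0.5 threshold and the response labels are assumptions:

```python
def select_response(valence: float, arousal: float,
                    threshold: float = 0.5) -> str:
    """Map detected emotional valence and arousal (both assumed to be
    normalised to the range 0..1) to one of four actuator responses,
    one per valence/arousal quadrant. Threshold and labels are
    illustrative assumptions, not values from the patent."""
    high_valence = valence >= threshold
    high_arousal = arousal >= threshold
    if high_valence and high_arousal:
        return "energising"   # high valence, high arousal
    if high_valence:
        return "soothing"     # high valence, low arousal
    if high_arousal:
        return "calming"      # low valence, high arousal
    return "activating"       # low valence, low arousal

print(select_response(0.8, 0.8))  # → energising
```

Each of the four return values corresponds to one of the four output classes of claim 5; an actuator unit would translate the selected label into a physical response detectable by the subject.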
PCT/EP2018/063593 2017-05-26 2018-05-24 System or device allowing emotion recognition with actuator response induction useful in training and psychotherapy WO2018215575A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP17173003.9 2017-05-26
EP17173003 2017-05-26

Publications (1)

Publication Number Publication Date
WO2018215575A1 true WO2018215575A1 (en) 2018-11-29

Family

ID=58800693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/063593 WO2018215575A1 (en) 2017-05-26 2018-05-24 System or device allowing emotion recognition with actuator response induction useful in training and psychotherapy

Country Status (1)

Country Link
WO (1) WO2018215575A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130337421A1 (en) * 2012-06-19 2013-12-19 International Business Machines Corporation Recognition and Feedback of Facial and Vocal Emotions
US20160104486A1 (en) * 2011-04-22 2016-04-14 Angel A. Penilla Methods and Systems for Communicating Content to Connected Vehicle Users Based Detected Tone/Mood in Voice Input
EP3062198A1 (en) * 2015-02-27 2016-08-31 Immersion Corporation Generating actions based on a user's mood
WO2017021321A1 (en) * 2015-07-31 2017-02-09 Universitat De Barcelona Physiological response

Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
AKINOLA, M.; FRIDMAN, I.; MOR, S.; MORRIS, M. W.; CRUM, A. J.: "Adaptive Appraisals of Anxiety Moderate the Association between Cortisol Reactivity and Performance in Salary Negotiations", PLOS ONE, vol. 11, no. 12, 2016
BARON-COHEN S.; GOLAN O.; WHEELWRIGHT S.; HILL J. J.: "Mind Reading: the interactive guide to emotions", 2004, JESSICA KINGSLEY LIMITED
BAUMEISTER, R. F.; VOHS, K. D.; AAKER, J. L.; GARBINSKY, E. N.: "Some key differences between a happy life and a meaningful life", THE JOURNAL OF POSITIVE PSYCHOLOGY, vol. 8, no. 6, 2013, pages 505 - 516
CRUM, A. J.; SALOVEY, P.; ACHOR, S.: "Rethinking stress: The role of mindsets in determining the stress response", JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, vol. 104, no. 4, 2013, pages 716 - 733
CRUM, A.; AKINOLA, M.; MARTIN, A.; FATH, S.: "The Role of Stress Mindset in Shaping Cognitive, Emotional, & Physiological Responses to Challenging & Threatening Stress", ANXIETY, STRESS, & COPING, vol. 30, no. 4, 2017, pages 379 - 395
CRUM, A.; LYDDY, C.: "The Wiley Blackwell handbook of mindfulness", 2014, JOHN WILEY & SONS, article "Destressing stress: The power of mindsets and the art of stressing mindfully", pages: 948 - 963
EKMAN, PERSPECTIVES ON PSYCHOLOGICAL SCIENCE, vol. 11, no. 1, 2016, pages 31 - 34
JOURNAL OF CLINICAL PSYCHOLOGY, vol. 55, no. 1, 1999, pages 39 - 57
KASHANI F; KASHANI P; MOGHIMIAN M; SHAKOUR M: "Effect of Stress Inoculation Training on the Levels of Stress, Anxiety, and Depression in Cancer Patients", IRANIAN JOURNAL OF NURSING AND MIDWIFERY RESEARCH, vol. 20, no. 3, 2015, pages 359 - 64
MERZENICH, M.M.; VAN VLEET, T.M.; NAHUM, M: "Brain Plasticity-based Therapeutics", FRONTIERS OF HUMAN NEUROSCIENCE, vol. 8, 2014, pages 358
MORRIS, S. B.; DESHON, R. P.: "Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs", PSYCHOLOGICAL METHODS, vol. 7, no. 1, 2002, pages 105 - 125, Retrieved from the Internet <URL:https://doi.org/10.1037//1082-989X.7.1.105>
NEURAL NETWORKS, vol. 18, 2005, pages 389 - 405
PARK, D.; YU, A.; METZ, S. E.; TSUKAYAMA, E.; CRUM, A. J.; DUCKWORTH, A. L.: "Beliefs About Stress Attenuate the Relation Among Adverse Life Events, Perceived Distress, and Self-Control", CHILD DEVELOPMENT, 2017

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102020598B1 (en) * 2018-12-11 2019-09-10 전자부품연구원 Biofeedback system based on bio-signal sensor for diagnosis and healing of mental illness
JP2021058231A (en) * 2019-10-02 2021-04-15 株式会社エクサウィザーズ Cognitive function estimation method, computer program, and cognitive function estimation device
JP7014761B2 (en) 2019-10-02 2022-02-01 株式会社エクサウィザーズ Cognitive function estimation method, computer program and cognitive function estimation device
CN113712572A (en) * 2020-11-25 2021-11-30 北京未名脑脑科技有限公司 System and method for assessing cognitive function
EP4260804A1 (en) * 2022-04-11 2023-10-18 Università di Pisa System for creating and modulating a virtual reality environment for an individual
WO2023198417A1 (en) * 2022-04-11 2023-10-19 Università Di Pisa System for creating and modulating a virtual reality environment for an individual
EP4268718A1 (en) * 2022-04-29 2023-11-01 BIC Violex Single Member S.A. Virtual reality system

Similar Documents

Publication Publication Date Title
US11815951B2 (en) System and method for enhanced training using a virtual reality environment and bio-signal data
US10396905B2 (en) Method and system for direct communication
JP6470338B2 (en) Enhanced cognition in the presence of attention diversion and / or interference
WO2018215575A1 (en) System or device allowing emotion recognition with actuator response induction useful in training and psychotherapy
Leeb et al. Thinking penguin: multimodal brain–computer interface control of a vr game
AU2015218578B2 (en) Systems, environment and methods for evaluation and management of autism spectrum disorder using a wearable data collection device
Kritikos et al. Personalized virtual reality human-computer interaction for psychiatric and neurological illnesses: a dynamically adaptive virtual reality environment that changes according to real-time feedback from electrophysiological signal responses
Parsons et al. Neurocognitive and psychophysiological interfaces for adaptive virtual environments
Sumioka et al. Technical challenges for smooth interaction with seniors with dementia: Lessons from Humanitude™
US20210125702A1 (en) Stress management in clinical settings
Ghisio et al. Designing a platform for child rehabilitation exergames based on interactive sonification of motor behavior
Teruel et al. Exploiting awareness for the development of collaborative rehabilitation systems
Migovich et al. Stress Detection of Autistic Adults during Simulated Job Interviews using a Novel Physiological Dataset and Machine Learning
Miri Using Technology to Regulate Affect: A Multidisciplinary Perspective
Wang The control of mimicry by social signals
Aardema et al. Effects of virtual reality on presence and dissociative experience
Ahire Respiratory Biofeedback based Virtual Environment to Increase Subjective Vitality and Reduce Stress in International Students: Usability, Feasibility and Effectiveness pilot study.
Gandomi Analysis and Prediction of Emotions using Human-Robot and Driver-Vehicle Interactions
Pardini A USER-CENTRED APPROACH TO RELAXATION AND ANXIETY MANAGEMENT IN CLINICAL AND NON-CLINICAL SETTINGS: THE USE OF CUSTOMIZED VIRTUAL REALITY SCENARIOS EXPERIENCED INDEPENDENTLY OR IN COMBINATION WITH WEB-BASED RELAXATION TRAINING
Lahiri Virtual-reality based gaze-sensitive adaptive response technology for children with autism spectrum disorder
Lee Externalizing and interpreting autonomic arousal in people diagnosed with Autism
Tabbaa Emotional Spaces in Virtual Reality: Applications for Healthcare & Wellbeing
Chen The Evaluation and Application of Bio-Emotion Estimation Methods
Schipor et al. Making E-Mobility Suitable for Elderly
Kwon Anxiety activating virtual environments for investigating social phobias

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18728556

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18728556

Country of ref document: EP

Kind code of ref document: A1