WO2021190762A1 - Joint virtual reality and neurostimulation methods for visuomotor rehabilitation - Google Patents

Joint virtual reality and neurostimulation methods for visuomotor rehabilitation

Info

Publication number
WO2021190762A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
motion
session
time series
visuomotor
Prior art date
Application number
PCT/EP2020/058725
Other languages
French (fr)
Inventor
Silvio IONTA
Meysam MINOUFEKR
Sofia OSIMO
Original Assignee
Fondation Asile Des Aveugles
Dropslab Technologies Gmbh
Priority date
Filing date
Publication date
Application filed by Fondation Asile Des Aveugles, Dropslab Technologies Gmbh filed Critical Fondation Asile Des Aveugles
Priority to PCT/EP2020/058725 priority Critical patent/WO2021190762A1/en
Publication of WO2021190762A1 publication Critical patent/WO2021190762A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1104Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb induced by stimuli or drugs
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1121Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1122Determining geometric values, e.g. centre of rotation or angular range of movement of movement trajectories
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1124Determining motor skills
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/163Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4836Diagnosis combined with treatment in closed-loop systems or methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802Sensor mounted on worn items
    • A61B5/6803Head-worn items, e.g. helmets, masks, headphones or goggles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6897Computer input devices, e.g. mice or keyboards
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/742Details of notification to user or communication with user or patient ; user input means using visual displays
    • A61B5/745Details of notification to user or communication with user or patient ; user input means using visual displays using a holographic display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2505/00Evaluating, monitoring or diagnosing in the context of a particular type of medical care
    • A61B2505/09Rehabilitation or training
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/20Applying electric currents by contact electrodes continuous direct currents
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/36014External stimulators, e.g. with patch electrodes
    • A61N1/36025External stimulators, e.g. with patch electrodes for treating a mental or cerebral condition

Definitions

  • the visuomotor training system may comprise a separate neurostimulation device which can be operated independently from the visuomotor training engine and VR game stimulation setup.
• the portable Neuroelectrics™ HD-tDCS (High Definition transcranial Direct Current Stimulation) device may be used to this end, but other arrangements are also possible.
  • the neurostimulation session may be conducted with the user prior, concurrently or after the VR stimulation session.
  • the neurostimulation device may thus be operated independently from the VR system.
  • the visuomotor training engine 100 may further comprise a neurostimulation control engine 165 to process and transmit neurostimulation signals to the neurostimulation electrodes 161 in accordance with the user VR training protocol.
  • the system of FIG.1 may also be extended with additional wearable sensors, for instance to track the position and/or direction and/or motion in the 3D space from additional user body parts.
  • additional wearable sensors may track one or more of the back motion, the chest motion, the second (non-dominant) hand motion, the feet motion, and/or the motion of one or more joints such as the ankles, the knees, the hips, the shoulders, the elbows, the wrists, the toes or the fingers.
  • the system of FIG.1 may also be extended with additional non wearable body position or gesture tracking sensors, such as for instance a balance board or gesture tracking cameras.
• the sensors are adapted, with a sensor signals processing engine, to convert the raw sensor information into up to 6 degrees of freedom position and orientation coordinates relative to a common reference, so that the visuomotor training engine 100 may derive motion information in a data format most suitable to be handled by the proposed motion processing methods, as will be further described in more detail throughout this disclosure.
• the system of FIG. 1 may also be extended to facilitate a mixed reality training experience, with additional sensors to track and render at least some object elements of the physical environment with which the user physically interacts, in combination with the user tracking and the VR scene elements with which the user virtually interacts.
• FIG. 2 represents an exemplary workflow as may be executed by the visuomotor training engine 100 of the visuomotor training system of FIG. 1.
  • the workflow may comprise the steps of:
  • rendering 240 the VR game as a combination of the static VR scene elements and the moving VR game objects positions and trajectories in the VR scene, in accordance with at least the tracked user head, eye and hand positions received from the sensors of the first and second wearable devices 120, 130;
• the visuomotor training engine system 100 may acquire 210 user information from the user records in the medical database 110, for instance the user age, visual correction, hand dominance, and any medical data information that may facilitate the adaptation 220, 230 to the specific user condition of the static VR scene elements, with the VR scene configurator 175, and of the moving VR game objects in the VR scenery, with the VR game configurator 185.
  • the elements of the VR scene may be adapted in size or in color to some specific eye impairment or color impairment pathologies, and/or the position and motion speed of the main objects of the VR game may be adapted to the age and physical condition of the user, for instance depending on whether the user can stand up and raise his/her hand over his head.
  • the visuomotor training engine system 100 may record 210 additional user information in the medical database 110, such as for instance information on the motion performance and/or visual performance as measured by the system.
  • the visuomotor training engine system 100 may calculate in real time, with the VR motion control engine 195, the hand motion information according to the hand position tracking data received from the handle control engine 135, the head motion information according to the head position tracking data received from the HMD control engine 145, and/or the eye tracking motion information data received from the eye tracker control engine 155.
  • the visuomotor training engine system 100 may accordingly derive one or more visuomotor system performance measurements to characterize the user performance in the training session, and may record the resulting measurements information in the user records in the medical database 110.
  • the proposed visuomotor training system 100 jointly processes both visual performance and motion performance information by measuring 210, 260 the user visuomotor performance as a combination of eye movement tracking, head movement tracking and hand movement tracking in real time.
• the proposed visuomotor training system 100 accordingly enables assessing the user visuomotor performance in real time during a training session (possibly in a closed-loop experiment), as well as the user performance progression from one training session to the next.
  • the proposed visuomotor training system 100 may initially adapt 220 the static configuration of the VR scene elements, which can be adjusted primarily as a function of the visual capability of the user (for instance, the game complexity level).
  • the proposed visuomotor training system 100 may adapt 230 the dynamic calculation of the VR game objects positions and trajectories, which can be adjusted in real time to the user eye, head and hand positions and trajectories as tracked with the VR system sensors (closed-loop experiment).
  • the proposed visuomotor training system 100 may both adapt 220, 230 the static configuration of the VR scene elements and the dynamic calculation of the VR game objects positions and trajectories so as to fully optimize the real time moving VR scene rendering to both the initial user capability and the on-going user performance.
  • the workflow may also comprise a step of applying 250 neurostimulation signals to the user visuomotor system.
  • the neurostimulation signals may be parametrized by the visuomotor training engine system 100 in accordance with the user information (age, pathology, etc.).
  • the neurostimulation signals may be adapted by the visuomotor training engine system 100 in accordance with the measured user performance in the VR stimulation to further reinforce the overall visuomotor training in a closed loop approach.
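By way of illustration only, the closed-loop logic of steps 210 to 260 described above might be organized as in the following Python sketch. All function names, data fields and threshold values are hypothetical placeholders and not part of the disclosed system; only the control flow mirrors the workflow of FIG. 2.

```python
# Hypothetical sketch of the FIG. 2 closed-loop workflow (steps 210-260).
# All functions are illustrative stubs; only the control flow mirrors the text.
import random

def acquire_user_info():                      # step 210: read the medical database 110
    return {"age": 67, "hand_dominance": "right", "visual_correction": None}

def configure_static_scene(user):            # step 220: VR scene configurator 175
    return {"environment": "living_room", "object_size": 1.5 if user["age"] > 60 else 1.0}

def configure_game_objects(user):            # step 230: VR game configurator 185
    return {"stimulus_speed": 0.2, "stimulus_position": [0.5, 1.2, 2.0]}

def measure_motion_drift():                   # step 260: VR motion control engine 195
    return random.uniform(0.0, 2.0)          # placeholder for the real drift metric

def run_session(drift_threshold=1.0, n_trials=5):
    user = acquire_user_info()
    scene = configure_static_scene(user)
    game = configure_game_objects(user)
    for trial in range(n_trials):
        # render 240 the scene and game objects, track head/hand/eye (omitted here)
        drift = measure_motion_drift()
        if drift > drift_threshold:
            # adapt the stimulus rendering and/or schedule neurostimulation (step 250)
            game["stimulus_speed"] *= 0.8
        print(f"trial {trial}: drift={drift:.2f}, speed={game['stimulus_speed']:.2f}")

if __name__ == "__main__":
    run_session()
```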
• Initial visual field assessment: measuring 210 the initial user performance first comprises assessing the user visual field.
  • the visual field may be assessed by displaying dots or objects at different positions in the screen watched by the user, one eye at a time while the other eye is blinded. The user gives feedback on whether he/she sees the target.
  • the HMD display 123 may be used as the screen
  • the handle trigger 131 may be used to capture the user feedback
  • the VR rendering engine may display an initial basic VR scene with simple contrast dots (for instance, white dot objects displayed with a size, appearance and disappearance rates suitable for the user visual capability) over a uniform background (for instance, a black static scene).
  • the method of Tsapakis et al. in “ Visual field examination method using virtual reality glasses compared with the Humphrey perimeter”, Clinical Ophthalmology (Auckland, N.Z.), 11, 1431-1443 (2017) may be implemented by the proposed visuomotor training system 100.
  • the method from Virtual Field, Inc. as described in US2019/0298166 may be used.
• other methods are also possible.
• FIG. 3 shows an exemplary visual field assessment result where the visual field, represented over the full quadrant with a thick grey line around the visible dots from the measurement, is clearly reduced in the left and top areas, while the blind spot area, represented with the black thick line, is shifted to the right and slightly to the bottom (the theoretical visual field from a healthy user should be centered on the quadrant).
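For illustration, a minimal sketch of how such a dot-based visual field test could be scored into a map like the one of FIG. 3 is given below. The grid resolution, the simulated responses and the function names are assumptions, not the disclosed implementation.

```python
# Simplified visual field scoring: dots are shown at known angular positions
# and the user's seen/not-seen responses (handle trigger 131) fill a map.
import numpy as np

def build_visual_field_map(dot_positions_deg, responses):
    """dot_positions_deg: list of (azimuth, elevation) in degrees;
    responses: list of booleans (True = dot was seen)."""
    field = {}
    for (az, el), seen in zip(dot_positions_deg, responses):
        field[(az, el)] = seen
    return field

# Example: a coarse 10-degree grid over +/-30 degrees for one eye.
grid = [(az, el) for az in range(-30, 31, 10) for el in range(-30, 31, 10)]
simulated_responses = [az > -10 for az, el in grid]   # e.g. a left hemifield loss
fmap = build_visual_field_map(grid, simulated_responses)
seen_fraction = sum(fmap.values()) / len(fmap)
print(f"visible fraction of tested field: {seen_fraction:.0%}")
```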
  • the proposed visuomotor training system 100 may configure, with the VR scene configurator module 175, a static VR world scene for the user to virtually explore.
  • different VR world scenes corresponding to different environments may be used.
  • a familiar environment such as a living room may be configured as the static scene.
  • the scene may be parametrized with various VR object shapes, textures and/or colors adapted to the user information (for instance according to the user age).
  • more challenging environments may be configured, such as for instance a city street scene.
  • the proposed visuomotor training system 100 may also configure, with the VR game configurator module 185, a VR game scenario to produce and animate VR object stimuli for the user to interact with as part of a VR game.
  • the VR handle 130 may be associated with a tool avatar that can be manipulated by the user avatar hand in the VR scene.
  • different objects may be added to the VR scene.
  • Static objects such as an object to be reached over a table or a button to be pressed on the wall (with the handle trigger 131) may be added to the static room scene in front of the user and possibly also on the sides, to encourage hand, eye and head motion in search for the object, especially around the boundaries of the neglected visual field in accordance with the formerly measured user visual field assessment information.
  • the VR objects may be parametrized with various shapes, textures and/or colors adapted to the user information (for instance according to the user age).
  • more challenging VR objects may also be animated as moving stimuli in the VR scene during the training session, such as for instance a VR object target appearing, flying around and disappearing with different motion trajectories.
  • the VR game configurator module 185 may produce and animate a mosquito avatar and the VR rendering engine 125 may render it into the 3D VR scene in accordance with the user tracking by the proposed VR system.
  • the hand of the user may be associated with a stick tool to reach the mosquitos.
• When the user hits the mosquito with the tool, the mosquito disappears and the user is rewarded (visual score, winning sound, or any reward feedback that is most suitable to the user).
  • the proposed visuomotor training system 100 measures, with VR motion control engine 195, a number of parameters corresponding to the user stimulation and performance during the training session.
  • the VR motion control engine 195 may for instance record one or more of the following information:
  • the VR motion control engine may calculate a measurement of the user motion performance in association with the VR object stimulus chasing practice.
  • the VR motion control engine 195 may thus record one or more of the following information to characterize the user visual exploration parameters such as:
• Head movements: rotations and translations; number of each type of movement (number of rotations towards the right, of rotations towards the left, etc.) and time spent doing each type of movement.
  • the performance of the user in reaching the target object in the VR game may be measured with the following reaching parameters which characterize the “last” eye movement of the trial, i.e. the one leading the user to catching the target:
  • the above parameters may be calculated in real time by the VR motion control engine 195 from the HMD control engine 145 head tracking data measurements.
  • additional parameters may be calculated in real time by the VR motion control engine 195 from the eye tracker control engine 155 data measurements, such as:
• the 3D direction of the eye gaze, which may be mapped to the closest direction among the right, top-right, top, top-left, left, left-bottom, bottom and right-bottom quadrants, and to the closest position among central, middle and external relative to the user visual field theoretical full circle, as represented in FIG. 4.
  • the saccade vector depicted in light gray in FIG.4 may be classified as a middle top-right movement as the angle between it and the top-right movement axis is smaller than between it and the right movement axis, and it occurs in the middle area between the central and the external areas of the user visual field.
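The direction-and-eccentricity mapping described above can be illustrated with the following short sketch; the exact sector boundaries and ring radii are assumptions, since they are not fixed numerically in this description.

```python
# Classify a saccade vector into one of 8 directions and 3 eccentricity rings
# (central / middle / external), in the spirit of the FIG. 4 description.
# Sector and ring boundary values are illustrative assumptions.
import math

DIRECTIONS = ["right", "top-right", "top", "top-left",
              "left", "left-bottom", "bottom", "right-bottom"]

def classify_saccade(dx, dy, end_radius, field_radius=1.0):
    angle = math.degrees(math.atan2(dy, dx)) % 360
    direction = DIRECTIONS[int((angle + 22.5) // 45) % 8]   # nearest 45-degree axis
    r = end_radius / field_radius
    ring = "central" if r < 1/3 else "middle" if r < 2/3 else "external"
    return direction, ring

# Example: a saccade ending halfway out, up and to the right (cf. FIG. 4).
print(classify_saccade(dx=0.3, dy=0.3, end_radius=0.5))   # ('top-right', 'middle')
```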
  • the VR motion control engine 195 may thus record one or more of the following information to characterize the user manual exploration parameters such as:
  • the performance of the user in reaching the target object in the VR game may be measured with the following reaching parameters which characterize the “last” hand movement of the trial, i.e. the one leading the user to catching the target:
  • the movement of the hand tracking sensor may be measured as the trajectory of the tracked 6DOF position relative to a second tracking sensor 6DOF position, for instance a sensor placed on the user’s chest, so as to better characterize the arm+hand motion relative to the main body motion.
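For completeness, a small sketch of expressing the tracked hand pose relative to such a chest-mounted reference sensor is given below; the quaternion data layout and function names are assumptions, not the disclosed implementation.

```python
# Express the hand 6DOF pose relative to a chest-mounted tracker so that the
# arm+hand motion is decoupled from whole-body motion. Data layout is assumed:
# position as (x, y, z), orientation as a quaternion (x, y, z, w).
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_pose(p_hand, q_hand, p_chest, q_chest):
    r_chest = R.from_quat(q_chest)
    p_rel = r_chest.inv().apply(np.asarray(p_hand) - np.asarray(p_chest))
    q_rel = (r_chest.inv() * R.from_quat(q_hand)).as_quat()
    return p_rel, q_rel

# Example: hand 20 cm to the right of and 40 cm in front of the chest sensor.
p, q = relative_pose([0.2, 1.2, 0.4], [0, 0, 0, 1], [0.0, 1.2, 0.0], [0, 0, 0, 1])
print(p)   # [0.2, 0.0, 0.4] in the chest frame
```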
• the visuomotor training system 100 may calculate a scalar metric that is representative of the deviation of the user performance compared to a reference performance.
  • the reference performance may be that of a control population, for instance it may be pre-defined from measurements of healthy individuals taking the same training exercise.
  • the proposed VR tracking system 100 measures multiple position variables, namely from the hand, the head and the eye gaze, as a function of time sampled along the training session period.
• the proposed VR motion control engine may record time series vectors of 3 position variables P_hand(x_n, y_n, z_n) and 3 orientation angle variables O_hand(u_n, v_n, w_n) for the hand, as well as 3 position variables P_head(x_d, y_d, z_d) and 3 orientation angle variables O_head(u_d, v_d, w_d) for the head, at different sampling times of the VR tracking system while the user is taking a trajectory to track and reach the VR stimulus object target in the VR game scenario with his head and dominant hand, together with a time series of 2D gaze coordinates Q_gaze(x_g, y_g) relative to the theoretical visual field quadrant (with (0, 0) corresponding to the center of the theoretical visual field).
  • other embodiments are also possible, for instance using a time series of 3D positions and quaternions for tracking the orientations, and possibly also for the eye gaze tracking.
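To make the recorded data layout concrete, one possible (assumed) in-memory representation of the per-sample head, hand and gaze measurements is sketched below; the field names follow the P_hand, O_hand, P_head, O_head and Q_gaze notation above, everything else is illustrative.

```python
# One possible container for the per-sample motion data described above.
# The field names follow the notation in the text; everything else is assumed.
from dataclasses import dataclass
import numpy as np

@dataclass
class MotionSample:
    t: float                 # sampling time (s)
    p_hand: np.ndarray       # (x, y, z) hand position
    o_hand: np.ndarray       # (u, v, w) hand orientation angles
    p_head: np.ndarray       # (x, y, z) head position
    o_head: np.ndarray       # (u, v, w) head orientation angles
    q_gaze: np.ndarray       # (x, y) gaze coordinates in the visual field, (0, 0) = center

def to_matrix(samples):
    """Stack a session's samples into an (N, 15) array: t + 14 motion variables."""
    return np.array([np.concatenate(([s.t], s.p_hand, s.o_hand,
                                     s.p_head, s.o_head, s.q_gaze))
                     for s in samples])

# Example: two samples of a short trajectory.
samples = [MotionSample(0.00, np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(2)),
           MotionSample(0.01, np.ones(3) * 0.01, np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(2))]
print(to_matrix(samples).shape)   # (2, 15)
```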
  • FIG. 5 and FIG. 6 illustrate two examples of different positions and orientations for the user head, hand and pair of eyes gaze respectively.
  • FIG.5 corresponds to an early sampling time in the training session where a VR object stimulus has been introduced in the 3D VR scene but the user has not seen it yet.
  • FIG. 6 corresponds to the successful end of the VR object stimulus chasing game session when the user has seen and reached the VR object stimulus by properly coordinating his head, eyes and hand.
  • the trajectories of the combined head+eye direction as well as the dominant hand can thus be tracked in real time along all intermediate poses as the user moves from the initial position in FIG.5 to the end position in FIG.6, with the proposed VR system to measure the user motor performance in this visuomotor task.
  • the VR object stimulus may remain at a fixed position in the 3D VR scene to facilitate its capture, or in more advanced levels it may move to different positions so that the user has to adjust his/her trajectory in real-time.
  • the texture, the color, the contrast, the size, the shape, the appearance time, the disappearance time, the direction and/or the speed of the VR object stimulus in the 3D VR scene may be adapted to the user by the VR game configurator 185.
• the VR motion control engine 195 may derive the direction axis of the user avatar gaze in the 3D VR scene. The VR motion control engine 195 may then measure a gaze distance d_g(t) from this calculated axis to the actual position of the VR object stimulus in the 3D VR space at time t.
  • a decreasing gaze distance indicates that the user progressively reaches the position of the VR object stimulus to chase with his head+eye, in other words, that the user has seen the VR object stimulus target in his/her visual field and aligned his head and eyes accordingly with his/her visuomotor system.
• the VR motion control engine 195 may further measure a hand reach distance d_h(t) from the measured position of the hand to the actual position of the VR object stimulus in the 3D VR space.
  • a decreasing distance indicates that the user progressively reaches the position of the VR object stimulus to chase with his hand, in other words, that the user is reaching the VR object stimulus target with his hand and body motion.
• the VR motion control engine 195 may derive a pointing orientation axis from the hand position and orientation and measure a third hand alignment distance d_ha(t) from the pointing orientation axis to the actual position of the VR object stimulus in the 3D VR space at time t.
  • a decreasing distance indicates that the user progressively orients his/her hand towards the VR object stimulus, in other words, that the user is trying to reach the VR object stimulus target with his hand motion independently from his/her body motion.
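The three distances d_g(t), d_h(t) and d_ha(t) introduced above reduce to elementary geometry, as illustrated in the sketch below (point-to-ray distance for the gaze and hand-pointing axes, point-to-point distance for the hand reach); the conventions and function names are assumptions rather than the disclosed algorithm.

```python
# Illustrative computation of the gaze distance d_g(t), hand reach distance
# d_h(t) and hand alignment distance d_ha(t) at one sampling time, using
# point-to-ray and point-to-point distances in the 3D VR space.
import numpy as np

def point_to_ray_distance(origin, direction, point):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(point, dtype=float) - np.asarray(origin, dtype=float)
    along = max(np.dot(v, d), 0.0)            # clamp targets behind the origin
    return float(np.linalg.norm(v - along * d))

def visuomotor_distances(head_pos, gaze_dir, hand_pos, hand_dir, target_pos):
    d_g = point_to_ray_distance(head_pos, gaze_dir, target_pos)                  # gaze distance
    d_h = float(np.linalg.norm(np.asarray(target_pos) - np.asarray(hand_pos)))   # hand reach
    d_ha = point_to_ray_distance(hand_pos, hand_dir, target_pos)                 # hand alignment
    return d_g, d_h, d_ha

# Example: target 2 m ahead, gaze slightly off, hand pointing roughly at it.
print(visuomotor_distances([0, 1.7, 0], [0, -0.1, 1], [0.3, 1.2, 0.3], [0, 0, 1], [0, 1.2, 2]))
```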
  • the VR motion control engine 195 may also process the eye, head and hand trajectory information to derive scalar motion performance measurements as indicators of the user motion drift relative to the reference motion data captured from a healthy control subject or population in similar game conditions.
  • the user visuomotor performance may be tracked during any training session as a time series of the position and orientation vectors respectively for the head and the hand, in combination with a time series of the position coordinates for the eye gaze relative to the theoretical visual field quadrant.
  • the VR motion control engine 195 may apply a multivariate statistical analysis method such as for instance the Principal Component Analysis method to summarize the user motion information conveyed by the measured time series data of the user motion recorded over time for the head, hand and eye gaze while the VR stimulus object is rendered onto the VR display screen, so as to characterize the overall user visuomotor movement in a lower dimension data set formed by the calculated principal components.
  • the VR motion control engine 195 may apply a Functional Principal Component Analysis (FPCA) method to the kinematic time series of the head, hand and eye position and orientation data, for instance by extending the hand motion analysis method of Cortes et al., “A short and distinct time window for recovery of arm motor control early after stroke revealed with a global measure of trajectory kinematics", Neurorehabilitation and Neural Repair, Vol.31, issue 6, pp.552-560, 2017, to the head and eye tracking.
  • the user motion drift relative to the healthy controls motion may be measured as a distance between the user data and a reference data in a multidimensional space.
  • the distance may be measured as the Euclidean distance in 3D VR space, or by more dedicated statistical distances in a transformed space, such as for instance the Mahalanobis distance to compare the transformed FPCA reaching trajectories for the eye, head and hand to the transformed FPCA reaching trajectories of a healthy control population, as proposed by Cortes et al.
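One possible way to obtain such a drift metric is sketched below: the recorded trajectories are resampled to a common length, summarized by principal components fitted on healthy reference recordings, and the user's component scores are compared to the reference distribution with a Mahalanobis distance. The number of components, the resampling length and the synthetic example data are assumptions, not the disclosed implementation.

```python
# Illustrative motion-drift metric: PCA fitted on healthy reference trajectories,
# Mahalanobis distance of the user's component scores to the reference scores.
import numpy as np
from numpy.linalg import inv
from sklearn.decomposition import PCA

def resample(traj, n=50):
    """Resample an (N, d) trajectory to n time points per variable and flatten it."""
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.concatenate([np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])])

def motion_drift(user_traj, reference_trajs, n_components=3):
    X_ref = np.array([resample(t) for t in reference_trajs])
    pca = PCA(n_components=n_components).fit(X_ref)
    scores_ref = pca.transform(X_ref)
    score_user = pca.transform(resample(user_traj)[None, :])[0]
    mu = scores_ref.mean(axis=0)
    cov = np.cov(scores_ref, rowvar=False) + 1e-9 * np.eye(n_components)
    diff = score_user - mu
    return float(np.sqrt(diff @ inv(cov) @ diff))   # Mahalanobis distance

# Example with synthetic 1-variable "trajectories" (replace with the 14-variable data).
rng = np.random.default_rng(0)
refs = [np.cumsum(rng.normal(size=(80, 1)), axis=0) for _ in range(20)]
user = np.cumsum(rng.normal(loc=0.3, size=(80, 1)), axis=0)
print(f"motion drift: {motion_drift(user, refs):.2f}")
```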
• the VR motion engine 195 may feed back information from the measured user performance, such as the gaze distance d_g(t), the hand reach distance d_h(t), the hand alignment distance d_ha(t), and/or the user motion principal components, to the VR scene configurator to configure 220 the static VR scene elements in accordance with the measured user eye, hand and head trajectories in real-time.
• the VR motion engine 195 may feed back information from the measured user performance, such as the gaze distance d_g(t), the hand reach distance d_h(t), the hand alignment distance d_ha(t), and/or the user motion principal components, to the VR game configurator to calculate 230 the VR game object stimulus position in the 3D VR scene (for a static stimulus, in basic training levels) and possibly the VR game object stimulus trajectory (for a moving stimulus, in advanced training levels) in accordance with the measured user motion in real-time.
  • the static VR scene may have different VR object stimuli rendering details such as shape, size, contrast, texture and/or color to draw more attention of the user to some areas.
• the VR game object stimuli may preferably be positioned and/or densified around the border areas, and/or moving towards the border areas of the damaged visual field for a longer period of time, to retain more of the user's attention in those areas.
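Purely as an illustration of such a placement policy, the sketch below biases candidate stimulus positions toward the border of the damaged visual field; representing the border as a single azimuth value and the spread parameters are assumptions.

```python
# Illustrative stimulus placement biased toward the border of the damaged
# visual field; the border is assumed to be given as a single azimuth (deg).
import numpy as np

def sample_stimulus_positions(border_azimuth_deg, n=10, spread_deg=8.0,
                              elevation_range_deg=(-15, 15), seed=0):
    rng = np.random.default_rng(seed)
    az = rng.normal(loc=border_azimuth_deg, scale=spread_deg, size=n)
    el = rng.uniform(*elevation_range_deg, size=n)
    return np.stack([az, el], axis=1)   # angular positions (azimuth, elevation)

# Example: a visual field lost on the left with a border near -20 degrees azimuth
# (cf. the assessment map of FIG. 3); stimuli cluster around that border.
print(sample_stimulus_positions(border_azimuth_deg=-20.0, n=5))
```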
• Neurostimulation: in a possible further embodiment, there may be a need to apply neurostimulation in addition to the visual VR training to facilitate a better recovery for the user.
• neurostimulation may be applied to the user to electrically stimulate the visual cortex brain areas in addition to the VR-based sensory stimulation.
  • Neurostimulation may be applied concurrently with the VR training session, or before, or after the session.
  • a HD-tDCS neurostimulation device is used so that a specific subset of electrodes can be activated, specifically on the visual cortex area and directly around it.
• the HD-tDCS stimulation protocol may also be adapted to the current user performance as measured with the above described motion analysis methods; for instance, the decision whether or not to apply the neurostimulation after the session may depend on the user performance during the training, or the activation of individual electrodes may be tuned to the specific user visuomotor performance characteristics (e.g. shifting the stimulation towards certain specific brain areas according to the measured user motion drift).
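The per-session decision logic described above could reduce to a simple threshold rule, as in the sketch below; the acceptable drift value is an assumption.

```python
# Illustrative decision rule for scheduling a further HD-tDCS session based on
# the measured motion drift; the threshold value itself is an assumption.
def should_apply_neurostimulation(motion_drift, acceptable_drift=1.5):
    """Return True if a further HD-tDCS session should be scheduled within
    one hour before or after the next VR training session."""
    return motion_drift > acceptable_drift

for drift in (0.8, 2.3):
    print(drift, "->", "apply HD-tDCS" if should_apply_neurostimulation(drift) else "no stimulation")
```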

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medicinal Chemistry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)
  • Rehabilitation Tools (AREA)

Abstract

A virtual reality (VR) system is used to render a 3D VR game scene comprising at least one VR object stimulus on a VR head mounted display screen. During a first training session, the VR system measures commands from the user dominant hand with a VR handle, and the time series of respectively the user head and the user hand positions and/or orientations vectors, as well as the time series of the user gaze direction vectors relative to the user head position and orientation origin in the 3D virtual environment. An HD-tDCS neurostimulation is applied to the user in a time range between one hour before and one hour after the first session. The VR system compares the measured user motion time series data with a reference motion time series data to determine a user motion drift for the first session, possibly with a principal component analysis (PCA) or a functional principal component analysis (FPCA) to extract and compare the principal components of the user motion time series of positions and/or orientations. If the user motion drift exceeds a threshold, the VR system adapts the rendering of at least one property of the VR object stimulus and/or the HD-tDCS neurostimulation in a next session.

Description

Title
Joint virtual reality and neurostimulation methods for visuomotor rehabilitation
FIELD OF THE INVENTION
Methods described herein relate to neurorehabilitation in general, and more specifically to visuomotor restoration therapies.
BACKGROUND OF THE INVENTION
Visuomotor rehabilitation
Certain brain damages, as may be caused by stroke, traumatic brain injury, brain surgery or pathologies such as brain tumors or inflammation, often induce visuomotor deficits which significantly impact body-eye coordination. In particular, there is a need for new therapies for post-stroke patients with lesions in the visual cortex causing visuomotor disabilities even without motor cortex lesions. It is estimated that 30% of these patients need rehabilitation as they suffer from a number of impairments, such as homonymous hemianopia (HH) or scotoma visual field loss, eye movement problems, blurred vision, double vision (diplopia), nystagmus, impaired depth perception, difficulty locating objects, and visual neglect.
The damaged mammal brain generally has the property to reorganize itself to recover some of its lost functionality. In the context of motor rehabilitation after stroke, various therapies have thus been developed to facilitate this recovery by repeated systematic training of the lost movements in combination with brain stimulation, either direct or indirect. Similarly, in ophthalmology practice, various methods such as Visual Scanning Training (VST) and Vision Restoration Therapies (VRT) have been proposed to facilitate the recovery of lost visual fields by training eye scanning movements towards the neglected side, or by overstimulating the remaining neurons in the visual cortex with multiple light stimuli. For instance, the VRT system from Novavision uses a home computer screen to display light stimuli, in particular at the border between the intact and the lost visual field, usually in 1 or 2 daily sessions which are repeated over a period of six months. EP2012733 by Novavision suggested focusing a greater portion of the display light stimuli nearer the center of the visual field. Clinical results from the corresponding FDA-approved treatment showed improvement for about 5 degrees of visual field around the center, which is good enough for reading, but still not sufficient for most motor tasks in daily life.
Virtual reality systems
In line with the development of virtual reality hardware solutions such as consumer electronics head-mounted displays (HMD) in the past decade, novel vision therapies have been proposed which take advantage of a more immersive environment to better stimulate the vision of eye-impaired patients. For instance, US9706910 to VividVision describes a VR-based therapy to treat amblyopia, by successively displaying VR objects adaptively to the end user visual field assessment and performance measurement with a VR system. US2017156965 to Libra teaches displaying VR stimulation exercises on a mobile computing device display arranged into a headset with an optical adapter, and recording the patient eye movement to measure the patient performance and progressively adapt the exercise difficulty accordingly.
In order to extend the post-stroke therapy to improving the patient motion globally, and not just the patient visual field, US2018/0336973 proposed to combine a virtual reality immersive environment with motion capture technology such as the MindMaze MindMotion™ PRO immersive virtual reality to measure the patient joint and head gaze movement from a diversity of sensors. Compared to the prior art solutions of Novavision and others, such a system advantageously gives a rewarding multi-sensory feedback to the patient through a VR game audiovisual scene. In the MindMaze system, the patient motion data is explicitly analyzed with a mean trajectory (MSE) calculation for the right hand, reaction times, reach time, number and duration of errors and number of 'time-outs', which are calculated and compared in real-time (paired t-test) to determine where to next place the virtual object stimuli in the scene. However, this solution remains complex both in terms of the hardware system and software algorithms. It requires multiple specific sensors, including a Kinect system, Lyra cameras and depth sensors, and ideally a further EEG tracking system such as the Biosemi ActiveTwo amplifier as well as a VR headset with eye tracking to discriminate whether the patient is viewing certain parts of the visual field by moving the eyes or the head. Moreover, it is solely based on visual stimulation and thus focuses on precisely calculating patient-tailored trajectories for the virtual objects, by explicitly processing all the sensors information data with conventional logic, possibly including EEG signals, to indicate with the VR objects a precise movement to be performed by the patient during the therapy.
Transcranial Direct Current Stimulation
In order to further improve the visuomotor rehabilitation, the optical stimulation methods from the prior art may be combined with the neurostimulation methods commonly used in motor rehabilitation therapies. In 2011, NovaVision conducted a pilot study on the combination of VRT with non-invasive transcranial direct current stimulation (tDCS) (Plow et al., "Comparison of visual field training for hemianopia with active versus sham transcranial direct cortical stimulation", Neurorehabilitation and Neural Repair XX(X) 1-11, 2012), which suggested that tDCS may enhance neuroplasticity when applied in association with visual training. While promising, the experiment was only based on two patients.
WO2016092563 teaches a training system potentially suitable for visuomotor balance rehabilitation therapy based on the combination of an eye tracker, biosensors such as a brain EEG capture device and a spinal system EMG capture device, and physical sensors to measure the kinematic and kinetic motion data from the neuromuscular system. The system uses a Wii balance board and an MS Kinect system to track the patient balance while displaying a VR game on a screen display. Two Bayesian sensor fusion modules (one for the cognitive data, one for the spinal system data) are used to serially analyze the multiple data feeds and to produce audio-visual stimuli signals for VR-based sensory stimulation jointly with non-invasive electrical stimulation signals for the motor system. This solution remains complex both in terms of the hardware system and software algorithms, as it requires capturing and processing complex EEG and EMG bio-signals.
The applicants therefore identified the need for designing a lighter system setup and data processing methods to combine visual and motor data signal processing, analysis and non-invasive neurostimulation with different possible modalities to facilitate a more efficient and cost-effective automated training treatment of patients suffering from post-stroke visuomotor deficits and related pathologies.
Summary
It is accordingly an aim of the present disclosure to provide novel systems and methods to render user-adaptive content for a user with a virtual reality (VR) system. In a possible embodiment, the method comprises: rendering in a first training session, with a virtual reality (VR) rendering engine, a 3D VR game scene comprising at least one VR object stimulus on a VR head mounted display screen; receiving commands from the user dominant hand with a VR handle, said commands aiming to reach and catch the VR object stimulus with the VR handle; measuring, with a VR system head mounted sensor, the time series of the user head positions and/or orientations vectors in the 3D VR environment for the first session; measuring, with a VR system handle sensor, the time series of the user hand positions and/or orientations vectors in the 3D VR environment for the first session; measuring, with an eye tracking system, the time series of the user gaze direction vectors relative to the user head position and orientation origin in the 3D virtual environment for the first session; applying a HD-tDCS neurostimulation to the user in a time range between one hour before and one hour after the first session; comparing, with the VR motion control engine, the measured user motion time series data with a reference motion time series data to determine a user motion drift for the first session; adapting at least one property of the VR object stimulus to be rendered on the VR head mounted display screen in a next session if the user motion drift exceeds a threshold; and rendering in a next session, with the virtual reality (VR) rendering engine, the 3D VR game scene comprising the adapted VR object stimulus. Comparing, with the VR motion control engine, the measured user motion time series data with a reference motion time series data may comprise applying a principal component analysis (PCA) or a functional principal component analysis (FPCA) to extract and compare the principal components of the user motion time series of positions and/or orientations. The user motion drift may be measured as a distance between the user motion time series data and the healthy reference time series data, such as the
Euclidean distance or the Mahalanobis distance. The texture, color, contrast, shape and/or size property of the VR object stimulus as well as its time of appearance, time of disappearance, direction and/or motion during its presence in the 3D VR game scene may be adapted according to the motion drift. A further HD-tDCS neurostimulation may also be applied to the user in a time range between one hour before and one hour after the next session if the user motion drift exceeds an acceptable motion drift threshold in the first session.
Brief Description of the Drawings
FIG. 1 is a schematic representation of a visuomotor training system according to some embodiments of the present disclosure.
FIG. 2 is a schematic representation of an exemplary visuomotor training computational workflow according to some embodiments of the present disclosure.
FIG. 3 shows an exemplary visual field assessment map.
FIG. 4 shows an exemplary map suitable for eye saccade tracking characterization.
FIG. 5 and FIG. 6 show an abstract representation of the patient visuomotor system engagement in tracking a virtual object target (a flying mosquito in the VR game scenery), resulting in different respective positions of the head, dominant hand and gaze direction.
Detailed Description
Visuomotor training system
FIG. 1 is a schematic representation of an exemplary visuomotor training system comprising a first wearable device 120 to be worn on the user head and a second wearable device 130 to be adapted to the user hand; a visuomotor training engine 100 in connection with the first and second wearable devices to entertain the user with a Virtual Reality (VR) game; a control display 150 in connection with the visuomotor training engine 100 for visualizing the user performance assessment results; a medical database 110 in connection with the VR computing device 100 to record the training session configuration information and the resulting user performance assessment results; and optionally a neurostimulation device 160 to be adapted to the user skull, to further transmit electrical current stimuli towards one or more of the user visuomotor cerebral areas by means of neurostimulation electrodes 161.
In a preferred embodiment, the first wearable device 120 comprises eye tracking sensors 121 to measure the eye directions in real-time, a head tracking sensor system 122 to measure the head position in the 3D space in real-time, and a head-mounted display 123 to display the VR game as generated in real-time from the visuomotor engine 100. An exemplary device 120 is the HTC Vive™ HMD, which comprises embedded infrared sensors 122 to detect the HTC Vive™ base station emitted infrared pulses in real time, with update rates ranging from 250Hz to 1kHz. Eye tracking sensors may be separately supplied and fitted into the basic HTC Vive HMD, such as for instance the aglass™ eye tracker device which tracks the point of regard (POR) 2D position of each eye relative to the HMD display half screen (one half screen per eye in the HTC Vive HMD) at an ocular precision down to 0.5° at a rate of 100 to 380Hz. Alternately, the HTC Vive Pro Eye HMD with integrated eye tracking 121 may be used. Other exemplary VR HMD devices 123 include Oculus Rift, Oculus Quest, Oculus Go, HTC Focus, HTC Cosmos, GearVR, Google Daydream, or Microsoft HoloLens. Other supplies and arrangements are also possible.
In a preferred embodiment, the second wearable device 130 comprises at least one handle trigger button 131 which the user can manipulate with his/her finger to provide feedback to the visuomotor training engine 100, for instance to hit or catch a target object from the VR game scene, and a hand tracking sensor system 132 to measure the hand position in the 3D space in real-time. An exemplary device 130 is the HTC Vive™ handle, which comprises a dual-stage trigger 131 and embedded infrared sensors 132 to detect the HTC Vive™ base station emitted infrared pulses in real time at a sub-millimetric precision at a rate of 250 to 1000Hz. Preferably the handle 130 is manipulated by the user dominant hand, but other arrangements are also possible. An alternate exemplary device 130 is the VRfree® wearable glove from Sensoryx, which enables tracking of both the hand and the finger motion. Other supplies and arrangements are also possible.
In a preferred embodiment, the visuomotor training engine 100 may be implemented as a software module or a combination of interconnected software modules on a main computing device of the rehabilitation product. In a possible embodiment, the visuomotor training engine may comprise the following software modules:
- a visual field assessment module 105 to measure the user visual field;
- a calibration engine 115 to adapt the VR game calculation to the user wearable sensors and HMD setup;
- a handle control engine 135 to receive and process the handle trigger signals from the handle trigger 131 and the hand tracking data from the hand tracking sensor system 132;
- an HMD control engine 145 to transmit the VR game scene images to the head-mounted display screens (one per eye);
- an eye tracker control engine 155 to receive and process the eye tracking data from the eye tracker sensor system 121;
- and a VR scene configurator 175, a VR game configurator 185, a VR rendering engine 125 and a VR motion control engine 195 to set up, calculate and adapt in real time a VR game scene with various object positions and trajectories for the user to visually track and manually hit, so as to stimulate his/her visuomotor system.
The visuomotor training engine 100 may then derive in real time the user gaze in the 3D VR scene by combining the tracked 2D eye position data as received by the eye tracker control engine 155 with the tracked head position and rotation in the real world as received by the HMD control engine 145. The VR rendering engine 125 may accordingly render in real time the VR scene as watched by the user on the HMD 123 screen. As will be apparent to those skilled in the art of VR programming, an exemplary visuomotor training engine 100 may be implemented on a desktop personal computer using the Unity or the Unreal VR game engine software modules in combination with the SteamVR VR tracking control engine software modules, but other embodiments and software supplies are also possible.
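By way of illustration only, the following sketch shows how such a gaze direction may be derived in the world frame from a tracked head pose and a 2D point of regard reported by the eye tracker. It is a minimal example assuming a quaternion head orientation and a normalized POR per eye; the function names, field-of-view values and pinhole-style mapping are illustrative assumptions, not the API of any of the devices or engines cited above.

```python
# Illustrative sketch (not the disclosed implementation): deriving a 3D gaze ray
# in the VR world frame from the tracked head pose and a 2D point of regard (POR).
import numpy as np
from scipy.spatial.transform import Rotation as R

def por_to_eye_direction(por_xy, half_fov_deg=(55.0, 55.0)):
    """Map a normalized POR (-1..1 per axis, screen centre at 0) to a unit
    direction in the head-local frame (x right, y up, z forward)."""
    ax = np.radians(por_xy[0] * half_fov_deg[0])
    ay = np.radians(por_xy[1] * half_fov_deg[1])
    d = np.array([np.tan(ax), np.tan(ay), 1.0])
    return d / np.linalg.norm(d)

def gaze_ray_world(head_pos, head_quat_xyzw, por_xy):
    """Combine head pose and eye POR into a gaze ray (origin, direction) in world coordinates."""
    direction_local = por_to_eye_direction(por_xy)
    direction_world = R.from_quat(head_quat_xyzw).apply(direction_local)
    return np.asarray(head_pos, float), direction_world

origin, direction = gaze_ray_world(head_pos=[0.0, 1.6, 0.0],
                                   head_quat_xyzw=[0.0, 0.0, 0.0, 1.0],
                                   por_xy=[0.2, -0.1])
print(origin, direction)
```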
In a possible embodiment, the visuomotor training system may comprise a separate neurostimulation device which can be operated independently from the visuomotor training engine and VR game stimulation setup. For instance, the portable Neuroelectrics™ HD-tDCS (High Definition transcranial Direct Current Stimulation) device may be used to this end, but other arrangements are also possible. According to medical practice, the neurostimulation session may be conducted with the user prior to, concurrently with, or after the VR stimulation session. In a possible embodiment, the neurostimulation device may thus be operated independently from the VR system. In an alternate possible embodiment, as illustrated in FIG. 1, the visuomotor training engine 100 may further comprise a neurostimulation control engine 165 to process and transmit neurostimulation signals to the neurostimulation electrodes 161 in accordance with the user VR training protocol.
In a further embodiment, the system of FIG.1 may also be extended with additional wearable sensors, for instance to track the position and/or direction and/or motion in the 3D space from additional user body parts. In a possible embodiment, additional wearable sensors may track one or more of the back motion, the chest motion, the second (non-dominant) hand motion, the feet motion, and/or the motion of one or more joints such as the ankles, the knees, the hips, the shoulders, the elbows, the wrists, the toes or the fingers.
In a still further embodiment, the system of FIG. 1 may also be extended with additional non-wearable body position or gesture tracking sensors, such as for instance a balance board or gesture tracking cameras. In such an embodiment, preferably the sensors are adapted, with a sensor signals processing engine, to convert the raw sensor information into up to 6 degrees of freedom position and orientation coordinates relative to a common reference, so that the visuomotor training engine 100 may derive motion information in a data format most suitable to be handled by the proposed motion processing methods, as will be further described in more detail throughout this disclosure.
In a still further possible embodiment, the system of FIG. 1 may also be extended to facilitate a mixed reality training experience, with additional sensors to track and render at least some object elements of the physical environment with which the user physically interacts, in combination with the user tracking and the VR scene elements with which the user virtually interacts.
Visuomotor training workflow
FIG. 2 represents an exemplary workflow as may be executed by the visuomotor training engine 100 of the visuomotor training system of FIG. 1. In a preferred embodiment, the workflow may comprise the steps of:
1. acquiring 200 the user information;
2. measuring 210 the initial user performance prior to the training session;
3. configuring 220 the static VR scene elements as a function of the user information and/or user performance;
4. calculating 230 the positions and trajectories of moving objects in the VR game as a function of the user information and/or user performance;
5. rendering 240 the VR game as a combination of the static VR scene elements and the moving VR game objects positions and trajectories in the VR scene, in accordance with at least the tracked user head, eye and hand positions received from the sensors of the first and second wearable devices 120, 130;
6. and measuring 260 the user performance during the training session.
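As an illustration of the workflow steps listed above, the following minimal sketch wires steps 200 to 260 into a single session loop. All helper functions, field names and values are hypothetical placeholders standing in for the real sensor, rendering and database operations of the system of FIG. 1.

```python
# Minimal runnable skeleton of the workflow of FIG. 2 (steps 200-260).
# Every helper below is an illustrative stub, not the disclosed implementation;
# real sensor I/O, rendering and database access are omitted.
def acquire_user_info(user_id):                      # step 200
    return {"id": user_id, "age": 67, "dominant_hand": "right"}

def measure_initial_performance(user_info):          # step 210
    return {"visual_field_loss": "left", "reach_speed_m_s": 0.4}

def configure_static_scene(user_info, baseline):     # step 220
    return {"environment": "living_room", "contrast": "high"}

def configure_game_objects(user_info, baseline):     # step 230
    # bias stimuli towards the damaged visual field measured at step 210
    return [{"type": "mosquito", "spawn_side": baseline["visual_field_loss"], "speed": 0.2}]

def run_session(user_id, n_frames=3):
    user_info = acquire_user_info(user_id)
    baseline = measure_initial_performance(user_info)
    scene = configure_static_scene(user_info, baseline)
    stimuli = configure_game_objects(user_info, baseline)
    tracking_log = []
    for _ in range(n_frames):                        # step 240: render loop (stubbed)
        tracking_log.append({"head": (0.0, 1.6, 0.0),
                             "hand": (0.1, 1.0, 0.3),
                             "gaze": (0.0, 0.0)})
    return {"scene": scene["environment"],           # step 260: performance summary
            "stimuli": len(stimuli),
            "frames_recorded": len(tracking_log)}

print(run_session("patient-001"))
```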
The visuomotor training engine system 100 may acquire 200 user information from the user records in the medical database 110, for instance the user age, visual correction, hand dominance, and any medical data information that may facilitate the adaptation 220, 230 of the static VR scene elements, with the VR scene configurator 175, and of the moving VR game objects in the VR scenery, with the VR game configurator 185, to the specific user condition. For instance, the elements of the VR scene may be adapted in size or in color to some specific eye impairment or color impairment pathologies, and/or the position and motion speed of the main objects of the VR game may be adapted to the age and physical condition of the user, for instance depending on whether the user can stand up and raise his/her hand over his/her head.
During the initial assessment as well as during the training session, the visuomotor training engine system 100 may record 210 additional user information in the medical database 110, such as for instance information on the motion performance and/or visual performance as measured by the system. In a preferred embodiment, the visuomotor training engine system 100 may calculate in real time, with the VR motion control engine 195, the hand motion information according to the hand position tracking data received from the handle control engine 135, the head motion information according to the head position tracking data received from the HMD control engine 145, and/or the eye tracking motion information data received from the eye tracker control engine 155. The visuomotor training engine system 100 may accordingly derive one or more visuomotor system performance measurements to characterize the user performance in the training session, and may record the resulting measurements information in the user records in the medical database 110.
Prior art solutions either focus primarily on the training of the visual system (for instance, the Novavision system) or primarily on the training of the motor system (for instance, the Mindmaze system). In contrast, the proposed visuomotor training system 100 jointly processes both visual performance and motion performance information by measuring 210, 260 the user visuomotor performance as a combination of eye movement tracking, head movement tracking and hand movement tracking in real time. The proposed visuomotor training system 100 accordingly makes it possible to assess the user visuomotor performance in real time during a training session (possibly in a closed-loop experiment), as well as the user performance progression from one training session to the next.
In a possible embodiment, the proposed visuomotor training system 100 may initially adapt 220 the static configuration of the VR scene elements, which can be adjusted primarily as a function of the visual capability of the user (for instance, the game complexity level).
Furthermore, various closed-loop experiments are enabled by the proposed workflow of FIG. 2. In a possible embodiment, the proposed visuomotor training system 100 may adapt 230 the dynamic calculation of the VR game objects positions and trajectories, which can be adjusted in real time to the user eye, head and hand positions and trajectories as tracked with the VR system sensors (closed-loop experiment). In a still further possible embodiment, the proposed visuomotor training system 100 may both adapt 220, 230 the static configuration of the VR scene elements and the dynamic calculation of the VR game objects positions and trajectories, so as to fully optimize the real-time moving VR scene rendering with respect to both the initial user capability and the on-going user performance.
In a possible further optional embodiment, the workflow may also comprise a step of applying 250 neurostimulation signals to the user visuomotor system. In a possible embodiment, the neurostimulation signals may be parametrized by the visuomotor training engine system 100 in accordance with the user information (age, pathology, etc.). In a further possible embodiment, the neurostimulation signals may be adapted by the visuomotor training engine system 100 in accordance with the measured user performance in the VR stimulation to further reinforce the overall visuomotor training in a closed loop approach.
Initial visual field assessment
Measuring 210 the initial user performance first comprises assessing the user visual field. Various methods from the prior art may be used to this end. In general, the visual field may be assessed by displaying dots or objects at different positions on the screen watched by the user, one eye at a time while the other eye is blinded. The user gives feedback on whether he/she sees the target. With the proposed VR setup, the HMD display 123 may be used as the screen, the handle trigger 131 may be used to capture the user feedback, and the VR rendering engine may display an initial basic VR scene with simple contrast dots (for instance, white dot objects displayed with a size and appearance and disappearance rates suitable for the user visual capability) over a uniform background (for instance, a black static scene). In a possible embodiment, the method of Tsapakis et al. in “Visual field examination method using virtual reality glasses compared with the Humphrey perimeter”, Clinical Ophthalmology (Auckland, N.Z.), 11, 1431-1443 (2017) may be implemented by the proposed visuomotor training system 100. Alternately, the method from Virtual Field, Inc. as described in US2019/0298166 may be used. As will be apparent to those skilled in the art of ophthalmology practice, other methods are also possible. FIG. 3 shows an exemplary visual field assessment result where the visual field, represented over the full quadrant with a thick grey line around the visible dots from the measurement, is clearly reduced in the left and top areas, while the blind spot area, represented with the black thick line, is shifted to the right and slightly to the bottom (the theoretical visual field from a healthy user should be centered on the quadrant).
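A minimal sketch of such a dot-based assessment is given below. The probe grid expressed in degrees of visual angle, the field extent, and the simulated user response standing in for the handle trigger feedback are illustrative assumptions, not the cited methods.

```python
# Illustrative sketch of the dot-based visual field assessment described above.
# Grid spacing, field extent and the simulated response model are assumptions.
import numpy as np

def probe_grid(radius_deg=30, step_deg=10):
    angles = np.arange(-radius_deg, radius_deg + 1, step_deg)
    return [(float(h), float(v)) for h in angles for v in angles]

def simulated_response(h_deg, v_deg):
    # placeholder for the real trigger feedback: here, a simulated left-field deficit
    return h_deg >= -10

def assess_visual_field():
    # one probe per grid position; the real protocol tests one eye at a time
    return {(h, v): simulated_response(h, v) for h, v in probe_grid()}

field_map = assess_visual_field()
print(f"{sum(field_map.values())}/{len(field_map)} probe positions reported as seen")
```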
VR static scene configuration
The proposed visuomotor training system 100 may configure, with the VR scene configurator module 175, a static VR world scene for the user to virtually explore. In order to stimulate the user motor capability in conditions as close as possible to daily life, different VR world scenes corresponding to different environments may be used. In initial training stages, a familiar environment such as a living room may be configured as the static scene. In a possible embodiment, the scene may be parametrized with various VR object shapes, textures and/or colors adapted to the user information (for instance according to the user age). In more advanced training stages, more challenging environments may be configured, such as for instance a city street scene.
The proposed visuomotor training system 100 may also configure, with the VR game configurator module 185, a VR game scenario to produce and animate VR object stimuli for the user to interact with as part of a VR game. To facilitate the interaction with the VR object stimuli, the VR handle 130 may be associated with a tool avatar that can be manipulated by the user avatar hand in the VR scene. In order to stimulate the user visuomotor capability with different movements through jointly engaging his visual system and his motor system in the rehabilitation protocol, different objects may be added to the VR scene. Static objects such as an object to be reached over a table or a button to be pressed on the wall (with the handle trigger 131) may be added to the static room scene in front of the user and possibly also on the sides, to encourage hand, eye and head motion in search for the object, especially around the boundaries of the neglected visual field in accordance with the formerly measured user visual field assessment information. In such a scenario, only limited user motion is required. In a possible embodiment, the VR objects may be parametrized with various shapes, textures and/or colors adapted to the user information (for instance according to the user age). In a further possible embodiment, more challenging VR objects may also be animated as moving stimuli in the VR scene during the training session, such as for instance a VR object target appearing, flying around and disappearing with different motion trajectories.
In an exemplary game implementation, the VR game configurator module 185 may produce and animate a mosquito avatar and the VR rendering engine 125 may render it into the 3D VR scene in accordance with the user tracking by the proposed VR system. In the VR scene, the hand of the user may be associated with a stick tool to reach the mosquitos. When the user reaches the mosquito with the tool (a task which requires efficient visuomotor coordination), the mosquito disappears and the user is rewarded (visual score, winning sound, or any reward feedback that is most suitable to the user).
User performance measurements
In a preferred embodiment, the proposed visuomotor training system 100 measures, with the VR motion control engine 195, a number of parameters corresponding to the user stimulation and performance during the training session.
For each VR object stimulus to chase, the VR motion control engine 195 may for instance record one or more of the following information:
• Start time
• Time of appearance in the part of VR scene rendered by the VR rendering engine 125 and HMD control engine 145 to the HMD 123
• Time of disappearance from the part of VR scene rendered by the VR rendering engine 125 and HMD control engine 145 to the HMD 123 (e.g. when the user loses track of it)
• Time when fixated by the user
• Time when caught by the user, with the handle trigger
• Number of times the participant collides with an obstacle (in the moving scenarios)
Preferably, the VR motion control engine may calculate a measurement of the user motion performance in association with the VR object stimulus chasing practice. Through the VR object stimulus chasing session, the user explores the VR world with his head; the VR motion control engine 195 may thus record one or more of the following information to characterize the user visual exploration parameters, such as:
• Total length of the trajectory (length of walk)
• Mean velocity
• Peak velocity
• Mean instantaneous acceleration
• Peak instantaneous acceleration
• Head movements: rotations and translations; number of each type of movement (number of rotations towards the right, of rotations towards the left, etc.) and time spent doing each type of movement.
Lastly, the performance of the user in reaching the target object in the VR game may be measured with the following reaching parameters, which characterize the “last” head movement of the trial, i.e. the one leading the user to catching the target:
• Reaction time (time passed between target onset and the movement onset)
• Movement time (from the moment the movement is started to the moment the target is caught)
• Movement length (length of trajectory)
• Mean velocity (during movement time)
• Peak velocity (during movement time)
• Time to peak velocity (time from movement onset to moment of peak velocity)
The above parameters may be calculated in real time by the VR motion control engine 195 from the HMD control engine 145 head tracking data measurements. In order to get a finer tracking of the user gaze direction as a combination of head movement and eye movement, additional parameters may be calculated in real time by the VR motion control engine 195 from the eye tracker control engine 155 data measurements, such as:
• The 3D direction of the eye gaze, which may be mapped to the closest direction among the right, top-right, top, top-left, left, bottom-left, bottom and bottom-right quadrants and the closest position among the central, middle and external areas relative to the user visual field theoretical full circle, as represented on FIG. 4 (a sketch of this classification is given after this list). As an illustration, the saccade vector depicted in light gray in FIG. 4 may be classified as a middle top-right movement, as the angle between it and the top-right movement axis is smaller than the angle between it and the right movement axis, and it occurs in the middle area between the central and the external areas of the user visual field.
• Time spent in each quadrant, classified as central / middle / external and right / top-right / top / top-left, etc.
• Number of saccades (movements larger than 5 degrees) in each direction
• Timing information for each group of saccades
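The sketch below illustrates one possible classification of a saccade vector into the eight directions and three eccentricity bands of FIG. 4, as referenced in the list above; the band radii are assumed values chosen for illustration only.

```python
# Illustrative classification of a gaze/saccade vector into one of eight
# directions and three eccentricity bands (central / middle / external).
# The band limits in degrees are assumptions, not values from the disclosure.
import numpy as np

DIRECTIONS = ["right", "top-right", "top", "top-left",
              "left", "bottom-left", "bottom", "bottom-right"]

def classify_saccade(dx_deg, dy_deg, band_limits_deg=(10.0, 20.0, 30.0)):
    angle = np.degrees(np.arctan2(dy_deg, dx_deg)) % 360.0
    direction = DIRECTIONS[int(((angle + 22.5) % 360.0) // 45.0)]
    eccentricity = float(np.hypot(dx_deg, dy_deg))
    if eccentricity <= band_limits_deg[0]:
        band = "central"
    elif eccentricity <= band_limits_deg[1]:
        band = "middle"
    else:
        band = "external"
    return band, direction

# a saccade up and to the right at middle eccentricity, as in FIG. 4
print(classify_saccade(10.0, 12.0))   # ('middle', 'top-right')
```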
Furthermore, through the whole training session, the user explores the VR world with his hand to reach the VR object stimuli targets; the VR motion control engine 195 may thus record one or more of the following information to characterize the user manual exploration parameters such as:
• Global hand movement on the 3 axes (sum of all movements on the x, y and z axes)
• Mean velocity
• Peak velocity
• Volume of the area explored (integration along 3 dimensions of the hand trajectory curve)
Lastly, the performance of the user in reaching the target object in the VR game may be measured with the following reaching parameters which characterize the “last” hand movement of the trial, i.e. the one leading the user to catching the target:
• Reaction time (time passed between target onset and the movement onset)
• Movement time (time between movement onset and target reached)
• Total time (reaction time plus movement time)
• Mean velocity during movement time
• Peak velocity during movement time
• Time to peak velocity (from movement onset to peak velocity)
• Movement length on the three dimensions
• Movement volume (integration on 3 dimensions of the curve)
• In addition, some parameters on the whole time spent in the virtual environment:
o Global length of trajectories (how much people have walked around, in 2D)
o Area explored (area comprised within the furthest points reached by participants, in 2D)
o Volume of the area defined by targets in the places where they have been reached (integration on 3 dimensions)
o Volume of the area explored by each hand trajectory over time (integration of the curve over time)
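As an illustration of how several of the parameters listed above may be derived from sampled tracking data, the following sketch computes the trajectory length, mean and peak velocity, reaction time and time to peak velocity from a toy 3D trajectory; the sampling, the onset-detection threshold and the example data are assumptions made for illustration only.

```python
# Illustrative kinematic parameters from a sampled 3D trajectory (hand or head).
# The onset speed threshold and the toy trajectory are assumptions.
import numpy as np

def reaching_parameters(positions, timestamps, target_onset_t, onset_speed=0.05):
    """positions: (N, 3) array of 3D positions; timestamps: (N,) seconds."""
    positions = np.asarray(positions, float)
    timestamps = np.asarray(timestamps, float)
    steps = np.diff(positions, axis=0)
    dt = np.diff(timestamps)
    speeds = np.linalg.norm(steps, axis=1) / dt
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    onset_idx = int(np.argmax(speeds > onset_speed))   # first sample above threshold (0 if none)
    peak_idx = int(np.argmax(speeds))
    return {
        "movement_length": path_length,
        "mean_velocity": float(speeds.mean()),
        "peak_velocity": float(speeds.max()),
        "reaction_time": float(timestamps[onset_idx] - target_onset_t),      # approximate
        "time_to_peak_velocity": float(timestamps[peak_idx] - timestamps[onset_idx]),
    }

t = np.linspace(0.0, 2.0, 101)
hand = np.c_[0.3 * np.minimum(t, 1.5), 1.0 + 0.1 * t, 0.2 * t]   # toy hand trajectory
print(reaching_parameters(hand, t, target_onset_t=0.0))
```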
In setups where multiple user body motion tracking sensors are employed (e.g. to separately track the main body back or chest, the joints, and the hands) the movement of the hand tracking sensor may be measured as the trajectory of the tracked 6DOF position relative to a second tracking sensor 6DOF position, for instance a sensor placed on the user’s chest, so as to better characterize the arm+hand motion relative to the main body motion. Other embodiments are also possible.
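A minimal sketch of such a relative measurement is given below, expressing the hand pose in the frame of a chest-worn reference sensor; the quaternion convention (x, y, z, w) follows scipy and, together with the example poses, is an assumption for illustration.

```python
# Illustrative computation of the hand 6DOF pose relative to a chest-worn sensor.
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_pose(hand_pos, hand_quat, chest_pos, chest_quat):
    chest_rot = R.from_quat(chest_quat)
    rel_pos = chest_rot.inv().apply(np.subtract(hand_pos, chest_pos))   # hand position in chest frame
    rel_quat = (chest_rot.inv() * R.from_quat(hand_quat)).as_quat()     # hand orientation in chest frame
    return rel_pos, rel_quat

pos, quat = relative_pose(hand_pos=[0.4, 1.1, 0.5], hand_quat=[0, 0, 0, 1],
                          chest_pos=[0.0, 1.3, 0.0], chest_quat=[0, 0.7071, 0, 0.7071])
print(pos, quat)
```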
User movement analysis
In order to quantify the user performance in the training session as a score, the visuomotor training system 100 may calculate a scalar metric that is representative of the deviation of the user performance compared to a reference performance. The reference performance may be that of a control population; for instance, it may be pre-defined from measurements of healthy individuals taking the same training exercise.
The proposed VR tracking system 100 measures multiple position variables, namely from the hand, the head and the eye gaze, as a function of time sampled along the training session period. To fully characterize the position of the hand or the head in the 3D environment in which the user has to evolve, the proposed VR motion control engine may record time series vectors of 3 position variables Phand(xn, yn, zn) and 3 orientation angle variables Ohand(un, vn, wn) for the hand, as well as 3 position variables Phead(xd, yd, zd) and 3 orientation angle variables Ohead(ud, vd, wd) for the head, at different sampling times of the VR tracking system while the user is taking a trajectory to track and reach the VR stimulus object target in the VR game scenario with his head and dominant hand. Preferably, the VR tracking system 100 may further measure two position variables Qgaze(hg, vg) as the horizontal and vertical position of the gaze on the gaze quadrant of FIG. 4 to track the eye motion separately from the head motion (Qgaze(hg, vg) = (0, 0) corresponding to the center of the theoretical visual field). As will be apparent to those skilled in the art of kinematics, other embodiments are also possible, for instance using a time series of 3D positions and quaternions for tracking the orientations, and possibly also for the eye gaze tracking.
FIG. 5 and FIG. 6 illustrate two examples of different positions and orientations for the user head, hand and pair of eyes gaze respectively. FIG.5 corresponds to an early sampling time in the training session where a VR object stimulus has been introduced in the 3D VR scene but the user has not seen it yet. FIG. 6 corresponds to the successful end of the VR object stimulus chasing game session when the user has seen and reached the VR object stimulus by properly coordinating his head, eyes and hand. The trajectories of the combined head+eye direction as well as the dominant hand can thus be tracked in real time along all intermediate poses as the user moves from the initial position in FIG.5 to the end position in FIG.6, with the proposed VR system to measure the user motor performance in this visuomotor task. Furthermore, depending on the formerly assessed user performance, various difficulty levels may be simulated for this task. For instance, the VR object stimulus may remain at a fixed position in the 3D VR scene to facilitate its capture, or in more advanced levels it may move to different positions so that the user has to adjust his/her trajectory in real-time. More generally, the texture, the color, the contrast, the size, the shape, the appearance time, the disappearance time, the direction and/or the speed of the VR object stimulus in the 3D VR scene may be adapted to the user by the VR game configurator 185.
Moreover, even if the user does not manage to reach the target in the end (as defined then by a timeout for the VR object stimulus chasing session), the eye, head and hand trajectory information records may still provide useful clues on the operation of the visuomotor system. This will be better understood as illustrated by FIG. 5 and FIG. 6. First, by combining the measured head position and orientation with the measured eye gaze quadrant at any sampling time t, the VR motion control engine 195 may derive the direction axis of the user avatar gaze in the 3D VR scene. The VR motion control engine 195 may then measure a gaze distance dg(t) from this calculated axis to the actual position of the VR object stimulus in the 3D VR space at time t. As can be observed with a healthy user, a decreasing gaze distance indicates that the user progressively reaches the position of the VR object stimulus to chase with his head+eye, in other words, that the user has seen the VR object stimulus target in his/her visual field and aligned his head and eyes accordingly with his/her visuomotor system. Second, the VR motion control engine 195 may further measure a hand reach distance dh(t) from the measured position of the hand to the actual position of the VR object stimulus in the 3D VR space. As can be observed with a healthy user, a decreasing distance indicates that the user progressively reaches the position of the VR object stimulus to chase with his hand, in other words, that the user is reaching the VR object stimulus target with his hand and body motion.
In a further possible embodiment (not illustrated), the VR motion control engine 195 may derive a pointing orientation axis from the hand position and orientation and measure a third hand alignment distance dha(t) from the pointing orientation axis to the actual position of the VR object stimulus in the 3D VR space at time t. A decreasing distance indicates that the user progressively orients his/her hand towards the VR object stimulus, in other words, that the user is trying to reach the VR object stimulus target with his hand motion independently from his/her body motion.
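The two distances discussed above can be illustrated with the following sketch, which computes the gaze distance dg(t) as the perpendicular distance from the stimulus position to the derived gaze axis, and the hand reach distance dh(t) as a simple point-to-point distance; the coordinates used are illustrative placeholders.

```python
# Illustrative gaze distance dg(t) (point-to-ray) and hand reach distance dh(t)
# (point-to-point) in the 3D VR space; example coordinates are placeholders.
import numpy as np

def gaze_distance(gaze_origin, gaze_direction, target_pos):
    o, d, p = (np.asarray(v, float) for v in (gaze_origin, gaze_direction, target_pos))
    d = d / np.linalg.norm(d)
    # perpendicular component of the target offset with respect to the gaze axis
    return float(np.linalg.norm((p - o) - np.dot(p - o, d) * d))

def hand_reach_distance(hand_pos, target_pos):
    return float(np.linalg.norm(np.subtract(target_pos, hand_pos)))

target = [0.5, 1.4, 1.0]
print(gaze_distance([0.0, 1.6, 0.0], [0.0, 0.0, 1.0], target))
print(hand_reach_distance([0.3, 1.0, 0.4], target))
```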
To facilitate the comparison of the user performance with an expected performance as measured with a healthy control population, the VR motion control engine 195 may also process the eye, head and hand trajectory information to derive scalar motion performance measurements as indicators of the user motion drift relative to the reference motion data captured from a healthy control subject or population in similar game conditions. The user visuomotor performance may be tracked during any training session as a time series of the position and orientation vectors respectively for the head and the hand, in combination with a time series of the position coordinates for the eye gaze relative to the theoretical visual field quadrant. In a possible embodiment, the VR motion control engine 195 may apply a multivariate statistical analysis method such as for instance the Principal Component Analysis method to summarize the user motion information conveyed by the measured time series data of the user motion recorded over time for the head, hand and eye gaze while the VR stimulus object is rendered onto the VR display screen, so as to characterize the overall user visuomotor movement in a lower dimension data set formed by the calculated principal components.
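By way of illustration, the sketch below applies a standard PCA to a per-sample feature matrix stacking head, hand and gaze measurements so as to summarize a session in a few principal components; the feature layout, the number of components and the random data are assumptions, not the disclosed processing.

```python
# Illustrative PCA summary of a session's motion time series.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples = 500
head = rng.normal(size=(n_samples, 6))    # x, y, z and three orientation angles
hand = rng.normal(size=(n_samples, 6))
gaze = rng.normal(size=(n_samples, 2))    # horizontal, vertical gaze position

features = np.hstack([head, hand, gaze])  # (n_samples, 14) per-sample motion vector
pca = PCA(n_components=3)
scores = pca.fit_transform(features)      # low-dimensional summary of the session
print(scores.shape, pca.explained_variance_ratio_)
```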
As will be apparent to those skilled in the art of kinematics, other mathematical modeling approaches may be used too. As an alternative to PCA, the VR motion control engine 195 may apply a Functional Principal Component Analysis (FPCA) method to the kinematic time series of the head, hand and eye position and orientation data, for instance by extending the hand motion analysis method of Cortes et al., “A short and distinct time window for recovery of arm motor control early after stroke revealed with a global measure of trajectory kinematics”, Neurorehabilitation and Neural Repair, Vol. 31, No. 6, pp. 552-560, 2017, to the head and eye tracking.
In a further possible embodiment, the user motion drift relative to the healthy controls motion may be measured as a distance between the user data and a reference data in a multidimensional space. The distance may be measured as the Euclidean distance in 3D VR space, or by more dedicated statistical distances in a transformed space, such as for instance the Mahalanobis distance to compare the transformed FPCA reaching trajectories for the eye, head and hand to the transformed FPCA reaching trajectories of a healthy control population, as proposed by Cortes et al.
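A minimal sketch of such a drift measurement is given below, comparing a user's component scores against a synthetic healthy reference distribution with both a Euclidean and a Mahalanobis distance; the reference data and score values are placeholders for illustration.

```python
# Illustrative motion-drift score: Euclidean and Mahalanobis distance between the
# user's component scores and a healthy reference distribution (synthetic here).
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

rng = np.random.default_rng(1)
reference = rng.normal(size=(200, 3))     # healthy-control component scores
user = np.array([1.5, -0.8, 2.1])         # user session component scores

ref_mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

drift_euclidean = euclidean(user, ref_mean)
drift_mahalanobis = mahalanobis(user, ref_mean, cov_inv)
print(f"Euclidean drift: {drift_euclidean:.2f}, Mahalanobis drift: {drift_mahalanobis:.2f}")
```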
Motion-adaptive visual training
If the measured user motion drift is above a given threshold, there may be a need to adapt the next session training to encourage a better performance for the user. In a possible embodiment, the VR motion control engine 195 may feed back information from the measured user performance, such as the gaze distance dg(t), the hand reach distance dh(t), the hand alignment distance dha(t), and/or the user motion principal components, to the VR scene configurator to configure 220 the static VR scene elements in accordance with the measured user eye, hand and head trajectories in real-time. In addition, or alternately, the VR motion control engine 195 may feed back information from the measured user performance, such as the gaze distance dg(t), the hand reach distance dh(t), the hand alignment distance dha(t), and/or the user motion principal components, to the VR game configurator to calculate 230 the VR game object stimulus position in the 3D VR scene (for a static stimulus, in basic training levels) and possibly the VR game object stimulus trajectory (for a moving stimulus, in advanced training levels) in accordance with the measured user motion in real-time. In a possible embodiment, the static VR scene may have different VR object stimuli rendering details such as shape, size, contrast, texture and/or color to draw more attention of the user to some areas. In a further or alternate possible embodiment, the VR game object stimuli may preferably be positioned and/or densified around the border areas and/or moving towards the border areas of the damaged visual field for a longer period of time, to retain more attention of the user in those areas.
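As an illustration of this motion-adaptive behaviour, the following sketch adapts a few stimulus properties when the measured drift exceeds a threshold; the property names, the threshold value and the adjustment amounts are illustrative assumptions, not specified values of the present disclosure.

```python
# Illustrative adaptation of VR object stimulus properties for the next session
# when the measured motion drift exceeds a threshold; all values are placeholders.
def adapt_stimulus(stimulus, motion_drift, damaged_side, drift_threshold=2.0):
    adapted = dict(stimulus)
    if motion_drift > drift_threshold:
        adapted["size"] = stimulus["size"] * 1.5                      # easier to see
        adapted["contrast"] = min(1.0, stimulus["contrast"] + 0.2)
        adapted["spawn_side"] = damaged_side                          # densify near the damaged field
        adapted["presence_time_s"] = stimulus["presence_time_s"] + 2.0
    return adapted

stimulus = {"size": 0.1, "contrast": 0.6, "spawn_side": "center", "presence_time_s": 4.0}
print(adapt_stimulus(stimulus, motion_drift=2.7, damaged_side="left"))
```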
Neurostimulation
In a possible further embodiment, there may be a need to apply neurostimulation in addition to the visual VR training to facilitate a better recovery for the user. In order to reinforce the visuomotor system rehabilitation, neurostimulation may be applied to the user to electrically stimulate the visual cortex brain areas in addition to the VR-based sensory stimulation. Neurostimulation may be applied concurrently with the VR training session, or before, or after the session. Preferably, an HD-tDCS neurostimulation device is used so that a specific subset of electrodes can be activated, specifically on the visual cortex area and directly around it. In a further possible embodiment, the HD-tDCS stimulation protocol may also be adapted to the current user performance as measured with the above-described motion analysis methods; for instance, the decision to apply or not to apply the neurostimulation after the session may depend on the user performance during the training, or the activation of individual electrodes may be tuned to the specific user visuomotor performance characteristics (e.g. shifting the stimulation towards certain specific brain areas according to the measured user motion drift).
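Purely as an illustration of this decision logic, the following sketch returns a simple stimulation plan based on the measured motion drift; the one-hour windows follow the time range recited above, while the drift threshold and the returned fields are placeholders, and no real stimulation-device interface is used or implied.

```python
# Illustrative decision sketch for the optional HD-tDCS step; the drift threshold
# is an assumed value, and no stimulation-device API is invoked here.
def plan_neurostimulation(motion_drift, drift_threshold=2.0, session_window_hours=1.0):
    apply_stimulation = motion_drift > drift_threshold
    return {
        "apply_hd_tdcs": apply_stimulation,
        "window_before_session_h": session_window_hours,
        "window_after_session_h": session_window_hours,
    }

print(plan_neurostimulation(motion_drift=2.7))
```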
Other embodiments
The particulars shown herein are by way of example and for purposes of illustrative discussion of the various embodiments only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the methods and compositions described herein. In this regard, no attempt is made to show more detail than is necessary for a fundamental understanding, the description making apparent to those skilled in the art how the several forms may be embodied in practice.
The proposed methods and systems will now be described by reference to more detailed embodiments. The proposed methods and systems may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope to those skilled in the art.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description herein is for describing particular embodiments only and is not intended to be limiting. As used in the description and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.

Claims

1. A method to render user-adaptive content for a user with a virtual reality (VR) system, the method comprising:
• Rendering in a first training session, with a virtual reality (VR) rendering engine, a 3D VR game scene comprising at least one VR object stimulus on a VR head mounted display screen;
• Receiving commands from the user dominant hand with a VR handle, said commands aiming to reach and catch the VR object stimulus with the VR handle;
• Measuring, with a VR system head mounted sensor, the time series of the user head position and/or orientation vectors in the 3D VR environment for the first session;
• Measuring, with a VR system handle sensor, the time series of the user hand position and/or orientation vectors in the 3D VR environment for the first session;
• Measuring, with an eye tracking system, the time series of the user gaze direction vectors relative to the user head position and orientation origin in the 3D virtual environment for the first session; characterized in that the method further comprises:
• Applying an HD-tDCS neurostimulation to the user in a time range between one hour before and one hour after the first session;
• Comparing, with the VR motion control engine, the measured user motion time series data with a reference motion time series data to determine a user motion drift for the first session;
• Adapting at least one property of the VR object stimulus to be rendered on the VR head mounted display screen in a next session if the user motion drift exceeds a threshold; the method further comprising:
• Rendering in a next session, with the virtual reality (VR) rendering engine, the 3D VR game scene comprising the adapted VR object stimulus.
2. The method of claim 1, characterized in that comparing, with the VR motion control engine, the measured user motion time series data with a reference motion time series data comprises applying a principal component analysis (PCA) or a functional principal component analysis (FPCA) to extract and compare the principal components of the user motion time series of positions and/or orientations.
3. The method of claims 1 or 2, characterized in that the user motion drift is measured as a distance between the user motion time series data and the healthy reference time series data.
4. The method of claim 3 wherein the distance is the Euclidean distance.
5. The method of claim 3 wherein the distance is the Mahalanobis distance.
6. The method of any preceding claim, comprising adapting the texture, color, contrast, shape and/or size property of the VR object stimulus.
7. The method of any preceding claim, comprising adapting the VR object stimulus time of appearance, time of disappearance, direction and/or motion during its presence in the 3D VR game scene.
8. The method of any preceding claim, further comprising applying a HD-tDCS neurostimulation to the user in a time range between one hour before and one hour after the next session if the user motion drift exceeds an acceptable motion drift threshold in the first session.
Patent Citations

EP2012733A1 (Novavision, Inc.): Process and device for apportioning therapeutic vision stimuli
US9251407B2 (Northrop Grumman Systems Corporation): Security system utilizing gesture recognition
US20160235323A1 (Mindmaze SA): Physiological parameter measurement and feedback system
US9706910B1 (Vivid Vision, Inc.): Interactive system for vision assessment and correction
US20170156965A1 (Libra At Home Ltd): Virtual reality apparatus and methods therefor
WO2016092563A2 (Indian Institute of Technology Gandhinagar): Smart eye system for visuomotor dysfunction diagnosis and its operant conditioning
US20180336973A1 (MindMaze Holding SA): System, method and apparatus for treatment of neglect
US20190298166A1 (Virtual Field, Inc.): System and method for testing visual field

Non-Patent Citations

Cortes et al., "A short and distinct time window for recovery of arm motor control early after stroke revealed with a global measure of trajectory kinematics", Neurorehabilitation and Neural Repair, Vol. 31, No. 6, 2017, pp. 552-560
Plow et al., "Comparison of visual field training for hemianopia with active versus sham transcranial direct cortical stimulation", Neurorehabilitation and Neural Repair, 2012, pp. 1-11
Tsapakis et al., "Visual field examination method using virtual reality glasses compared with the Humphrey perimeter", Clinical Ophthalmology (Auckland, N.Z.), Vol. 11, 2017, pp. 1431-1443
