CN116578185A - User capability analysis method and device based on virtual reality and computer readable storage medium


Info

Publication number
CN116578185A
Authority
CN
China
Prior art keywords
user
virtual reality
capability
interaction
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310507322.4A
Other languages
Chinese (zh)
Inventor
李文玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shuyao Intelligent Technology Co ltd
Original Assignee
Shanghai Shuyao Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shuyao Intelligent Technology Co ltd filed Critical Shanghai Shuyao Intelligent Technology Co ltd
Priority to CN202310507322.4A
Publication of CN116578185A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a virtual reality-based user capability analysis method and device and a computer readable storage medium. The method comprises: creating a virtual reality scene and setting an interactive task in the virtual reality scene according to the capability of the user to be analyzed, wherein performance of the interactive task is influenced by that capability; collecting interaction data from the user while the user executes the interactive task in the virtual reality scene; identifying the various features that the user exhibits when executing the interactive task and that reflect the capability of the user; and analyzing the levels of those features on the basis of the user's interaction data. By identifying the features that reflect the user's capability during the interactive task and evaluating them from the collected interaction data, the invention analyzes and evaluates the user's capability in a targeted, comprehensive and complete manner.

Description

User capability analysis method and device based on virtual reality and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a user capability analysis method and apparatus based on virtual reality, and a computer readable storage medium.
Background
At present, virtual reality and human-computer interaction technologies are widely applied in many fields, including the treatment and rehabilitation of patients with autism and developmental delay: by collecting a user's electroencephalogram signals, eye movement signals and the like in a virtual reality scene, the patient's emotional changes are analyzed, and corresponding treatment measures are then taken to treat the patient and improve the patient's condition.
Before training and treatment of patients with autism or developmental delay, the patients' social communication and other abilities need to be evaluated. The drawback of the prior art is that fixed indicators are commonly used to analyze the patient, while a patient's abilities in fact manifest in different ways under different conditions, so fixed indicators cannot accurately analyze and evaluate all aspects of the patient's abilities.
Disclosure of Invention
In order to overcome the above drawbacks, the present invention provides a virtual reality-based user capability analysis method, device and computer readable storage medium that enable accurate analysis and evaluation of a user's different capabilities.
In a first aspect, the present invention provides a virtual reality-based user capability analysis method, comprising: creating a virtual reality scene and setting an interactive task in the virtual reality scene according to the capability of the user to be analyzed, wherein performance of the interactive task is influenced by that capability; collecting interaction data from the user while the user executes the interactive task in the virtual reality scene; identifying the features that the user exhibits when executing the interactive task and that reflect the capability of the user; and analyzing the levels of the user's features according to the user's interaction data.
Preferably, the step of "collecting interaction data from the user" of the aforementioned virtual reality-based user capability analysis method includes: and collecting voice, vision, behavior and/or brain electricity data when the user executes the interaction task.
Preferably, the step of identifying the plurality of features reflecting the capability of the user, which are displayed by the user when performing the interactive task, according to the user capability analysis method based on virtual reality, includes: when the user's ability is social skills, the user's multiple features are identified as gaze status, response to other people content, self-introduced content, body language expression content, talking to strangers content.
Preferably, the step of identifying the plurality of features reflecting the capability of the user, which are displayed by the user when performing the interactive task, according to the user capability analysis method based on virtual reality, includes: when the user's ability is visual ability, the user's multiple features are identified as joint attention, pointing follow, visual fixation, visual follow-up, flexible follow-up.
Preferably, the step of identifying the plurality of features reflecting the capability of the user, which are displayed by the user when performing the interactive task, according to the user capability analysis method based on virtual reality, includes: when the user's ability is social rule understanding ability, identifying the user's multiple features as belonging relationship understanding cases, conditional relationship understanding cases, causal relationship understanding cases, turning relationship understanding cases, social rule understanding cases.
Preferably, the step of creating a virtual reality scene and setting an interaction task in the virtual reality scene according to the capability of the user to be analyzed according to the user capability analysis method includes: according to the capability of a user to analyze, selecting a plurality of elements from a preset material library, creating a virtual reality scene by using the plurality of elements, selecting one or more interaction modes realized based on the plurality of elements from a preset scene library, and setting interaction tasks in the virtual reality scene by using the one or more interaction modes.
Preferably, the foregoing method for analyzing user capability based on virtual reality further includes, before the step of selecting a plurality of elements from a preset material library according to the capability of the user to be analyzed and creating a virtual reality scene using the plurality of elements: and setting the elements in the material library according to the elements in the real scene where the user is located.
Preferably, the foregoing method for analyzing user capability based on virtual reality, the step of setting elements in the material library according to the elements in the real scene where the user is located, further includes: and setting the elements in the material library according to the common elements in the multiple real scenes where the user is located.
In a second aspect, the present invention provides a virtual reality-based user capability analysis device, comprising: a scene creation module, which creates a virtual reality scene and sets an interactive task in the virtual reality scene according to the capability of the user to be analyzed, performance of the interactive task being influenced by that capability; a data acquisition module, which collects interaction data from the user while the user executes the interactive task in the virtual reality scene; a feature identification module, which identifies the features that the user exhibits when executing the interactive task and that reflect the capability of the user; and a feature analysis module, which analyzes the levels of the user's features according to the user's interaction data.
In a third aspect, the present invention provides a computer readable storage medium storing program code adapted to be loaded and executed by a processor to perform the aforementioned virtual reality-based user capability analysis method.
The technical solution provided by the invention has at least one or more of the following beneficial effects:
Unlike the prior art, the technical solution of the invention does not adopt a fixed analysis and evaluation method. According to the capability that the user needs analyzed and evaluated, it first creates a virtual reality scene and an interactive task suited to letting the user exercise that capability; it then identifies the features that the user can exhibit while executing the interactive task and that reflect the capability; and it finally analyzes the actual level of those features on the basis of the user's interaction data, so that the user's capability is analyzed and evaluated in a targeted, comprehensive and complete manner.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art, the drawings are for illustrative purposes only and are not intended to limit the scope of the present invention. In the drawings:
FIG. 1 is a flow chart of a virtual reality based user capability analysis method according to one embodiment of the invention;
FIG. 2 is a flow chart of a virtual reality based user capability analysis method according to one embodiment of the invention;
FIG. 3 is a block diagram of a virtual reality based user capability analysis apparatus according to one embodiment of the invention;
FIG. 4 is a block diagram of a virtual reality-based user capability analysis apparatus according to one embodiment of the invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module" or "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, and memory, or software components such as program code, or a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions and may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, and random access memory. The term "A and/or B" denotes all possible combinations of A and B, namely A alone, B alone, or A and B together. The term "at least one A or B" or "at least one of A and B" has a similar meaning and may include A alone, B alone, or A and B. The singular forms "a", "an" and "the" include plural referents.
As shown in fig. 1, in one embodiment of the present invention, there is provided a user capability analysis method based on virtual reality, including:
step S110, creating a virtual reality scene and setting an interaction task in the virtual reality scene according to the capability of the user to be analyzed, wherein the performance of the interaction task is influenced by the capability of the user.
In this embodiment, the capability of the user to be analyzed and evaluated is not limited; it may be, for example, social ability. In this embodiment, the virtual reality scene is created and the interactive task is formulated according to that capability, which ensures that, when executing the interactive task in the virtual reality scene, the user is well placed to exhibit his or her personal capability.
In this embodiment, the provided virtual reality scenes include, but are not limited to, a social skill evaluation scene, a visual ability evaluation scene and a social rule understanding evaluation scene, which simulate real social interaction scenes and each involve one or more social abilities. The scene can be presented in various ways, for example on a touch screen display, a tablet computer or virtual reality (VR) glasses; the purpose is to present a social scene to the user, monitor the user's performance in that scene and thereby evaluate the user's social ability. The user may sit or stand in front of the touch screen, watch the social scene on the screen and interact with the characters or pictures in it while data are acquired. Scene presentation can likewise be implemented with a tablet computer or VR glasses.
Step S120, collecting interaction data from the user while the user executes the interaction task in the virtual reality scene, specifically including collecting voice, visual, behavioral and/or electroencephalogram data while the user executes the interaction task.
This embodiment mainly involves behavioral interaction and voice interaction. In behavioral interaction, a person or object in the scene interacts with the user through actions; for example, a virtual child in the scene waves at the child user, and the child user can wave back. In voice interaction, characters in the scene communicate with the child user through speech; for example, the virtual child may initiate a conversation with the child user, and the conversation can then be maintained between the child user and the virtual child.
In this embodiment, collecting the user's voice, visual, behavioral and electroencephalogram data makes it possible to realize positioning tracking, motion capture, orientation recognition, eye movement tracking, concentration detection and relaxation detection. Positioning tracking is implemented by spatial positioning devices together with wearable devices: the spatial positioning devices are arranged around the room, the user wears a wristband and an ankle band in the test space, and the positioning devices track the user's position in real time, for example recognizing that the user is seated while another person converses with him or her. Motion capture is implemented by the wearable devices: in addition to the wristband and ankle band, a head-mounted positioner is provided, and the system reconstructs the user's limb state from the positions of the head, upper-limb and lower-limb devices, for example recognizing whether the user waves back when another person waves at him or her. Orientation recognition, implemented by the headset, monitors the orientation of the user's head and can evaluate whether the user turns toward the corresponding character when communicating with characters in different directions. Eye movement tracking, implemented by a head-mounted eye tracker, detects whether the user looks at the face of the person he or she is talking to and supports analysis of the user's visual ability. Concentration detection is implemented by a head-mounted EEG monitoring device: characteristic EEG waves are monitored to analyze the user's concentration while talking with others. Relaxation detection is likewise implemented by the head-mounted EEG device: characteristic EEG waves are monitored to analyze how relaxed the user is in a familiar environment. Behavior monitoring provides the user's behavioral data. Voice time information refers to the start and end times of an utterance by the user; voice text information refers to the specific text content of what the user says during voice interaction.
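The patent does not specify how the characteristic EEG waves are converted into concentration and relaxation scores. The following is a minimal sketch, assuming the commonly used band-power ratios (beta over alpha plus theta as an engagement proxy, alpha over beta as a relaxation proxy) computed from a single EEG channel with SciPy; the band edges and the 256 Hz sampling rate are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `eeg` in the [lo, hi) Hz band (Welch PSD)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean()) if mask.any() else 0.0

def concentration_index(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Beta / (alpha + theta) power ratio, a common engagement proxy."""
    beta = band_power(eeg, fs, 13.0, 30.0)
    alpha = band_power(eeg, fs, 8.0, 13.0)
    theta = band_power(eeg, fs, 4.0, 8.0)
    return beta / (alpha + theta + 1e-12)

def relaxation_index(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Alpha / beta power ratio, a common relaxation proxy."""
    alpha = band_power(eeg, fs, 8.0, 13.0)
    beta = band_power(eeg, fs, 13.0, 30.0)
    return alpha / (beta + 1e-12)
```

In practice such indices are usually baselined per user (e.g., against a resting recording) before being compared across sessions.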
Step S130, identifying the various features that the user exhibits while executing the interactive task and that reflect the user's capability.
In this embodiment, because the user's abilities manifest in different ways when interactive tasks are executed in different scenes, accurate and comprehensive analysis and evaluation of those abilities requires identifying in advance which features, exhibited by the user while executing the interactive task in the current scene, reflect the ability being analyzed.
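In code, this identification step can be as simple as a lookup from the capability under analysis to its feature set. A minimal sketch follows, using the feature lists enumerated in the embodiments below; the English identifiers are illustrative translations, not names defined by the patent.

```python
# Mapping from the capability under analysis to the features that reflect it.
CAPABILITY_FEATURES: dict[str, list[str]] = {
    "social_skill": [
        "gaze_status", "response_to_others", "self_introduction",
        "body_language_expression", "conversation_with_strangers",
    ],
    "visual_ability": [
        "joint_attention", "pointing_following", "visual_fixation",
        "visual_following", "flexible_following",
    ],
    "social_rule_understanding": [
        "belonging_relations", "conditional_relations",
        "causal_relations", "adversative_relations", "social_rules",
    ],
}

def identify_features(capability: str) -> list[str]:
    """Step S130: look up the features to evaluate for this capability."""
    return CAPABILITY_FEATURES[capability]
```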
Step S140, analyzing the levels of various features of the user according to the interactive data of the user.
In the technical solution of this embodiment, a virtual reality scene and an interactive task suited to letting the user exercise the capability that needs to be analyzed and evaluated are first created according to that capability; the features that the user can exhibit while executing the interactive task and that reflect the capability are identified; and the actual level of those features is then analyzed on the basis of the user's interaction data, so that the user's capability is analyzed and evaluated in a targeted, comprehensive and complete manner.
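Step S140 leaves the scoring rule open. As one possible reading, each feature can be reduced to a normalized metric and thresholded into a coarse level; the sketch below assumes such a scheme, with hypothetical thresholds.

```python
def feature_level(metric: float, low: float = 0.3, high: float = 0.7) -> str:
    """Step S140 sketch: map a normalized feature metric in [0, 1] to a
    coarse level. The patent fixes no scoring rule; the thresholds here
    are illustrative assumptions."""
    if metric < low:
        return "low"
    if metric < high:
        return "medium"
    return "high"

# e.g. a user who gazed at the key areas for 45% of the task
print(feature_level(0.45))  # -> medium
```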
As shown in fig. 2, in one embodiment of the present invention, there is provided a user capability analysis method based on virtual reality, including:
step S210, setting elements in a material library according to the elements in the real scene where the user is located.
In this embodiment, a dedicated material library is prepared for the user so that the real scenes in which the user lives and the virtual reality scene share the same elements; this improves the continuity between the real scene and the virtual reality scene and helps the user adapt to the virtual reality scene as quickly as possible.
Specifically, the elements in the material library are set according to the elements common to the multiple real scenes where the user spends time.
In this embodiment, the user is generally most familiar with the elements that recur across his or her real scenes; setting the virtual-scene elements only from those common elements therefore ensures that the user adapts to the virtual reality scene quickly while the number of elements in the scene is kept under control.
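Computationally, the common elements are simply the intersection of the element sets of the user's real scenes. A minimal sketch, with hypothetical scene inventories:

```python
def common_elements(real_scenes: list[set[str]]) -> set[str]:
    """Elements shared by all of the user's real scenes (set intersection)."""
    return set.intersection(*real_scenes) if real_scenes else set()

# Illustrative (hypothetical) element inventories of three observed scenes.
scenes = [
    {"table", "chair", "teacher", "ball"},
    {"table", "chair", "teacher", "blackboard"},
    {"table", "chair", "teacher", "slide"},
]
print(sorted(common_elements(scenes)))  # ['chair', 'table', 'teacher']
```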
Step S220, setting the interaction modes in the scenario library according to the user's historical activity records.
In this embodiment, a dedicated scenario library is prepared for the user so that the interactive tasks in the virtual reality scene are associated with the user's historical activities, which helps the user adapt to the virtual reality scene as quickly as possible.
Step S230, selecting a plurality of elements from the preset material library according to the capability of the user to be analyzed, creating a virtual reality scene with those elements, selecting from the preset scenario library one or more interaction modes realized on the basis of those elements, and using the selected interaction modes to set the interactive task in the virtual reality scene.
In this embodiment, based on the design of the material library and the scenario library, a virtual reality scene is decomposed into the elements that constitute it and the various interaction modes that can be performed in it. Suppose the user is an autistic patient (here and below the technical solution is described using an autistic patient as an example; in practice, the technical solution of the invention is equally applicable to patients with developmental delay and to other groups). A material library suitable for different autistic patients includes a scene library, a character library, a character action library, a character expression library and a character language library: the scene library includes classroom, library and canteen scenes; the character library includes characters such as teacher 1, teacher 2, classmate 1 and classmate 2; the character action library includes actions such as waving, inviting, raising a hand, walking and jumping; the character expression library includes expressions such as happiness, smiling and anger; and the character language library includes language content in terms of vocabulary, instructions, topics and the like. The scenario library takes the various materials in the material library as basic elements and designs scenarios around different interaction modes such as greeting, topic discussion, question disputes, question answering, conversation, and listening and speaking.
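The decomposition above lends itself to a straightforward data-driven composition step. The sketch below assumes illustrative library contents and a random selection policy, whereas the patent selects elements according to the capability to be analyzed and the user's own scenes and history.

```python
import random

# Illustrative fragments of the material library and scenario library
# described above; the concrete entries are assumptions for the sketch.
MATERIAL_LIBRARY = {
    "scenes": ["classroom", "library", "canteen"],
    "characters": ["teacher_1", "teacher_2", "classmate_1", "classmate_2"],
    "actions": ["wave", "invite", "raise_hand", "walk", "jump"],
    "expressions": ["happy", "smiling", "angry"],
}
SCENARIO_LIBRARY = {
    "social_skill": ["greeting", "topic_discussion", "conversation"],
    "visual_ability": ["pointing_following", "object_tracking"],
    "social_rule_understanding": ["question_answering", "dispute_discussion"],
}

def create_scene_and_task(capability: str, n_characters: int = 2) -> dict:
    """Step S230 sketch: compose a virtual reality scene and its
    interactive task from the two libraries for the given capability."""
    return {
        "scene": random.choice(MATERIAL_LIBRARY["scenes"]),
        "characters": random.sample(MATERIAL_LIBRARY["characters"], n_characters),
        "actions": MATERIAL_LIBRARY["actions"],
        "expressions": MATERIAL_LIBRARY["expressions"],
        "interaction_modes": SCENARIO_LIBRARY[capability],
    }

print(create_scene_and_task("social_skill"))
```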
Step S240, collecting interaction data from the user while the user executes the interaction task in the virtual reality scene, specifically including collecting voice, visual, behavioral and/or electroencephalogram data while the user executes the interaction task.
Step S250, identifying the various features that the user exhibits while executing the interactive task and that reflect the user's capability.
(1) When the user's capability is social skill, the user's features are identified as gaze status, responses to others, self-introduction content, body-language expression content and conversation with strangers.
In this embodiment, when the user's social skill needs to be analyzed and evaluated, a virtual reality scene themed "meeting new friends" is created. The interactive task is that a virtual child greets the user; the user needs to look at the child and respond. The features used to analyze the user's social skill are then determined as the gaze status, responses to the other person, self-introduction content, body-language expression content, conversation with strangers and the like. Positions such as the virtual child's eyes and mouth are taken as key areas, and the user's eye gaze on those key areas and attention to them (that is, the degree of attention) are monitored, together with the voice time information and voice text information during the user's conversation.
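As one concrete reading of the gaze monitoring, eye-tracker samples can be scored against the key areas. A minimal sketch, assuming rectangular key areas in screen coordinates:

```python
def gaze_dwell_ratio(
    gaze_points: list[tuple[float, float]],
    key_areas: list[tuple[float, float, float, float]],
) -> float:
    """Fraction of gaze samples falling inside any key area.

    `gaze_points` are (x, y) gaze coordinates from the eye tracker;
    `key_areas` are (x_min, y_min, x_max, y_max) rectangles around,
    e.g., the virtual child's eyes and mouth. The rectangle model of
    a key area is an assumption of this sketch.
    """
    if not gaze_points:
        return 0.0
    hits = sum(
        any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in key_areas)
        for x, y in gaze_points
    )
    return hits / len(gaze_points)
```

The resulting ratio is exactly the kind of normalized metric that the level-mapping sketch above can consume.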
(2) When the user's capability is visual ability, the user's features are identified as joint attention, pointing following, visual fixation, visual following and flexible following.
In this embodiment, when the user's visual ability needs to be analyzed and evaluated, a virtual reality scene themed "playing with a balloon together" is created. The interactive task is that a virtual child plays with a red balloon on a lawn; the user needs to fixate on and visually follow the virtual character and the object. The features used to analyze the user's visual ability are then determined as the joint attention, pointing following, visual fixation, visual following and flexible following. The virtual child's face and the red balloon are taken as key areas, and the user's eye gaze on and following of those key areas and attention to them (that is, the degree of attention) are monitored, together with whether the user responds with limb movements and speech when the child waves at him or her, and whether the user can change his or her current orientation to face the virtual child.
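Whether the user turns to face the virtual child can be checked from the head orientation data. A minimal sketch, assuming yaw angles in degrees and a hypothetical 20-degree tolerance:

```python
def is_facing(head_yaw_deg: float, target_bearing_deg: float,
              tolerance_deg: float = 20.0) -> bool:
    """True if the head yaw is within `tolerance_deg` of the bearing
    toward the target (e.g. the virtual child). The wrap-around
    difference keeps the angle gap in [-180, 180)."""
    diff = (head_yaw_deg - target_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg

# e.g. head at 350 degrees, virtual child at 10 degrees: only 20 degrees apart
print(is_facing(350.0, 10.0))  # -> True
```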
(3) When the user's capability is social rule understanding, the user's features are identified as understanding of belonging relations, conditional relations, causal relations, adversative relations and social rules.
In this embodiment, when the user's social rule understanding needs to be analyzed and evaluated, a virtual reality scene themed "football class" is created. The interactive task is that a virtual teacher asks the user a question about who should receive the prize in the ball game; the user needs to understand the question and respond. The feature used to analyze the user's social rule understanding is then whether the user can answer the question correctly (reflecting understanding of conditional relations), and the user's voice text information needs to be collected. Alternatively, a virtual reality scene themed "whose apple is bigger" is created, in which the interactive task is that the virtual teacher asks the user who owns the apple; the user understands the question and responds, the feature used to analyze the user's social rule understanding is again whether the user can answer the question correctly (reflecting understanding of causal relations), and the user's voice text information needs to be collected.
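The patent only requires checking whether the transcribed answer is correct; it does not name a language-understanding method. A minimal sketch, assuming simple keyword matching over the voice text information (the example question and keywords are hypothetical):

```python
def answer_is_correct(transcript: str, expected_keywords: list[str]) -> bool:
    """Crude correctness check on the transcribed answer: does it contain
    any expected keyword? Keyword matching stands in for the (unspecified)
    language understanding used in the patent."""
    text = transcript.lower()
    return any(keyword.lower() in text for keyword in expected_keywords)

# e.g. for the prize question, the winning team might be the expected answer
print(answer_is_correct("I think the red team should get the prize", ["red team"]))  # True
```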
Step S260, analyzing the levels of various characteristics of the user according to the interactive data of the user.
In summary, in the technical solution of this embodiment, simulated social scenes are constructed according to subdivided social skill items, the simulated scenes including a social skill evaluation scene, a visual ability evaluation scene and a social rule understanding evaluation scene, and rich, comprehensive multi-modal behavior monitoring, including positioning tracking, motion capture, orientation recognition, eye movement tracking, concentration detection and relaxation detection, is implemented, thereby achieving targeted, comprehensive and accurate evaluation of the user's capability.
As shown in fig. 3, in one embodiment of the present invention, there is provided a user capability analysis apparatus based on virtual reality, the apparatus including:
the scene creation module 310 creates a virtual reality scene and sets an interaction task in the virtual reality scene according to the capability of the user to be analyzed, and the performance of the interaction task is affected by the capability of the user.
In this embodiment, the capability of the user to be analyzed and evaluated is not limited; it may be, for example, social ability. In this embodiment, the virtual reality scene is created and the interactive task is formulated according to that capability, which ensures that, when executing the interactive task in the virtual reality scene, the user is well placed to exhibit his or her personal capability.
In this embodiment, the provided virtual reality scenes include, but are not limited to, a social skill evaluation scene, a visual ability evaluation scene and a social rule understanding evaluation scene, which simulate real social interaction scenes and each involve one or more social abilities. The scene can be presented in various ways, for example on a touch screen display, a tablet computer or virtual reality (VR) glasses; the purpose is to present a social scene to the user, monitor the user's performance in that scene and thereby evaluate the user's social ability. The user may sit or stand in front of the touch screen, watch the social scene on the screen and interact with the characters or pictures in it while data are acquired. Scene presentation can likewise be implemented with a tablet computer or VR glasses.
The data acquisition module 320 collects interaction data from the user while the user executes the interaction task in the virtual reality scene, specifically including collecting voice, visual, behavioral and/or electroencephalogram data while the user executes the interaction task.
This embodiment mainly involves behavioral interaction and voice interaction. In behavioral interaction, a person or object in the scene interacts with the user through actions; for example, a virtual child in the scene waves at the child user, and the child user can wave back. In voice interaction, characters in the scene communicate with the child user through speech; for example, the virtual child may initiate a conversation with the child user, and the conversation can then be maintained between the child user and the virtual child.
In this embodiment, collecting the user's voice, visual, behavioral and electroencephalogram data makes it possible to realize positioning tracking, motion capture, orientation recognition, eye movement tracking, concentration detection and relaxation detection. Positioning tracking is implemented by spatial positioning devices together with wearable devices: the spatial positioning devices are arranged around the room, the user wears a wristband and an ankle band in the test space, and the positioning devices track the user's position in real time, for example recognizing that the user is seated while another person converses with him or her. Motion capture is implemented by the wearable devices: in addition to the wristband and ankle band, a head-mounted positioner is provided, and the system reconstructs the user's limb state from the positions of the head, upper-limb and lower-limb devices, for example recognizing whether the user waves back when another person waves at him or her. Orientation recognition, implemented by the headset, monitors the orientation of the user's head and can evaluate whether the user turns toward the corresponding character when communicating with characters in different directions. Eye movement tracking, implemented by a head-mounted eye tracker, detects whether the user looks at the face of the person he or she is talking to and supports analysis of the user's visual ability. Concentration detection is implemented by a head-mounted EEG monitoring device: characteristic EEG waves are monitored to analyze the user's concentration while talking with others. Relaxation detection is likewise implemented by the head-mounted EEG device: characteristic EEG waves are monitored to analyze how relaxed the user is in a familiar environment. Behavior monitoring provides the user's behavioral data. Voice time information refers to the start and end times of an utterance by the user; voice text information refers to the specific text content of what the user says during voice interaction.
The feature identification module 330 identifies the various features that the user exhibits while executing the interactive task and that reflect the user's capability.
In this embodiment, because the user's abilities manifest in different ways when interactive tasks are executed in different scenes, accurate and comprehensive analysis and evaluation of those abilities requires identifying in advance which features, exhibited by the user while executing the interactive task in the current scene, reflect the ability being analyzed.
The feature analysis module 340 analyzes levels of various features of the user based on the user's interaction data.
In the technical solution of this embodiment, a virtual reality scene and an interactive task suited to letting the user exercise the capability that needs to be analyzed and evaluated are first created according to that capability; the features that the user can exhibit while executing the interactive task and that reflect the capability are identified; and the actual level of those features is then analyzed on the basis of the user's interaction data, so that the user's capability is analyzed and evaluated in a targeted, comprehensive and complete manner.
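The four modules compose naturally into a single analysis device. The sketch below wires placeholder collaborators together in the order of FIG. 3; the class and method names are assumptions, not the patented implementation.

```python
class VirtualRealityCapabilityAnalyzer:
    """Sketch of the four cooperating modules of FIG. 3 / FIG. 4.
    The injected collaborators and their method names are placeholders."""

    def __init__(self, scene_creator, data_collector,
                 feature_identifier, feature_analyzer):
        self.scene_creator = scene_creator            # scene creation module 310/430
        self.data_collector = data_collector          # data acquisition module 320/440
        self.feature_identifier = feature_identifier  # feature identification module 330/450
        self.feature_analyzer = feature_analyzer      # feature analysis module 340/460

    def run(self, capability: str, user) -> dict:
        scene, task = self.scene_creator.create(capability)
        data = self.data_collector.collect(user, scene, task)
        features = self.feature_identifier.identify(capability)
        return self.feature_analyzer.analyze(features, data)
```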
As shown in fig. 4, in one embodiment of the present invention, there is provided a user capability analysis apparatus based on virtual reality, the apparatus including:
the material setting module 410 sets elements in the material library according to the elements in the real scene where the user is located.
In this embodiment, a dedicated material library is prepared for the user so that the real scenes in which the user lives and the virtual reality scene share the same elements; this improves the continuity between the real scene and the virtual reality scene and helps the user adapt to the virtual reality scene as quickly as possible.
Specifically, the elements in the material library are set according to the elements common to the multiple real scenes where the user spends time.
In this embodiment, the user is generally most familiar with the elements that recur across his or her real scenes; setting the virtual-scene elements only from those common elements therefore ensures that the user adapts to the virtual reality scene quickly while the number of elements in the scene is kept under control.
The scenario setting module 420 sets the interaction modes in the scenario library according to the user's historical activity records.
In this embodiment, a dedicated scenario library is prepared for the user so that the interactive tasks in the virtual reality scene are associated with the user's historical activities, which helps the user adapt to the virtual reality scene as quickly as possible.
The scene creation module 430 selects a plurality of elements from the preset material library according to the capability of the user to be analyzed, creates a virtual reality scene with those elements, selects from the preset scenario library one or more interaction modes realized on the basis of those elements, and uses the selected interaction modes to set the interactive task in the virtual reality scene.
In this embodiment, based on the design of the material library and the scenario library, a virtual reality scene is decomposed into the elements that constitute it and the various interaction modes that can be performed in it. If the user is an autistic patient, a material library suitable for different autistic patients includes a scene library, a character library, a character action library, a character expression library and a character language library: the scene library includes classroom, library and canteen scenes; the character library includes characters such as teacher 1, teacher 2, classmate 1 and classmate 2; the character action library includes actions such as waving, inviting, raising a hand, walking and jumping; the character expression library includes expressions such as happiness, smiling and anger; and the character language library includes language content in terms of vocabulary, instructions, topics and the like. The scenario library takes the various materials in the material library as basic elements and designs social scenarios around different interaction modes such as greeting, topic discussion, question disputes, question answering, conversation, and listening and speaking.
The data collection module 440 collects interaction data from the user as the user performs the interaction task in the virtual reality scenario, specifically including collecting voice, visual, behavioral, and/or electroencephalogram data as the user performs the interaction task.
The feature identification module 450 identifies the various features that the user exhibits while executing the interactive task and that reflect the user's capability.
(1) When the user's capability is social skill, the user's features are identified as gaze status, responses to others, self-introduction content, body-language expression content and conversation with strangers.
In this embodiment, when the user's social skill needs to be analyzed and evaluated, a virtual reality scene themed "meeting new friends" is created. The interactive task is that a virtual child greets the user; the user needs to look at the child and respond. The features used to analyze the user's social skill are then determined as the gaze status, responses to the other person, self-introduction content, body-language expression content, conversation with strangers and the like. Positions such as the virtual child's eyes and mouth are taken as key areas, and the user's eye gaze on those key areas and attention to them (that is, the degree of attention) are monitored, together with the voice time information and voice text information during the user's conversation.
(2) When the user's capability is visual ability, the user's features are identified as joint attention, pointing following, visual fixation, visual following and flexible following.
In this embodiment, when the user's visual ability needs to be analyzed and evaluated, a virtual reality scene themed "playing with a balloon together" is created. The interactive task is that a virtual child plays with a red balloon on a lawn; the user needs to fixate on and visually follow the virtual character and the object. The features used to analyze the user's visual ability are then determined as the joint attention, pointing following, visual fixation, visual following and flexible following. The virtual child's face and the red balloon are taken as key areas, and the user's eye gaze on and following of those key areas and attention to them (that is, the degree of attention) are monitored, together with whether the user responds with limb movements and speech when the child waves at him or her, and whether the user can change his or her current orientation to face the virtual child.
(3) When the user's capability is social rule understanding, the user's features are identified as understanding of belonging relations, conditional relations, causal relations, adversative relations and social rules.
In this embodiment, when the user's social rule understanding needs to be analyzed and evaluated, a virtual reality scene themed "football class" is created. The interactive task is that a virtual teacher asks the user a question about who should receive the prize in the ball game; the user needs to understand the question and respond. The feature used to analyze the user's social rule understanding is then whether the user can answer the question correctly (reflecting understanding of conditional relations), and the user's voice text information needs to be collected. Alternatively, a virtual reality scene themed "whose apple is bigger" is created, in which the interactive task is that the virtual teacher asks the user who owns the apple; the user understands the question and responds, the feature used to analyze the user's social rule understanding is again whether the user can answer the question correctly (reflecting understanding of causal relations), and the user's voice text information needs to be collected.
The feature analysis module 460 analyzes the levels of various features of the user based on the user's interaction data.
In summary, in the technical solution of this embodiment, simulated social scenes are constructed according to subdivided social skill items, the simulated scenes including a social skill evaluation scene, a visual ability evaluation scene and a social rule understanding evaluation scene, and rich, comprehensive multi-modal behavior monitoring, including positioning tracking, motion capture, orientation recognition, eye movement tracking, concentration detection and relaxation detection, is implemented, thereby achieving targeted, comprehensive and accurate evaluation of the user's capability.
The invention also provides a computer readable storage medium. In one computer readable storage medium embodiment according to the invention, the computer readable storage medium may be configured to store a program for performing the virtual reality-based user capability analysis method of the above method embodiment, and the program may be loaded and executed by a processor to implement that method. For convenience of explanation, only the portions relevant to the embodiments of the invention are shown; for undisclosed technical details, refer to the method portions of the embodiments. The computer readable storage medium may be a storage device formed by various electronic devices; optionally, in the embodiments of the invention it is a non-transitory computer readable storage medium.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such a system is apparent from the description above. In addition, the invention is not directed to any particular programming language; it should be understood that the teachings of the invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. The disclosed method, however, should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in the processing means of a mobile terminal according to an embodiment of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (10)

1. A virtual reality-based user capability analysis method, comprising:
creating a virtual reality scene and setting an interaction task in the virtual reality scene according to the capability of a user to be analyzed, wherein the performance of the interaction task is influenced by the capability of the user;
when the user executes the interaction task in the virtual reality scene, collecting interaction data from the user;
identifying various features that the user exhibits when executing the interactive task and that reflect the capability of the user;
and analyzing the levels of the various features of the user according to the interaction data of the user.
2. The virtual reality-based user capability analysis method of claim 1, wherein the step of collecting interaction data from the user comprises:
and collecting voice, vision, behavior and/or brain electricity data when the user executes the interaction task.
3. The virtual reality-based user capability analysis method of claim 1, wherein the step of identifying the plurality of features that the user exhibits in performing the interactive task and that reflect the user's capability comprises:
when the user's capability is social skill, identifying the user's plurality of features as gaze status, responses to others, self-introduction content, body-language expression content, and conversation with strangers.
4. The virtual reality-based user capability analysis method of claim 1, wherein the step of identifying the plurality of features that the user exhibits in performing the interactive task and that reflect the user's capability comprises:
when the user's capability is visual ability, identifying the user's plurality of features as joint attention, pointing following, visual fixation, visual following, and flexible following.
5. The virtual reality-based user capability analysis method of claim 1, wherein the step of identifying the plurality of features that the user exhibits in performing the interactive task and that reflect the user's capability comprises:
when the user's capability is social rule understanding, identifying the user's plurality of features as understanding of belonging relations, conditional relations, causal relations, adversative relations, and social rules.
6. The virtual reality-based user capability analysis method of claim 1, wherein creating a virtual reality scene and setting an interactive task in the virtual reality scene according to the capability of the user to be analyzed comprises:
selecting a plurality of elements from a preset material library according to the capability of the user to be analyzed, creating a virtual reality scene with the plurality of elements, selecting from a preset scenario library one or more interaction modes realized on the basis of the plurality of elements, and setting the interactive task in the virtual reality scene using the one or more interaction modes.
7. The virtual reality-based user capability analysis method of claim 6, further comprising, before the step of selecting a plurality of elements from the preset material library according to the capability of the user to be analyzed and creating a virtual reality scene using the plurality of elements:
setting the elements in the material library according to the elements in the real scene where the user is located.
8. The virtual reality-based user capability analysis method of claim 7, wherein the step of setting elements in the material library according to elements in the real scene where the user is located further comprises:
setting the elements in the material library according to the elements common to the multiple real scenes where the user is located.
9. A virtual reality-based user capability analysis apparatus, comprising:
the scene creation module creates a virtual reality scene and sets an interaction task in the virtual reality scene according to the capability of a user to be analyzed, and the performance of the interaction task is influenced by the capability of the user;
the data acquisition module is used for acquiring interaction data from the user when the user executes the interaction task in the virtual reality scene;
the feature identification module is used for identifying various features that the user exhibits when executing the interactive task and that reflect the capability of the user;
and the characteristic analysis module is used for analyzing the levels of various characteristics of the user according to the interaction data of the user.
10. A computer readable storage medium in which program code is stored, characterized in that the program code is adapted to be loaded and executed by a processor to perform the virtual reality-based user capability analysis method of any one of claims 1 to 8.
CN202310507322.4A, filed 2023-05-05 (priority date 2023-05-05): User capability analysis method and device based on virtual reality and computer readable storage medium. Status: Pending.

Priority Applications (1)

Application number: CN202310507322.4A; priority date: 2023-05-05; filing date: 2023-05-05; title: User capability analysis method and device based on virtual reality and computer readable storage medium

Applications Claiming Priority (1)

Application number: CN202310507322.4A; priority date: 2023-05-05; filing date: 2023-05-05; title: User capability analysis method and device based on virtual reality and computer readable storage medium

Publications (1)

Publication number: CN116578185A; publication date: 2023-08-11

Family

Family ID: 87540650

Family Applications (1)

Application number: CN202310507322.4A; filing date: 2023-05-05; status: Pending; title: User capability analysis method and device based on virtual reality and computer readable storage medium

Country Status (1)

CN: CN116578185A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination