CN117032453A - Virtual reality interaction system for realizing mutual recognition function - Google Patents

Virtual reality interaction system for realizing mutual recognition function

Info

Publication number
CN117032453A
CN117032453A (application CN202310972516.1A)
Authority
CN
China
Prior art keywords
user
virtual environment
virtual
behavior
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310972516.1A
Other languages
Chinese (zh)
Inventor
康国浩 (Kang Guohao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenxiang Education Technology Shenzhen Co., Ltd.
Original Assignee
Shenxiang Education Technology Shenzhen Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenxiang Education Technology Shenzhen Co., Ltd.
Priority to CN202310972516.1A
Publication of CN117032453A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a virtual reality interaction system for realizing a mutual recognition function, comprising: a virtual environment production unit responsible for creating and updating the virtual environment; a body motion capture and rendering unit responsible for capturing the user's motion in the real world and converting it into motion in the virtual environment; a mutual recognition unit that recognizes the user's behavior in the virtual environment and enables interaction through deep learning; and a feedback unit that provides feedback based on the user's behavior in the virtual environment. The system can provide users with a more realistic interactive experience.

Description

Virtual reality interaction system for realizing mutual recognition function
Technical Field
The invention relates to the technical field of intelligent teaching, in particular to a virtual reality interaction system for realizing a mutual recognition function.
Background
With the rapid development of science and technology, virtual reality has gradually become part of daily life, and its applications in social networking, education, entertainment, and other fields are increasingly widespread. However, existing virtual reality systems focus mainly on simulating audiovisual sensations and lack realistic interactive and social experiences.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a virtual reality interaction system that realizes a mutual recognition function.
The invention is realized by the following technical scheme: a virtual reality interaction system for realizing a mutual recognition function, comprising the following components:
virtual environment production unit: responsible for creating and updating the virtual environment. The unit receives user input and other necessary information, such as scene selection, environment settings, and user role, and analyzes and interprets it to determine the user's needs and intent. The virtual environment, including the scene, environment layout, object placement, and interactive elements, is created according to those needs and intents using virtual reality technology and development tools, and is updated in real time as they change, for example by changing the scene, adding or deleting objects, or modifying interactive elements.
Body motion capture and rendering unit: responsible for capturing the user's motion in the real world and converting it into motion in the virtual environment. The unit captures the user's real-world motion data, including body movements, facial expressions, and body posture, through sensors and computer vision techniques; after processing, the data are converted into actions in the virtual environment so that the user can interact and be recognized realistically there.
Mutual recognition unit: recognizes the user's behavior in the virtual environment and enables interaction through deep learning. The unit uses a trained deep learning model to analyze the user's behavior data in the virtual environment, including body movements, facial expressions, and body gestures. Through this analysis, the user's behavior category, emotional state, speech, and the like can be recognized.
Feedback unit: provides feedback based on the user's behavior in the virtual environment. The unit collects the user's behavior and interaction data, including actions, interactions, and speech in the virtual environment, and generates feedback information by analyzing and processing these data through behavior recognition, emotion analysis, and speech recognition. The feedback information can be presented to the user as virtual sound effects, visual effects, haptic effects, and the like, improving the realism of the virtual environment and the user's satisfaction.
Preferably, the step of recognizing the user's behavior in the virtual environment comprises:
data collection: having a group of users perform various behaviors and interactions in the real world and capturing their actions using sensors and computer vision techniques;
data preprocessing: analyzing and processing the collected behavior and interaction data, including noise removal, standardization, and normalization;
feature extraction: extracting useful features from the preprocessed data;
model training: training a deep learning model with the extracted features so that it learns the patterns and rules of user behavior and interaction in the virtual environment;
model evaluation and optimization: evaluating and optimizing the trained model to ensure that it accurately recognizes the user's behavior and interactions;
model application: applying the trained model to the actual virtual environment so that the user can perform real interaction and recognition there.
Preferably, the virtual environment production unit creates and updates the virtual environment as follows:
receiving user input and other necessary information: necessary information entered by the user is received through a user interface or other components;
analyzing user input and other necessary information: the received input is analyzed and interpreted to determine the user's needs and intent;
creating the virtual environment: the virtual environment, including the scene, environment layout, object placement, and interactive elements, is created according to the user's needs and intents using virtual reality technology and development tools;
updating the virtual environment: the virtual environment is updated in real time according to the user's needs and intents, for example by changing the scene, adding or deleting objects, or modifying interactive elements.
Compared with the prior art, the virtual reality interaction system for realizing a mutual recognition function provided by the invention has the following advantages: by creating and updating the virtual environment through the virtual environment production unit, a more realistic and lifelike virtual environment experience can be provided; through the body motion capture and rendering unit, the user's real-world motion is converted into motion in the virtual environment, so that the user can interact and be recognized realistically there; through the mutual recognition unit, the user's behavior in the virtual environment can be recognized and interacted with, improving the interactivity of the virtual environment and the user experience; and through the feedback unit, feedback on the user's behavior and interaction in the virtual environment can be provided, so that users better understand their own behavior and interaction, improving the realism of the virtual environment and user satisfaction.
Drawings
FIG. 1 is a flow chart of the interactive steps of the interactive system of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operating procedures are given, but the scope of protection of the present invention is not limited to the following examples.
The invention is realized by the following technical scheme: a virtual reality interaction system for realizing a mutual recognition function, based on virtual reality technology, comprising a virtual environment production unit, a body motion capture and rendering unit, a mutual recognition unit, and a feedback unit. Through the cooperative work of these components, real interaction and recognition of the user in the virtual environment are achieved.
The virtual environment production unit is one of the important components of the system and is responsible for creating and updating the virtual environment. The unit receives user input and other necessary information, such as scene selection, environment settings, and user role, and analyzes and interprets it to determine the user's needs and intent. According to those needs and intents, the virtual environment production unit uses virtual reality technology and development tools to create a virtual environment comprising the scene, environment layout, object placement, interactive elements, and the like. It can also update the virtual environment in real time, for example by changing the scene, adding or deleting objects, or modifying interactive elements, to provide a more realistic and lifelike virtual environment experience.
The body motion capture and rendering unit is another important component of the system; it captures the user's motion in the real world and converts it into motion in the virtual environment. The unit captures the user's real-world motion data, including body movements, facial expressions, body posture, and the like, using sensors and computer vision techniques. After processing, the data are converted into actions in the virtual environment so that the user can interact and be recognized realistically there. The unit's output can interact with other elements of the virtual environment, such as virtual characters or manipulable virtual items.
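As a concrete illustration (not part of the claimed system), the following Python sketch shows one way such a unit could capture per-frame body pose from a camera, using the OpenCV and MediaPipe libraries; the commented `avatar.apply_pose` call is a hypothetical stand-in for the rendering side.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def capture_poses(camera_index=0):
    """Yield 33 (x, y, z) pose landmarks per frame from a webcam."""
    cap = cv2.VideoCapture(camera_index)
    with mp_pose.Pose(min_detection_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                yield [(lm.x, lm.y, lm.z)
                       for lm in result.pose_landmarks.landmark]
    cap.release()

# for joints in capture_poses():
#     avatar.apply_pose(joints)   # hypothetical virtual-environment call
```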
The mutual recognition unit is one of the core components of the system; it recognizes the user's behavior in the virtual environment and enables interaction through deep learning. The unit uses a trained deep learning model to analyze the user's behavior data in the virtual environment, including body movements, facial expressions, body gestures, and the like. Through this analysis, the user's behavior category, emotional state, speech, and the like can be recognized, realizing recognition of the user's behavior in the virtual environment. The unit's output can interact with other components, such as the virtual environment production unit and the feedback unit.
The feedback unit is another core component of the system; it provides feedback based on the user's behavior in the virtual environment. The unit collects the user's behavior and interaction data, including actions, interactions, speech, and the like. By analyzing and processing these data, the feedback unit generates feedback information, which can be presented to the user as virtual sound effects, visual effects, haptic effects, and the like. The presentation can be customized to the design of the system and the needs of the user, for example by playing virtual audio, displaying related images or video, or delivering feedback through a haptic device. With this feedback, users can better understand their behavior and interaction in the virtual environment, improving its realism and user satisfaction.
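To make the feedback flow concrete, here is a minimal Python sketch of how recognized behaviors might be mapped to sound, visual, and haptic effects; all rule names and effect identifiers are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    sound: Optional[str] = None    # audio clip to play
    visual: Optional[str] = None   # visual effect to display
    haptic: float = 0.0            # vibration intensity in [0, 1]

# Hypothetical mapping from recognized behavior to feedback effects
FEEDBACK_RULES = {
    "wave":  Feedback(sound="chime.wav", visual="sparkle"),
    "grab":  Feedback(haptic=0.6),
    "smile": Feedback(visual="glow"),
}

def generate_feedback(behavior: str) -> Feedback:
    """Look up the feedback effects for a recognized behavior."""
    return FEEDBACK_RULES.get(behavior, Feedback())

print(generate_feedback("wave"))   # Feedback(sound='chime.wav', ...)
```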
In addition to the components described above, the interactive system of the present invention may provide auxiliary functions and features such as user interface design, system settings, and fault handling. The user interface can be customized to the needs and preferences of the user to provide a friendlier and easier-to-use interface. System settings expose the system's parameters and options to suit different users and scenarios. Fault handling diagnoses and handles problems and errors occurring in the system to ensure its stability and reliability.
In one embodiment, the step of recognizing the user's behavior in the virtual environment comprises:
data collection: having a group of users perform various behaviors and interactions in the real world and capturing their actions using sensors and computer vision techniques;
data preprocessing: analyzing and processing the collected behavior and interaction data, including noise removal and normalization, to facilitate subsequent learning and recognition;
feature extraction: extracting useful features, such as body movements, facial expressions, and body gestures, from the preprocessed data; this step may use common computer vision techniques and machine learning algorithms, such as optical flow and deep learning;
model training: training a deep learning model, such as a convolutional neural network or a recurrent neural network, with the extracted features so that it learns the patterns and rules of user behavior and interaction in the virtual environment;
model evaluation and optimization: evaluating and optimizing the trained model, using methods such as cross-validation, confusion matrices, and hyperparameter adjustment, to ensure that it accurately recognizes the user's behavior and interactions;
model application: applying the trained model to the actual virtual environment so that the user can interact and be recognized there, by processing the user's behavior and interaction data in real time and feeding the recognition results back to the user and other components.
Specifically, data preprocessing: this step preprocesses the collected user behavior and interaction data, for example removing noise and normalizing. Noise removal means removing disturbances and noise in the data, such as background clutter in an image or noise in a sound recording. Standardization means transforming the data to a distribution with mean 0 and standard deviation 1, so that differences in scale do not distort model training. Normalization means mapping the data values into a specified range, such as [0,1] or [-1,1], so that excessively large or small values do not affect model training.
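A minimal NumPy sketch of the two transformations described above (standardization to mean 0 and standard deviation 1, and normalization into a fixed range); the small epsilon that guards against division by zero is our addition.

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Transform each feature to mean 0 and standard deviation 1."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def normalize(x: np.ndarray, lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    """Map each feature linearly into the range [lo, hi]."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin + 1e-8)

motion = np.random.randn(100, 6) * 50 + 10   # fake motion-capture data
print(standardize(motion).std(axis=0))       # ~1.0 per feature
print(normalize(motion, -1.0, 1.0).min())    # ~-1.0
```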
Feature extraction: this step extracts useful features, such as body movements, facial expressions, and body gestures, from the preprocessed data. Optical flow is a computer vision technique that describes the motion of pixels between frames of an image sequence; in feature extraction it can be used to describe the user's actions and movements in the virtual environment. Deep learning is a family of machine learning algorithms that learn features and patterns in data with multi-layer neural networks; in feature extraction, a deep learning model can learn the features and patterns of user behavior and interaction in the virtual environment.
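For instance, dense optical flow between consecutive frames can be computed with OpenCV's Farneback implementation; this is one common choice, not necessarily the method the patent envisions.

```python
import cv2
import numpy as np

def frame_flow(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Return an H x W x 2 array of per-pixel displacement vectors."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```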
Model training: this step uses the extracted features to train a deep learning model, such as a convolutional neural network or a recurrent neural network. Through training, the model learns the patterns and rules of user behavior and interaction in the virtual environment.
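As one hedged example of such a model, the PyTorch sketch below classifies behaviors from sequences of per-frame pose features; the feature size (33 landmarks times 3 coordinates) and the class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BehaviorClassifier(nn.Module):
    """Recurrent classifier over sequences of per-frame pose features."""
    def __init__(self, feat_dim: int = 99, hidden: int = 128,
                 n_classes: int = 10):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.rnn(x)      # x: (batch, frames, feat_dim)
        return self.head(h[-1])      # logits: (batch, n_classes)

model = BehaviorClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch_x: torch.Tensor, batch_y: torch.Tensor) -> float:
    """One gradient step on a mini-batch of labeled sequences."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```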
Model evaluation and optimization: this step evaluates and optimizes the trained model to ensure that it accurately recognizes the user's behavior and interactions. Evaluation and optimization methods include cross-validation, confusion matrices, and hyperparameter adjustment. Cross-validation is a common machine learning technique for estimating a model's performance and accuracy: the data are split into folds, the model is trained on some folds, and it is then tested on the remaining folds. A confusion matrix is a tool for evaluating classification performance: it compares the model's predictions with the true labels and tallies the accuracy and error rate of each class. Hyperparameter adjustment tunes the model's hyperparameters, guided by its measured performance and accuracy, to optimize both.
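The three evaluation techniques named above can be expressed with scikit-learn as follows; the SVM classifier echoes the support-vector-machine option mentioned later in the description, and the synthetic feature and label arrays stand in for real extracted data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

# Assumed inputs: X holds extracted feature vectors, y behavior labels
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 20)), rng.integers(0, 3, size=200)

# Cross-validation: 5-fold accuracy estimate
print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())

# Hyperparameter adjustment via grid search
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10],
                              "gamma": ["scale", 0.01]}, cv=5)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)

# Confusion matrix: per-class prediction counts of the tuned model
print(confusion_matrix(y, search.best_estimator_.predict(X)))
```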
Model application: this step applies the trained model to the actual virtual environment so that the user can interact and be recognized there. In application, the model processes the user's behavior and interaction data in real time and feeds the recognition results back to the user and other components.
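Pulling the pieces together, a real-time application loop might look like the sketch below, which reuses the `BehaviorClassifier` from the training example; the `capture` stream and the `feedback_unit.present` call are hypothetical component interfaces.

```python
import torch

def interaction_loop(capture, model, feedback_unit, window: int = 30):
    """Buffer per-frame feature vectors, classify the most recent
    window of frames, and forward the result to the feedback unit."""
    buffer = []
    model.eval()
    with torch.no_grad():
        for frame_features in capture:        # stream of per-frame vectors
            buffer.append(frame_features)
            if len(buffer) >= window:
                x = torch.tensor([buffer[-window:]], dtype=torch.float32)
                behavior = model(x).argmax(dim=1).item()
                feedback_unit.present(behavior)   # hypothetical API
```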
In one embodiment, a specific implementation of some of the units is disclosed:
Mutual recognition system
Training the mutual recognition system requires a large amount of real-world behavior and interaction data. The mutual recognition system can be trained using the following steps:
Data preprocessing: the collected data are preprocessed, including noise removal and normalization, so that the system can better learn and recognize behaviors and interactions.
Feature extraction: useful features, such as body movements, facial expressions, and body gestures, are extracted from the preprocessed data. This process may use common computer vision techniques and machine learning algorithms, such as optical flow and deep learning.
Model training: the extracted features are used to train the mutual recognition system, using common machine learning and deep learning algorithms such as support vector machines and convolutional neural networks. Through training, the mutual recognition system learns to recognize the user's behavior and interaction in the virtual environment.
Model evaluation and optimization: the trained model is evaluated and optimized to ensure that it accurately recognizes the user's behavior and interactions. Evaluation and optimization methods include cross-validation, confusion matrices, and hyperparameter adjustment.
Developing the virtual environment generator
Developing the virtual environment generator requires computer graphics and virtual reality techniques to create and update the virtual environment. The generator may be developed using the following steps:
Environment design: the virtual environment, including scenes, objects, and characters, is designed according to user input and other information. This process may use common computer graphics tools and scene editors, such as Blender and Unity.
Environment implementation: a computer program implements the virtual environment, including creating scenes, placing objects, and setting lights. This process may use common virtual reality techniques and libraries, such as OpenVR and Unity.
Environment updating: the virtual environment is updated according to the user's needs and feedback, including changing scenes, adding new objects, and modifying characters; a simplified sketch of these operations follows below.
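As a simplified, engine-agnostic illustration of these three steps, the Python sketch below models a scene as data and applies create and update operations; a production system would instead drive an engine such as Unity through its own APIs.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y, z) world coordinates

@dataclass
class VirtualScene:
    name: str
    objects: list = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        """Place a new object in the scene."""
        self.objects.append(obj)

    def remove(self, name: str) -> None:
        """Delete all objects with the given name."""
        self.objects = [o for o in self.objects if o.name != name]

# Design and implement a park scene, then update it on user request
park = VirtualScene("park")
park.add(SceneObject("tree", (1.0, 0.0, 2.0)))
park.add(SceneObject("flower", (0.5, 0.0, 1.0)))
park.remove("flower")                    # user asks to delete the flowers
print([o.name for o in park.objects])    # ['tree']
```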
Integrating the system components
The system integration combines body motion capture and rendering, the mutual recognition system, and the social module into a complete system; a concrete integration flow is illustrated by step G of the embodiment below.
In one embodiment, the virtual environment production unit creates and updates the virtual environment as follows:
receiving user input and other necessary information: necessary information entered by the user is received through a user interface or other components;
analyzing user input and other necessary information: the received input is analyzed and interpreted to determine the user's needs and intent;
creating the virtual environment: the virtual environment, including the scene, environment layout, object placement, and interactive elements, is created according to the user's needs and intents using virtual reality technology and development tools;
updating the virtual environment: the virtual environment is updated in real time according to the user's needs and intents, for example by changing the scene, adding or deleting objects, or modifying interactive elements.
In a specific scene, the steps of creating and updating the virtual environment include:
A: the user enters scene selection and environment settings through the user interface, for example selecting a park scene and setting environmental elements such as trees, flowers, and grass.
B: the virtual environment generator receives the user input and other necessary information, and analyzes and interprets it to determine the user's needs and intent.
C: according to the user's needs and intent, the virtual environment generator uses virtual reality technology and development tools to create the virtual environment, including the park scene, environmental elements such as trees, flowers, and grass, and possibly other objects and interactive elements.
D: the user interacts in the virtual environment, for example moving or rotating the viewpoint or interacting with environmental elements and other users.
E: based on the user's interactions and intent, the virtual environment generator updates the virtual environment in real time, for example changing the scene, adding or deleting objects, or modifying interactive elements.
F: the virtual environment generator tests and optimizes the stability and performance of the virtual environment, for example testing the fluency of viewpoint movement and interaction and optimizing graphics rendering.
G: the virtual environment generator integrates the virtual environment with the other components, for example sending the user's motion information in the virtual environment to the body motion capture and rendering unit, the mutual recognition system, the social module, and the feedback unit.
This embodiment demonstrates how the virtual environment generator creates and updates the virtual environment according to the needs and intent of the user and integrates with other components to achieve the real interaction and recognition of the user in the virtual environment.
In summary, by combining virtual environment production, body motion capture and rendering, mutual recognition, and feedback techniques, the system provides users with a more realistic, interactive, and personalized virtual reality experience. Implementing the system requires the combined application of virtual reality, computer vision, deep learning, and other technologies, and must also take the user's needs and preferences into account to deliver a better and more satisfying service.
The online teaching system provided by the invention has the following advantages and innovation points:
the virtual environment production unit creates and updates the virtual environment, providing a more realistic and lifelike virtual environment experience;
the body motion capture and rendering unit converts the user's real-world motion into motion in the virtual environment, so that the user can interact and be recognized realistically there;
the mutual recognition unit recognizes and interacts with the user's behavior in the virtual environment, improving its interactivity and the user experience;
the feedback unit provides feedback on the user's behavior and interaction in the virtual environment, so that users better understand their own behavior and interaction, improving the realism of the virtual environment and user satisfaction;
high customizability: the system can be tailored to the needs and preferences of different users, including the appearance, functions, and interaction modes of the virtual environment;
cross-platform support: the system supports a variety of operating systems and devices, including PCs, mobile devices, and VR head-mounted displays, so that more users can enjoy virtual reality interaction;
security and privacy protection: the system has strict security measures and privacy protection mechanisms to ensure that users' personal information and privacy are not violated;
multilingual support: the system supports multiple languages to meet the needs of users in different regions and cultures;
strong extensibility: the system can add new functions and components at any time to accommodate changing technology and market demands.
The present invention is not limited to the above embodiments; any person skilled in the art may, based on the technical solution and inventive concept of the present invention, make equivalent replacements or changes that fall within the protection scope of the present invention.
It should be noted that in this document relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.

Claims (8)

1. A virtual reality interaction system for realizing a mutual recognition function, comprising:
a virtual environment production unit responsible for creating and updating a virtual environment;
a body motion capture and rendering unit responsible for capturing the user's motion in the real world and converting it into motion in the virtual environment;
a mutual recognition unit that recognizes the user's behavior in the virtual environment and enables interaction through deep learning; and
a feedback unit that provides feedback based on the user's behavior in the virtual environment.
2. The virtual reality interaction system of claim 1, wherein the step of recognizing the user's behavior in the virtual environment comprises:
data collection: having a group of users perform various behaviors and interactions in the real world and capturing their actions using sensors and computer vision techniques;
data preprocessing: analyzing and processing the collected behavior and interaction data, including noise removal, standardization, and normalization;
feature extraction: extracting useful features from the preprocessed data;
model training: training a deep learning model with the extracted features so that it learns the patterns and rules of user behavior and interaction in the virtual environment;
model evaluation and optimization: evaluating and optimizing the trained model to ensure that it accurately recognizes the user's behavior and interactions;
model application: applying the trained model to the actual virtual environment so that the user can perform real interaction and recognition there.
3. The virtual reality interaction system of claim 1, wherein the useful features include body movements, facial expressions, and body gestures; and/or
the evaluation and optimization methods include cross-validation, confusion matrices, and hyperparameter adjustment.
4. The virtual reality interaction system of claim 3, wherein the information fed back by the feedback unit comprises virtual sound effects, visual effects, and haptic effects.
5. The virtual reality interaction system of claim 4, wherein the scene of the virtual environment comprises a city, a forest, space, a classroom, the sea, or a desert.
6. The virtual reality interaction system of claim 5, wherein the virtual environment production unit creates and updates the virtual environment by:
receiving user input and other necessary information: necessary information entered by the user is received through a user interface or other components;
analyzing user input and other necessary information: the received input is analyzed and interpreted to determine the user's needs and intent;
creating the virtual environment: the virtual environment, including the scene, environment layout, object placement, and interactive elements, is created according to the user's needs and intents using virtual reality technology and development tools;
updating the virtual environment: the virtual environment is updated in real time according to the user's needs and intents.
7. The virtual reality interaction system of claim 6, wherein the necessary information includes scene selection, environment settings, and user roles.
8. The virtual reality interaction system of claim 7, wherein the feedback unit provides feedback based on the user's behavior in the virtual environment by:
data collection: collecting the user's behavior and interaction data in the virtual environment, including actions, interactions, and speech;
data processing: analyzing and processing the collected data, including behavior recognition, emotion analysis, and speech recognition;
feedback generation: generating feedback information according to the analysis results.
CN202310972516.1A 2023-08-03 2023-08-03 Virtual reality interaction system for realizing mutual recognition function Pending CN117032453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310972516.1A CN117032453A (en) 2023-08-03 2023-08-03 Virtual reality interaction system for realizing mutual recognition function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310972516.1A CN117032453A (en) 2023-08-03 2023-08-03 Virtual reality interaction system for realizing mutual recognition function

Publications (1)

Publication Number Publication Date
CN117032453A true CN117032453A (en) 2023-11-10

Family

ID=88621991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310972516.1A Pending CN117032453A (en) 2023-08-03 2023-08-03 Virtual reality interaction system for realizing mutual recognition function

Country Status (1)

Country Link
CN (1) CN117032453A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117762250A (en) * 2023-12-04 2024-03-26 世优(北京)科技有限公司 Virtual reality action recognition method and system based on interaction equipment
CN117369649A (en) * 2023-12-05 2024-01-09 山东大学 Virtual reality interaction system and method based on proprioception
CN117369649B (en) * 2023-12-05 2024-03-26 山东大学 Virtual reality interaction system and method based on proprioception
CN118069754A (en) * 2024-04-18 2024-05-24 江西耀康智能科技有限公司 Training table and virtual reality environment data synchronization method

Similar Documents

Publication Publication Date Title
WO2022048403A1 (en) Virtual role-based multimodal interaction method, apparatus and system, storage medium, and terminal
US12039454B2 (en) Microexpression-based image recognition method and apparatus, and related device
CN117032453A (en) Virtual reality interaction system for realizing mutual recognition function
CN103218842B (en) A kind of voice synchronous drives the method for the three-dimensional face shape of the mouth as one speaks and facial pose animation
CN107515674A (en) It is a kind of that implementation method is interacted based on virtual reality more with the mining processes of augmented reality
CN109993131B (en) Design intention distinguishing system and method based on multi-mode signal fusion
CN110956142A (en) Intelligent interactive training system
CN117251057A (en) AIGC-based method and system for constructing AI number wisdom
CN117197878A (en) Character facial expression capturing method and system based on machine learning
CN113506377A (en) Teaching training method based on virtual roaming technology
CN116957866A (en) Individualized teaching device of digital man teacher
CN117333646B (en) Roaming method of AR map linkage running machine
CN117932713A (en) Cloud native CAD software gesture interaction geometric modeling method, system, device and equipment
CN117292031A (en) Training method and device for 3D virtual digital lip animation generation model
Putra et al. Designing translation tool: Between sign language to spoken text on kinect time series data using dynamic time warping
CN107644686A (en) Medical data acquisition system and method based on virtual reality
CN115167674A (en) Intelligent interaction method based on digital human multi-modal interaction information standard
CN114630190A (en) Joint posture parameter determining method, model training method and device
CN113807280A (en) Kinect-based virtual ship cabin system and method
Jia et al. Facial expression synthesis based on motion patterns learned from face database
Clay et al. Towards an architecture model for emotion recognition in interactive systems: application to a ballet dance show
CN117857892B (en) Data processing method, device, electronic equipment, computer program product and computer readable storage medium based on artificial intelligence
Hu Visual Presentation Algorithm of Dance Arrangement and CAD Based on Big Data Analysis
Wang CAD Modeling Process in Animation Design Using Data Mining Methods
Hrúz et al. Multi-modal dialogue system with sign language capabilities

Legal Events

Date Code Title Description
PB01 Publication