CN113129413A - Virtual image feedback action system and method based on three-dimensional engine - Google Patents

Virtual image feedback action system and method based on three-dimensional engine

Info

Publication number
CN113129413A
CN113129413A
Authority
CN
China
Prior art keywords
dimensional image
module
image module
modeling
data
Prior art date
Legal status
Granted
Application number
CN202110447823.9A
Other languages
Chinese (zh)
Other versions
CN113129413B (en)
Inventor
Yang Shucai (杨树才)
Current Assignee
Shanghai Ea Intelligent Technology Co ltd
Original Assignee
Shanghai Ea Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Ea Intelligent Technology Co ltd
Priority to CN202110447823.9A
Publication of CN113129413A
Application granted
Publication of CN113129413B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual image feedback action system and method based on a three-dimensional engine, relating to the technical field of image feedback. It aims to solve the problem that existing three-dimensional imaging technology achieves motion capture through close-range wearable devices but cannot feed the captured motion back onto a real-time three-dimensional model directly, requiring post-production instead. The three-dimensional image acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the main-view, rear-view, left-view and right-view three-dimensional image modules share the same structure. The output end of the main-view three-dimensional image module is connected to the body modeling operation branch, the output end of the rear-view three-dimensional image module is connected to the lower-limb modeling operation branch, and the output ends of the left-view and right-view three-dimensional image modules are connected to the upper-limb modeling operation branch.

Description

Virtual image feedback action system and method based on three-dimensional engine
Technical Field
The invention relates to the technical field of image feedback, and in particular to a virtual image feedback action system and method based on a three-dimensional engine.
Background
The three-dimensional image technology is the interaction between reality and virtual realized on the basis of a dynamic model.
However, the existing three-dimensional imaging technology achieves motion capture through close-range wearable devices but cannot feed the result back onto a real-time three-dimensional model directly; the feedback must instead be produced through post-processing. The existing requirements are therefore not met, and a feedback action system and method based on a three-dimensional engine virtual image are proposed to address them.
Disclosure of Invention
The invention aims to provide a feedback action system and method based on a three-dimensional engine virtual image, so as to solve the problem raised in the background: existing three-dimensional imaging technology achieves motion capture through close-range wearable devices but cannot feed the result back onto a real-time three-dimensional model directly, requiring post-processing instead.
In order to achieve the above purpose, the invention provides the following technical scheme: a feedback action system based on a three-dimensional engine virtual image comprises a three-dimensional image acquisition frame. The acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the main-view, rear-view, left-view and right-view three-dimensional image modules share the same structure. The output end of the main-view three-dimensional image module is connected to the body modeling operation branch, the output end of the rear-view three-dimensional image module is connected to the lower-limb modeling operation branch, and the output ends of the left-view and right-view three-dimensional image modules are connected to the upper-limb modeling operation branch.
Preferably, the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output end of the normal-state data module.
Preferably, the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output ends of the model database and the posture database.
Preferably, the main-view three-dimensional image module comprises a dynamic capture grating, a main-shaft long-focus probe and an auxiliary-shaft micro-focus probe, and the output ends of the dynamic capture grating, the main-shaft long-focus probe and the auxiliary-shaft micro-focus probe are connected with the input ends of the animation synthesis module and the scene synthesis unit.
Preferably, the input end of the animation synthesis module is connected with the output end of the light and shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of the animation synthesis module and the scene synthesis unit are connected with the output end of the basic origin coordinate.
Preferably, the output ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the input end of the compressed data channel, the output end of the compressed data channel is connected with the input end of the master model joint control terminal, and the input end of the master model joint control terminal is connected with the output end of the multi-path decompression module.
Preferably, the master model joint control terminal includes an animation synthesis module, an input end of the animation synthesis module is connected with output ends of the dynamic synchronization unit, the motion filtering unit and the frame number adjusting module, and the frame number adjusting module includes a linear optimization unit and a distortion correction unit.
A method for feeding back actions based on a three-dimensional engine virtual image comprises the following steps:
Step one: the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module; the two hands correspond respectively to the left-view and right-view three-dimensional image modules on either side, and the rear-view three-dimensional image module is positioned behind and below the user;
Step two: after the equipment starts, the dynamic capture grating and the main-shaft long-focus probe in the three-dimensional image module are put into operation; at this moment the user needs to remain as still as possible while data are collected in real time through the dynamic capture grating and the main-shaft long-focus probe;
Step three: the collected data are mainly divided into the upper limbs, the lower limbs and the body; after the user data are collected, virtual image modeling is carried out through computer software, with each group of data assigned an independent computer for modeling;
Step four: during modeling, the system extracts the existing data with the highest matching degree from the model database and the posture database according to the user's height and posture data for direct application, and then fine-tunes the database model through a normal-state logic algorithm to ensure the coherence of the overall model;
Step five: after the branch computers complete static modeling, a three-axis environment coordinate system with the horizontal gravity sensing plate as the basic origin is established, and the system automatically subtracts modeling data other than the user and non-specific objects during environment modeling;
Step six: the data on the three branch-end computers are transmitted to the master model joint control terminal through a dedicated compression transmission channel, and the master model joint control terminal joins the three groups of modeling data together to form a complete three-dimensional model of the user;
Step seven: after the main-end computer completes the model assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe cooperates with the dynamic capture grating and the main-shaft long-focus probe to capture detailed changes on the user's limbs, while the horizontal gravity sensing plate under the soles also collects the user's center-of-gravity changes under different actions;
Step eight: data throughout the whole process are continuously optimized and uploaded to the main-end computer, where the virtual modeling image feeds back and displays the same actions.
Compared with the prior art, the invention has the beneficial effects that:
1. Real-time modeling data are acquired by the dynamic capture grating and the main-shaft long-focus probe, the acquired data mainly comprising the upper limbs, the lower limbs and the body. After the user data are acquired, virtual image modeling is carried out through computer software, with each group of data assigned an independent computer for modeling. During modeling, the system extracts the existing data with the highest matching degree from the model database and the posture database according to the user's height and posture data for direct application, and then fine-tunes the database model through a normal-state logic algorithm to ensure the coherence of the overall model;
2. the normal-state data module of the invention stores a large amount of detailed data based on actual body shapes, so the system can match similar posture data to the measured result for direct use; the system then directly extracts ready-made model data from the model database and the posture database according to the matched data, shortening the time required for system modeling;
3. the auxiliary-shaft micro-focus probe cooperates with the dynamic capture grating and the main-shaft long-focus probe to capture detailed changes on the user's limbs, while the horizontal gravity sensing plate under the soles collects the user's center-of-gravity changes under different actions. Data throughout the whole process are continuously optimized and uploaded to the main-end computer, where the virtual modeling image feeds back and displays the same actions.
Drawings
FIG. 1 is an overall control flow diagram of the present invention;
FIG. 2 is a flowchart of an image capture process according to the present invention;
FIG. 3 is a block diagram of the present invention;
FIG. 4 is a flow chart of the compression transmission of the present invention;
FIG. 5 is a terminal feedback flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1-5, an embodiment of the present invention is shown: a virtual image feedback action system based on a three-dimensional engine comprises a three-dimensional image acquisition frame. The acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the four image modules share the same structure. The output end of the main-view three-dimensional image module is connected to the body modeling operation branch, the main-view module mainly observing the dynamics of the user's head and upper body. The output end of the rear-view three-dimensional image module is connected to the lower-limb modeling operation branch, the rear-view module observing the dynamics of the waist and legs. The output ends of the left-view and right-view three-dimensional image modules are connected to the upper-limb modeling operation branch, these modules observing the dynamics of the user's hands.
Furthermore, the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output end of the normal-state data module. The normal-state data module stores a large amount of detailed data based on actual body shapes, so the system can match similar posture data to the measured result for direct use.
Furthermore, the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output ends of the model database and the posture database. The system can directly extract ready-made model data from these databases according to the matched data, which shortens the time required for system modeling.
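The extraction of the best-matching record from the model and posture databases can be sketched as a nearest-neighbour lookup over measured body data. The feature set (height, shoulder width, leg length) and the database contents below are hypothetical placeholders, since the patent does not define the database schema:

```python
import math

# Hypothetical posture records: (height_cm, shoulder_cm, leg_cm) -> model id.
# A real posture database would hold many more entries and features.
POSTURE_DB = [
    ((160.0, 38.0, 74.0), "model_S"),
    ((175.0, 42.0, 82.0), "model_M"),
    ((188.0, 46.0, 90.0), "model_L"),
]

def best_match(measured):
    """Return the stored model whose posture data has the highest
    matching degree (smallest Euclidean distance) to the measured
    user data."""
    _, model = min(POSTURE_DB, key=lambda rec: math.dist(rec[0], measured))
    return model

print(best_match((176.0, 41.0, 83.0)))  # model_M
```

The fine-tuning by the "normal-state logic algorithm" would then adjust the selected model toward the measured values rather than using it verbatim.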
Furthermore, the main-view three-dimensional image module comprises a dynamic capture grating, a main-shaft long-focus probe and an auxiliary-shaft micro-focus probe; the output ends of all three are connected with the input ends of the animation synthesis module and the scene synthesis unit, and the auxiliary-shaft micro-focus probe can capture subtle movements.
Furthermore, the input end of the animation synthesis module is connected with the output end of the light and shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of the animation synthesis module and the scene synthesis unit are connected with the output end of the basic origin coordinates. After the branch computers complete static modeling, a three-axis environment coordinate system with the horizontal gravity sensing plate as the basic origin is established, and the system automatically subtracts modeling data other than the user and non-specific objects during environment modeling.
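The environment-subtraction step can be illustrated as geometric filtering around the basic origin: points far from the gravity-plate origin are treated as environment and dropped. This is only a sketch under the assumption that modeling data arrive as a 3D point cloud; the cylindrical bounds below are arbitrary illustrative values, not from the patent:

```python
def subtract_environment(points, radius=1.5, height=2.2):
    """Keep only points inside a cylinder centred on the basic origin
    (the horizontal gravity sensing plate, z = 0 at plate level);
    everything else is treated as environment and subtracted."""
    kept = []
    for x, y, z in points:
        if 0 <= z <= height and (x * x + y * y) <= radius * radius:
            kept.append((x, y, z))
    return kept

cloud = [(0.1, 0.2, 1.0),   # user's torso: kept
         (3.0, 0.0, 1.0),   # distant wall: subtracted
         (0.0, 0.0, -0.5)]  # below the plate: subtracted
print(subtract_environment(cloud))  # [(0.1, 0.2, 1.0)]
```

A production system would also need the "non-specific objects" exemption the patent mentions, e.g. a whitelist of tracked props inside the cylinder.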
Further, the output ends of the upper limb modeling operation branch end, the lower limb modeling operation branch end and the body modeling operation branch end are connected with the input end of the compressed data channel, the output end of the compressed data channel is connected with the input end of the master model joint control terminal, and the input end of the master model joint control terminal is connected with the output end of the multi-path decompression module.
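A minimal sketch of the compression transmission channel and the multi-path decompression module, using ordinary zlib round-tripping as a stand-in for the patent's unspecified codec; the branch names and function names are illustrative:

```python
import zlib

def compress_branch(data: bytes) -> bytes:
    """Branch-end side of the dedicated compression transmission channel."""
    return zlib.compress(data, level=6)

def decompress_branches(channels):
    """Multi-path decompression module: restore each branch stream before
    the master model joint control terminal joins them."""
    return {name: zlib.decompress(blob) for name, blob in channels.items()}

branches = {
    "body": b"body-modeling-data" * 100,
    "lower_limbs": b"lower-limb-modeling-data" * 100,
    "upper_limbs": b"upper-limb-modeling-data" * 100,
}
channel = {name: compress_branch(data) for name, data in branches.items()}
restored = decompress_branches(channel)
assert restored == branches  # lossless round trip across all three paths
```

The point of the design is that the three branch computers compress independently, so the master terminal can decompress the paths in parallel before joining the models.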
Furthermore, the master model joint control terminal comprises an animation synthesis module, the input end of which is connected with the output ends of the dynamic synchronization unit, the action filtering unit and the frame number adjusting module. The frame number adjusting module comprises a linear optimization unit and a distortion correction unit, which optimize the synthesized model and the transmitted data to ensure the smoothness of the virtual action feedback.
A method for feeding back actions based on a three-dimensional engine virtual image comprises the following steps:
Step one: the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module; the two hands correspond respectively to the left-view and right-view three-dimensional image modules on either side, and the rear-view three-dimensional image module is positioned behind and below the user;
Step two: after the equipment starts, the dynamic capture grating and the main-shaft long-focus probe in the three-dimensional image module are put into operation; at this moment the user needs to remain as still as possible while data are collected in real time through the dynamic capture grating and the main-shaft long-focus probe;
Step three: the collected data are mainly divided into the upper limbs, the lower limbs and the body; after the user data are collected, virtual image modeling is carried out through computer software, with each group of data assigned an independent computer for modeling;
Step four: during modeling, the system extracts the existing data with the highest matching degree from the model database and the posture database according to the user's height and posture data for direct application, and then fine-tunes the database model through a normal-state logic algorithm to ensure the coherence of the overall model;
Step five: after the branch computers complete static modeling, a three-axis environment coordinate system with the horizontal gravity sensing plate as the basic origin is established, and the system automatically subtracts modeling data other than the user and non-specific objects during environment modeling;
Step six: the data on the three branch-end computers are transmitted to the master model joint control terminal through a dedicated compression transmission channel, and the master model joint control terminal joins the three groups of modeling data together to form a complete three-dimensional model of the user;
Step seven: after the main-end computer completes the model assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe cooperates with the dynamic capture grating and the main-shaft long-focus probe to capture detailed changes on the user's limbs, while the horizontal gravity sensing plate under the soles also collects the user's center-of-gravity changes under different actions;
Step eight: data throughout the whole process are continuously optimized and uploaded to the main-end computer, where the virtual modeling image feeds back and displays the same actions.
The working principle is as follows: when the system is used, the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module; the two hands correspond respectively to the left-view and right-view three-dimensional image modules on either side, and the rear-view three-dimensional image module is positioned behind and below the user. After the equipment starts, the dynamic capture grating and the main-shaft long-focus probe in the three-dimensional image module are put into operation; at this moment the user needs to remain as still as possible while real-time modeling data are acquired through the dynamic capture grating and the main-shaft long-focus probe. The acquired data are mainly divided into the upper limbs, the lower limbs and the body. After the user data are acquired, virtual image modeling is carried out through computer software, with each group of data assigned an independent computer for modeling. The system extracts the existing data with the highest matching degree from the model database and the posture database according to the user's height and posture data, then fine-tunes the database model through a normal-state logic algorithm to ensure the coherence of the overall model. After completing static modeling, the branch computers establish a three-axis environment coordinate system with the horizontal gravity sensing plate as the basic origin, and the system automatically subtracts modeling data other than the user and non-specific objects during environment modeling. The data on the three branch-end computers are transmitted to the master model joint control terminal through a dedicated compression transmission channel, and the master model joint control terminal joins the three groups of modeling data together to form a complete three-dimensional model of the user. After the main-end computer completes the modeling, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe cooperates with the dynamic capture grating and the main-shaft long-focus probe to capture detailed changes on the user's limbs, while the horizontal gravity sensing plate under the soles also collects the user's center-of-gravity changes under different actions. Data throughout the whole process are continuously optimized and uploaded to the main-end computer, where the virtual modeling image feeds back and displays the same actions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A feedback action system based on a three-dimensional engine virtual image, comprising a three-dimensional image acquisition frame, characterized in that: the three-dimensional image acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the main-view, rear-view, left-view and right-view three-dimensional image modules share the same structure; the output end of the main-view three-dimensional image module is connected to the body modeling operation branch, the output end of the rear-view three-dimensional image module is connected to the lower-limb modeling operation branch, and the output ends of the left-view and right-view three-dimensional image modules are connected to the upper-limb modeling operation branch.
2. The three-dimensional engine avatar based feedback action system of claim 1, wherein: the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output end of the normal-state data module.
3. The three-dimensional engine avatar based feedback action system of claim 1, wherein: the input ends of the upper limb modeling operation branch, the lower limb modeling operation branch and the body modeling operation branch are connected with the output ends of the model database and the posture database.
4. The three-dimensional engine avatar based feedback action system of claim 1, wherein: the main-view three-dimensional image module comprises a dynamic capture grating, a main shaft long-focus probe and an auxiliary shaft micro-focus probe, and the output ends of the dynamic capture grating, the main shaft long-focus probe and the auxiliary shaft micro-focus probe are connected with the input ends of the animation synthesis module and the scene synthesis unit.
5. The three-dimensional engine avatar based feedback action system of claim 4, wherein: the input end of the animation synthesis module is connected with the output end of the light and shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of the animation synthesis module and the scene synthesis unit are connected with the output end of the basic origin coordinate.
6. The three-dimensional engine avatar based feedback action system of claim 1, wherein: the output ends of the upper limb modeling operation branch end, the lower limb modeling operation branch end and the body modeling operation branch end are connected with the input end of the compressed data channel, the output end of the compressed data channel is connected with the input end of the master model joint control terminal, and the input end of the master model joint control terminal is connected with the output end of the multi-path decompression module.
7. The three-dimensional engine avatar based feedback action system of claim 6, wherein: the master model joint control terminal comprises an animation synthesis module, the input end of the animation synthesis module is connected with the output ends of the dynamic synchronization unit, the action filtering unit and the frame number adjusting module, and the frame number adjusting module comprises a linear optimization unit and a distortion correction unit.
8. A method for feeding back actions based on a three-dimensional engine virtual image, using the system of any one of claims 1-7, characterized in that the method comprises the following steps:
the method comprises the following steps: the user station is in the monitoring range of the horizontal gravity sensing plate and faces the main-view three-dimensional image module, two hands respectively correspond to the left-view three-dimensional image module and the right-view three-dimensional image module on two sides, and the rear-view three-dimensional image module is positioned behind and below the user;
step two: after the equipment is operated, the dynamic capture grating and the main shaft long-focus probe in the three-dimensional image module are put into operation, at the moment, a user needs to keep a static picture as much as possible, and data are collected in real time through the dynamic capture grating and the main shaft long-focus probe;
step three: the collected data are mainly divided into upper limbs, lower limbs and a body, after the collection of user data is completed, virtual image modeling is carried out through computer software, and meanwhile, each group of data is provided with an independent computer to realize modeling;
step four: in the modeling process, the system can extract the existing data with the highest matching degree from the model database and the posture database according to the height and posture data of the user for direct application, and then the system can finely adjust the model in the database through a normal logic algorithm to ensure the harmony of the whole modeling;
step five: after the branch computer completes static modeling, a three-axis environment coordinate with a horizontal gravity sensing plate as a basic origin is established, and modeling data except for a user and non-specific objects can be automatically subtracted by the system in the environment modeling process;
step six: the data on the three component end computers are transmitted to the main model joint control terminal through a special compression transmission channel, and the main model joint control terminal joints and combines the three groups of modeling data together to form a complete user three-dimensional model;
step seven: after the main-end computer completes the module assembly, a user can display some actions on the horizontal gravity sensing plate, the auxiliary shaft micro-focus probe is matched with the dynamic capture grating and the main shaft long-focus probe to capture changes of some details on the limbs of the user, and meanwhile, the horizontal gravity sensing plate on the sole can also collect gravity center changes of the user under different actions;
step eight: throughout the process, the data are optimized and continuously uploaded to the main-end computer, where the virtual modeled avatar reproduces the user's actions as feedback.
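The "optimized" upload of step eight is unspecified in the patent; a moving-average smoothing pass over the captured samples is one plausible reading, sketched here under that assumption:

```python
def smooth_stream(samples, window=3):
    """Moving-average smoothing applied to a captured sample stream
    before upload to the main-end computer; the avatar would then
    replay the smoothed values. Window size is illustrative."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]  # trailing window ending at i
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = smooth_stream([0, 0, 3], window=3)
```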
CN202110447823.9A 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method Active CN113129413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110447823.9A CN113129413B (en) 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method

Publications (2)

Publication Number Publication Date
CN113129413A true CN113129413A (en) 2021-07-16
CN113129413B CN113129413B (en) 2023-05-16

Family

ID=76780123

Country Status (1)

Country Link
CN (1) CN113129413B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103134444A (en) * 2013-02-01 2013-06-05 同济大学 Double-field variable-focus three-dimensional measurement system
CN104700433A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Vision-based real-time general movement capturing method and system for human body
CN107274465A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 A kind of main broadcaster methods, devices and systems of virtual reality
US20170354353A1 (en) * 2016-06-08 2017-12-14 Korea Institute Of Science And Technology Motion capture system using fbg sensor
CN110087059A (en) * 2018-01-26 2019-08-02 四川大学 A kind of Interactive Free stereo display method for true three-dimension scene
CN110503707A (en) * 2019-07-31 2019-11-26 北京毛毛虫森林文化科技有限公司 A kind of true man's motion capture real-time animation system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Tao et al.: "Design of a dynamic three-dimensional facial expression acquisition ***", Optics and Precision Engineering *


Similar Documents

Publication Publication Date Title
AU2006282764B2 (en) Capturing and processing facial motion data
CN106600709A (en) Decoration information model-based VR virtual decoration method
CN202662016U (en) Real-time virtual fitting device
CN101277454A (en) Method for generating real time tridimensional video based on binocular camera
CN107065197B (en) Human eye tracking remote rendering real-time display method and system for VR glasses
CN101520902A (en) System and method for low cost motion capture and demonstration
CN203773476U (en) Virtual reality system based on 3D interaction
CN110503707A (en) A kind of true man's motion capture real-time animation system and method
CN109806580A (en) Mixed reality system and method based on wireless transmission
CN106126145A (en) A kind of display method and electronic equipment
JP2022512262A (en) Image processing methods and equipment, image processing equipment and storage media
CN105959667A (en) Three-dimensional image collection device and system
CN205005198U (en) Head -mounted display
CN113129413A (en) Virtual image feedback action system and method based on three-dimensional engine
CN207502836U (en) A kind of augmented reality display device
CN204300649U (en) One wears display frame
CN205621077U (en) Binocular vision reconsitution device based on range image
CN203825855U (en) Hot-line work simulation training system based on three-dimensional kinect camera
CN107087153A (en) 3D rendering generation method, device and VR equipment
CN111369653A (en) Three-dimensional animation system based on human face
CN111026264A (en) Recognition system for AR display equipment
CN206200977U (en) Portable distal end is come personally interaction platform
CN205318025U (en) Virtual reality display equipment
CN220918079U (en) Basketball professional action training correction system
CN110610536A (en) Method for displaying real scene for VR equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant