CN107219888A - Indoor expansible interactive walkthrough realization method and system based on Kinect - Google Patents
- Publication number
- CN107219888A, CN201710369481.7A, CN201710369481A
- Authority
- CN
- China
- Prior art keywords
- virtual
- coordinate
- kinect
- scenes
- expansible
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a Kinect-based indoor expansible interactive walkthrough implementation method, comprising the following steps: a. building, in a 3D engine, an environment that interacts with a head-mounted display device; b. importing a 3D scene model into the 3D engine, constructing the 3D scene, and displaying the 3D scene in the head-mounted display device; c. creating a mobile route in the 3D scene and a virtual point located on the mobile route; d. acquiring target skeleton point position information of human limbs with a Kinect sensor, the Kinect uploading the sequence frames of target skeleton point position information to an action recognition module in chronological order; e. the action recognition module tracking changes in the target skeleton point position information, accessing an action command database, and determining the virtual action command; f. the virtual action command controlling the movement of the virtual point. The present invention uses a somatosensory device to obtain human actions or state information, controls the conversion and movement of 3D scenes, sets mobile routes in the 3D scenes, and realizes continuous switching of 3D scenes, giving the user an immersive, on-the-spot experience.
Description
Technical field
The present invention relates to the field of roaming systems, and in particular to a Kinect-based indoor expansible interactive walkthrough method and system.
Background technology
For a real estate client, that is, a consumer intending to purchase property, being able to see the finished interior of a house that has not yet been delivered, and to witness the future internal structure of one's own home, creates a brand-new experience. Adding interactive and expansible elements allows the realtor to analyse the client's home preferences from the data collected during use, increasing client stickiness and purchase intention and ultimately closing the sale; this way of doing business is the coming trend.
In recent years, with the development of Kinect and related technologies, immersive head-mounted displays (hereinafter "HMDs") have slowly entered people's sight. Many developers have made interesting products, such as virtual world travel: with an HMD one can see the Eiffel Tower in France or the pyramids of Egypt, viewing foreign scenery without leaving the room.
However, such products have the following defects:
First, poor interactivity: some products convert film and television material into 3D signals by technical means and output it to the HMD; the content is fixed, and the user can only receive it passively.
Second, a poor degree of freedom: in the virtual world travel mentioned above, what is seen is a pre-made landscape image; the user can only switch scenes or switch viewing angles, with little expansibility.
Summary of the invention
It is an object of the invention to provide a Kinect-based indoor expansible interactive walkthrough implementation method and system having the advantages of good interactivity and a high degree of freedom.
The purpose of the present invention is achieved through the following technical solutions:
A Kinect-based indoor expansible interactive walkthrough implementation method comprises the following steps:
a. building, in a 3D engine, an environment that interacts with a head-mounted display device;
b. importing a 3D scene model into the 3D engine, constructing the 3D scene, and displaying the 3D scene in the head-mounted display device;
c. creating a mobile route in the 3D scene and a virtual point located on the mobile route;
d. acquiring target skeleton point position information of human limbs with a Kinect sensor; the Kinect uploads the sequence frames of target skeleton point position information to an action recognition module in chronological order;
e. the action recognition module tracks changes in the target skeleton point position information, accesses the action command database, and determines the virtual action command;
f. the virtual action command controls the movement of the virtual point.
In a preferred embodiment, the specific steps for creating the mobile route in the 3D scene are: a two-dimensional coordinate system is established in the 3D scene; the coordinates covered by the mobile route in the two-dimensional coordinate system constitute a path coordinate set containing, arranged in sequence, a start coordinate, path coordinates, and an end coordinate; the coordinate of the virtual point lies within the path coordinate set.
In a preferred embodiment, the Kinect sensor obtains the skeleton point position information of human limbs, extracts from it the coordinate data of the target skeleton points in the Kinect coordinate system, and adds a time mark; the Kinect uploads the time-marked coordinate data to the action recognition module in chronological order.
In a preferred embodiment, the action recognition module performs a vector difference calculation on the coordinate data of adjacent sequence frames, obtaining the coordinate displacement direction and coordinate displacement distance between adjacent sequence frames.
In a preferred embodiment, the method for determining the virtual action command is to query the action command database according to the coordinate displacement direction and the coordinate displacement distance respectively; if both the coordinate displacement direction and the coordinate displacement distance satisfy the setting of the same action command, that action command is returned. Alternatively, if the combination of the relative positions of at least two skeleton coordinates together with the elapsed time marks satisfies the setting of an action command, that action command is returned.
In a preferred embodiment, a virtual control disk is set in the scene, and the virtual control disk is provided with direction cursors; when a virtual action command is triggered, the direction cursor matching the direction of the virtual action command is displayed on the virtual control disk.
An indoor expansible interactive walkthrough system includes the following modules:
a 3D engine, for importing 3D scene models, constructing 3D scenes, and creating in the 3D scene model a mobile route and a virtual point located on the mobile route;
a head-mounted display device, for displaying the 3D scenes;
a 3D scene database, for storing the 3D scenes and their mobile routes;
an action command database, for storing virtual action commands;
a Kinect, for obtaining the target skeleton point position information of human limbs and uploading the sequence frames of target skeleton point position information to the action recognition module in chronological order;
an action recognition module, for tracking changes in the target skeleton point position information, querying the action command database according to the changes, determining the virtual action command, and controlling the movement of the virtual point according to the virtual action command.
In a preferred embodiment, the 3D engine sets a virtual control disk in the scene; the virtual control disk is provided with direction cursors and, when a virtual action command is triggered, displays on the virtual control disk the direction cursor matching the direction of the virtual action command.
The beneficial effects of the present invention are:
1. The present invention can provide the user with an immersive experience. Using 3D engine rendering and the interaction between the HMD and the engine, the 3D scene is displayed through the head-mounted display device, realising immersive indoor house roaming for the user. The user can see the future scenery, obtain an unprecedented sense of experience, observe each room, and personalise the interior decoration, furniture, appliances, and so on according to preference.
2. Compared with traditional products, which fix a single viewpoint or jump from one point to another, the present invention uses a somatosensory device to obtain human actions or state information to control the conversion and movement of the virtual image, sets mobile routes in the 3D scene, and realises continuous switching of the virtual image, giving the user a stronger sense of immersion.
Brief description of the drawings
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the steps of the Kinect-based indoor expansible interactive walkthrough implementation method described in the embodiment of the present invention;
Fig. 2 is a system architecture diagram of the Kinect-based indoor expansible interactive walkthrough system described in the embodiment of the present invention.
In the figures:
1. 3D engine; 2. head-mounted display device; 3. 3D scene database; 4. action command database; 5. Kinect; 6. action recognition module; 7. virtual control disk.
Embodiment
To make the purpose, technical scheme, and advantages of the embodiments of the present invention clearer, the technical scheme in the embodiments of the present invention is described clearly and completely below in conjunction with the accompanying drawings. It is clear that the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments of the present invention, generally described and illustrated herein in the accompanying drawings, can be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments of the invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar labels throughout represent the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the present invention; they are not to be considered as limiting the invention.
The present invention is further illustrated below with specific embodiments and with reference to the accompanying drawings.
As shown in Fig. 1, a Kinect-based indoor expansible interactive walkthrough implementation method of an embodiment of the present invention comprises the following steps:
building, in the 3D engine 1, an environment that interacts with the head-mounted display device 2;
importing 3D scene models into the 3D engine 1, constructing the 3D scenes, and displaying the 3D scenes in the head-mounted display device 2;
creating a mobile route in the 3D scene and a virtual point that moves along the route;
acquiring the target skeleton point position information of human limbs with the Kinect 5 sensor; the Kinect 5 uploads the sequence frames of target skeleton point position information to the action recognition module 6 in chronological order;
the action recognition module 6 tracks changes in the target skeleton point position information, queries the action command database 4, and determines the virtual action command;
the virtual action command controls the movement of the virtual point.
An environment that interacts with the head-mounted display device 2 is built in the 3D engine 1. The head-mounted display device 2 receives the control information sent by the 3D engine 1, extracts the corresponding 3D scene model from the scene database, and switches between the corresponding 3D scene models; the 3D scene models are matched with the control information by labels.
The 3D scene models are imported into the 3D engine 1, the scenes are constructed, and the 3D scenes are displayed in the head-mounted display device 2. The specific steps for creating the mobile route in the 3D scene are: a two-dimensional coordinate system is established in each constructed 3D scene, and a mobile route is added in the two-dimensional coordinate system. The mobile route is formulated according to the features of the 3D scene, and the mobile route of each 3D scene is different. The coordinates covered by the mobile route constitute a path coordinate set containing a start coordinate, path coordinates, and an end coordinate arranged in order; the coordinate of the virtual point lies within the path coordinate set, and switching path coordinates in order realises the coordinate movement of the virtual point.
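The path mechanism described above can be sketched as follows. This is a minimal illustration only, not the patented implementation; the class and method names are hypothetical:

```python
class MovePath:
    """A mobile route as an ordered path coordinate set:
    start coordinate, intermediate path coordinates, end coordinate."""

    def __init__(self, coords):
        self.coords = list(coords)   # ordered: start, ..., end
        self.index = 0               # the virtual point sits on one path coordinate

    @property
    def virtual_point(self):
        return self.coords[self.index]

    def step(self, delta=1):
        # Switching path coordinates in order moves the virtual point,
        # clamped between the start and end coordinates.
        self.index = max(0, min(len(self.coords) - 1, self.index + delta))
        return self.virtual_point

path = MovePath([(0, 0), (1, 0), (2, 0), (2, 1)])
assert path.virtual_point == (0, 0)   # starts at the start coordinate
path.step()
path.step()
assert path.virtual_point == (2, 0)   # advanced two coordinates along the route
path.step(-1)
assert path.virtual_point == (1, 0)   # moved back one coordinate
```

Each 3D scene would carry its own such path, since the text states the mobile route differs per scene.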
The target skeleton point position information of human limbs is obtained with the Kinect 5 sensor, and the Kinect 5 uploads the sequence frames of target skeleton point position information to the action recognition module 6 in chronological order. The specific steps are: the Kinect 5 sensor obtains the skeleton point position information of human limbs, extracts from it the coordinate data of the target skeleton points in the Kinect 5 coordinate system, and adds time marks; the Kinect 5 uploads the time-marked coordinate data to the action recognition module 6 in chronological order.
The action recognition module 6 performs a vector difference calculation on the coordinate data of adjacent time marks. For example, coordinate A (5, 6) and coordinate A1 (6, 7) are two coordinates with adjacent time marks on the same path, A1 being the coordinate at the time mark after A. Subtracting coordinate A from coordinate A1 gives the relative X value and relative Y value, from which the coordinate displacement distance D is calculated by the Pythagorean theorem. The coordinate displacement direction is determined from the signs of the relative X and Y values: taking the previous coordinate A as the quadrant centre, an A coordinate system is formed, and from the signs of the relative X and Y values it can be inferred in which quadrant of the A coordinate system A1 lies, thereby obtaining the coordinate displacement direction and coordinate displacement distance from A to A1.
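The vector difference calculation above can be sketched as follows, with the quadrant numbering an assumption (the text only says the quadrant of A1 in the A-centred system gives the direction):

```python
import math

def displacement(a, b):
    """Vector difference between two adjacent time-marked coordinates.

    Returns (distance, quadrant): the distance by the Pythagorean theorem,
    and the quadrant of b in a coordinate system centred on the previous
    point a (1: +x+y, 2: -x+y, 3: -x-y, 4: +x-y), or None if b lies on
    an axis of the a-centred system.
    """
    dx, dy = b[0] - a[0], b[1] - a[1]       # relative X and Y values
    dist = math.hypot(dx, dy)               # Pythagorean theorem
    if dx > 0 and dy > 0:
        quadrant = 1
    elif dx < 0 and dy > 0:
        quadrant = 2
    elif dx < 0 and dy < 0:
        quadrant = 3
    elif dx > 0 and dy < 0:
        quadrant = 4
    else:
        quadrant = None
    return dist, quadrant

# Worked example from the text: A(5, 6) and A1(6, 7)
dist, quad = displacement((5, 6), (6, 7))
assert quad == 1                          # A1 is up and to the right of A
assert abs(dist - math.sqrt(2)) < 1e-9    # D = sqrt(1^2 + 1^2)
```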
The method for determining the virtual action command is to query the action command database according to the coordinate displacement direction and the coordinate displacement distance respectively; if both the coordinate displacement direction and the coordinate displacement distance satisfy the setting of the same action command, that action command is returned. Alternatively, if the combination of the relative positions of at least two groups of skeleton coordinates together with the elapsed time marks satisfies the setting of an action command, that action command is returned.
Specifically, action commands are divided into two kinds: the former are gestures, the latter are postures.
1. Gesture examples:
The right-hand skeleton point position coordinate is obtained; the coordinate moving direction is to the left and the coordinate displacement is 400. This is defined as a right-hand wave to the left, which controls the virtual point to move to the left.
The right-hand skeleton point position coordinate is obtained; the coordinate moving direction is forward and the coordinate displacement is 100. This is defined as a right-hand press forward, which controls the virtual point to click at the current position.
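The two gesture examples can be sketched as a lookup against the action command database. The direction labels, command names, and the treatment of the displacement values as minimum thresholds are illustrative assumptions:

```python
def classify_gesture(direction, distance):
    """Map a displacement of the right-hand skeleton point to a virtual
    action command, mirroring the two worked gesture examples."""
    if direction == "left" and distance >= 400:
        return "MOVE_LEFT"      # right-hand wave left -> virtual point moves left
    if direction == "forward" and distance >= 100:
        return "CLICK"          # right-hand press forward -> click at current position
    return None                 # no matching entry in the command database

assert classify_gesture("left", 400) == "MOVE_LEFT"
assert classify_gesture("forward", 100) == "CLICK"
assert classify_gesture("left", 50) is None   # too small to match any command
```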
2. Posture example:
The skeleton point position coordinates of the right hand and left hand are obtained, with a time mark interval of 500. The right hand is directly below the right shoulder and the left hand is at the lower-left of the left shoulder; the angle between the skeleton point position coordinates of the right hand and left hand is 135°. A coordinate system is established with the centre of the angle as the origin; the right-hand skeleton point coordinate lies on the Y axis of this coordinate system and the left-hand skeleton point coordinate lies in its first quadrant. If this lasts for 4 time mark intervals (2000 milliseconds in total) with the coordinate displacement distance of both the left hand and the right hand being 0, the action command to quit the program is returned.
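The hold condition of this posture (both hands still for 4 consecutive time mark intervals) can be sketched as below. Only the stillness check is shown, not the 135° angle test; the frame field names are assumptions:

```python
def is_quit_posture(frames, hold_intervals=4):
    """Check the 'quit' hold condition: the coordinate displacement distance
    of both hands is 0 across each of `hold_intervals` consecutive time
    mark intervals (4 intervals of 500 ms = 2000 ms in the text).

    `frames` is a chronological list of dicts with 'right_hand' and
    'left_hand' (x, y) coordinates, one per time mark."""
    if len(frames) < hold_intervals + 1:
        return False
    recent = frames[-(hold_intervals + 1):]
    for prev, cur in zip(recent, recent[1:]):
        # any movement of either hand breaks the hold
        if prev["right_hand"] != cur["right_hand"]:
            return False
        if prev["left_hand"] != cur["left_hand"]:
            return False
    return True

still = {"right_hand": (2, 0), "left_hand": (-2, -1)}
assert is_quit_posture([still] * 5) is True
moved = [still] * 4 + [{"right_hand": (3, 0), "left_hand": (-2, -1)}]
assert is_quit_posture(moved) is False
```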
A virtual control disk 7 is set in the scene and provided with direction cursors. When a virtual action command is triggered, the direction cursor matching the direction of the virtual action command is displayed on the virtual control disk 7. For example, when the virtual point moves to the left, the left button on the virtual control disk lights up; when the virtual point clicks at the current position, the centre button of the virtual control disk lights up.
As shown in Fig. 2, an indoor expansible interactive walkthrough system includes the following modules:
a 3D engine 1, for importing 3D scene models, constructing 3D scenes, and creating in the 3D scene model a mobile route and a virtual point located on the mobile route;
a head-mounted display device 2, for displaying the 3D scenes;
a 3D scene database 3, for storing the 3D scenes and their mobile routes;
an action command database 4, for storing virtual action commands;
a Kinect 5, for obtaining the target skeleton point position information of human limbs and uploading the sequence frames of target skeleton point position information to the action recognition module 6 in chronological order;
an action recognition module 6, for tracking changes in the target skeleton point position information, querying the action command database 4 according to the changes, determining the virtual action command, and controlling the movement of the virtual point according to the virtual action command.
The 3D engine 1 sets a virtual control disk 7 in the scene; the virtual control disk 7 is provided with direction cursors and, when a virtual action command is triggered, displays on the virtual control disk 7 the direction cursor matching the direction of the virtual action command.
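The feedback of the virtual control disk can be sketched as a mapping from triggered commands to the cursor or button that lights up. The command names and cursor labels are illustrative assumptions:

```python
# Which cursor/button on the virtual control disk lights up for each
# virtual action command, following the examples in the text.
CURSOR_FOR_COMMAND = {
    "MOVE_LEFT": "left",    # virtual point moves left -> left button lights
    "CLICK":     "center",  # click at current position -> centre button lights
}

def highlight(command):
    """Return the control-disk cursor to light when a command triggers,
    or None when the command has no cursor on the disk."""
    return CURSOR_FOR_COMMAND.get(command)

assert highlight("MOVE_LEFT") == "left"
assert highlight("CLICK") == "center"
```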
The present invention is not limited to the above preferred embodiments. Any product in another form that anyone may derive under the enlightenment of the present invention, whatever changes are made to its shape or structure, falls within the scope of protection of the present invention as long as its technical scheme is identical or similar to that of the present application.
Claims (8)
1. A Kinect-based indoor expansible interactive walkthrough implementation method, characterised by comprising the following steps:
a. building, in a 3D engine, an environment that interacts with a head-mounted display device;
b. importing a 3D scene model into the 3D engine, constructing the 3D scene, and displaying the 3D scene in the head-mounted display device;
c. creating a mobile route in the 3D scene and a virtual point located on the mobile route;
d. acquiring target skeleton point position information of human limbs with a Kinect sensor, the Kinect uploading the sequence frames of target skeleton point position information to an action recognition module in chronological order;
e. the action recognition module tracking changes in the target skeleton point position information, accessing an action command database, and determining the virtual action command;
f. the virtual action command controlling the movement of the virtual point.
2. The Kinect-based indoor expansible interactive walkthrough implementation method according to claim 1, characterised in that the specific steps for creating the mobile route in the 3D scene are: a two-dimensional coordinate system is established in the 3D scene; the coordinates covered by the mobile route in the two-dimensional coordinate system constitute a path coordinate set containing, arranged in sequence, a start coordinate, path coordinates, and an end coordinate; the coordinate of the virtual point lies within the path coordinate set.
3. The Kinect-based indoor expansible interactive walkthrough implementation method according to claim 1, characterised in that the Kinect sensor obtains the skeleton point position information of human limbs, extracts from it the coordinate data of the target skeleton points in the Kinect coordinate system, and adds a time mark; the Kinect uploads the time-marked coordinate data to the action recognition module in chronological order.
4. The Kinect-based indoor expansible interactive walkthrough implementation method according to claim 3, characterised in that the action recognition module performs a vector difference calculation on the coordinate data of adjacent sequence frames, obtaining the coordinate displacement direction and coordinate displacement distance between adjacent sequence frames.
5. The Kinect-based indoor expansible interactive walkthrough implementation method according to claim 4, characterised in that the method for determining the virtual action command is to query the action command database according to the coordinate displacement direction and the coordinate displacement distance respectively; if both the coordinate displacement direction and the coordinate displacement distance satisfy the setting of the same action command, that action command is returned; alternatively, if the combination of the relative positions of at least two skeleton coordinates together with the elapsed time marks satisfies the setting of an action command, that action command is returned.
6. The Kinect-based indoor expansible interactive walkthrough implementation method according to claim 1, characterised in that a virtual control disk provided with direction cursors is set in the scene; when a virtual action command is triggered, the direction cursor matching the direction of the virtual action command is displayed on the virtual control disk.
7. An indoor expansible interactive walkthrough system, characterised by including the following modules:
a 3D engine, for importing 3D scene models, constructing 3D scenes, and creating in the 3D scene model a mobile route and a virtual point located on the mobile route;
a head-mounted display device, for displaying the 3D scenes;
a 3D scene database, for storing the 3D scenes and their mobile routes;
an action command database, for storing virtual action commands;
a Kinect, for obtaining the target skeleton point position information of human limbs and uploading the sequence frames of target skeleton point position information to the action recognition module in chronological order;
an action recognition module, for tracking changes in the target skeleton point position information, querying the action command database according to the changes, determining the virtual action command, and controlling the movement of the virtual point according to the virtual action command.
8. The indoor expansible interactive walkthrough system according to claim 7, characterised in that the 3D engine sets a virtual control disk in the scene; the virtual control disk is provided with direction cursors and, when a virtual action command is triggered, displays on the virtual control disk the direction cursor matching the direction of the virtual action command.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710369481.7A CN107219888A (en) | 2017-05-23 | 2017-05-23 | Indoor expansible interactive walkthrough realization method and system based on Kinect |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107219888A true CN107219888A (en) | 2017-09-29 |
Family
ID=59944334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710369481.7A Pending CN107219888A (en) | 2017-05-23 | 2017-05-23 | Indoor expansible interactive walkthrough realization method and system based on Kinect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107219888A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103116857A (en) * | 2013-02-01 | 2013-05-22 | 武汉百景互动科技有限责任公司 | Virtual sample house wandering system based on body sense control |
CN103519788A (en) * | 2013-10-18 | 2014-01-22 | 南京师范大学 | Attention scenario evaluation system based on Kinect interaction |
CN105511602A (en) * | 2015-11-23 | 2016-04-20 | 合肥金诺数码科技股份有限公司 | 3d virtual roaming system |
CN106327589A (en) * | 2016-08-17 | 2017-01-11 | 北京中达金桥技术股份有限公司 | Kinect-based 3D virtual dressing mirror realization method and system |
CN206115390U (en) * | 2016-07-29 | 2017-04-19 | 青岛市经纬蓝图信息技术有限公司 | Virtual roaming device in digit coastal city based on virtual reality helmet |
-
2017
- 2017-05-23 CN CN201710369481.7A patent/CN107219888A/en active Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191593A (en) * | 2018-08-27 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Motion control method, device and the equipment of virtual three-dimensional model |
CN110968281A (en) * | 2018-09-30 | 2020-04-07 | 千寻位置网络有限公司 | Scene presenting method and device, execution terminal, center console and control system |
CN110968281B (en) * | 2018-09-30 | 2023-09-08 | 千寻位置网络有限公司 | Scene presentation method and device, execution terminal, center console and control system |
CN110134478A (en) * | 2019-04-28 | 2019-08-16 | 深圳市思为软件技术有限公司 | The scene conversion method, apparatus and terminal device of panoramic scene |
CN110134478B (en) * | 2019-04-28 | 2022-04-05 | 深圳市思为软件技术有限公司 | Scene conversion method and device of panoramic scene and terminal equipment |
CN111028339A (en) * | 2019-12-06 | 2020-04-17 | 国网浙江省电力有限公司培训中心 | Behavior action modeling method and device, electronic equipment and storage medium |
CN111028339B (en) * | 2019-12-06 | 2024-03-29 | 国网浙江省电力有限公司培训中心 | Behavior modeling method and device, electronic equipment and storage medium |
CN111698425A (en) * | 2020-06-22 | 2020-09-22 | 四川易热科技有限公司 | Method for realizing consistency of real scene roaming technology |
CN111698425B (en) * | 2020-06-22 | 2021-11-23 | 四川可易世界科技有限公司 | Method for realizing consistency of real scene roaming technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107219888A (en) | Indoor expansible interactive walkthrough realization method and system based on Kinect | |
CN103246351B (en) | A kind of user interactive system and method | |
CN106652590B (en) | Teaching method, teaching identifier and tutoring system | |
JP2019061707A (en) | Control method for human-computer interaction and application thereof | |
CN110908504B (en) | Augmented reality museum collaborative interaction method and system | |
CN106683197A (en) | VR (virtual reality) and AR (augmented reality) technology fused building exhibition system and VR and AR technology fused building exhibition method | |
CN105373224A (en) | Hybrid implementation game system based on pervasive computing, and method thereof | |
CN105027030A (en) | Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing | |
CN106873767A (en) | The progress control method and device of a kind of virtual reality applications | |
CN101960820A (en) | A media system and method | |
CN1932799A (en) | System and method for simulating real three-dimensional virtual network travel | |
CN109696961A (en) | Historical relic machine & equipment based on VR technology leads reward and realizes system and method, medium | |
CN107957775A (en) | Data object exchange method and device in virtual reality space environment | |
CN110298873A (en) | Construction method, construction device, robot and the readable storage medium storing program for executing of three-dimensional map | |
CN104731343A (en) | Virtual reality man-machine interaction children education experience system based on mobile terminal | |
CN110531847B (en) | Social contact method and system based on augmented reality | |
CN107463262A (en) | A kind of multi-person synergy exchange method based on HoloLens | |
CN103257707B (en) | Utilize the three-dimensional range method of Visual Trace Technology and conventional mice opertaing device | |
CN106066688B (en) | A kind of virtual reality exchange method and device based on wearable gloves | |
CN103019386A (en) | Method for controlling human-machine interaction and application thereof | |
CN106371613B (en) | The VR starry sky production of Collaborative Visualization programming and observation system | |
CN109584361A (en) | A kind of equipment cable is virtually pre-installed and trajectory measurement method and system | |
CN109806580A (en) | Mixed reality system and method based on wireless transmission | |
CN105955488B (en) | A kind of method and apparatus of operation control terminal | |
CN205594583U (en) | Virtual impression system of VR based on BIM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170929 ||