CN114020978B - Park digital roaming display method and system based on multi-source information fusion - Google Patents


Info

Publication number
CN114020978B
CN114020978B (application CN202111146316.8A)
Authority
CN
China
Prior art keywords
user
glove
virtual
park
interaction
Prior art date
Legal status
Active
Application number
CN202111146316.8A
Other languages
Chinese (zh)
Other versions
CN114020978A (en)
Inventor
赵鹏飞
王维
韩沫
刘海
张权
赵怡梦
魏一博
刘行易
秦烽铭
胡明康
Current Assignee
Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Original Assignee
Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Priority date
Filing date
Publication date
Application filed by Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Priority to CN202111146316.8A
Publication of CN114020978A
Application granted
Publication of CN114020978B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/904 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a park digital roaming display method and system based on multi-source information fusion. The method comprises: tracking the user's lower limbs with sensors in an omnidirectional treadmill to determine the user's movement speed and direction, and from these determining the user's movement state in the virtual scene of a three-dimensional park model; and tracking the user's upper limbs with VR gloves worn by the user, so that when the glove's operation on an interactive object is a touch, a touch effect is displayed according to preset interaction information, or when the operation is a grab, the corresponding virtual farming operation is performed on the object according to the user's action. The method lets the user roam and interact immersively in the virtual park, stimulating interest in roaming, increasing user participation, creating a good user experience, and helping the user better understand the park's development and operation.

Description

Park digital roaming display method and system based on multi-source information fusion
Technical Field
The invention relates to the field of virtual reality, in particular to a park digital roaming display method and system based on multi-source information fusion.
Background
Virtual Reality (VR) is a comprehensive technology that integrates three-dimensional tracking, pattern recognition, multimedia, and other technologies, and has three defining characteristics: immersion, interaction, and imagination. VR uses a computer to generate a realistic, interactive virtual three-dimensional environment; with special equipment such as headsets, data gloves, joysticks, and sensors, the user obtains an immersive virtual experience.
For the digital display of an agricultural park, VR technology combined with external devices such as an HTC Vive head-mounted display, control handles, and gesture-capture devices enables real-time roaming interaction, multidimensional and autonomous interactive operation, and viewing of the park scene from different angles. Three-dimensional roaming of an agricultural park based on VR technology breaks the time and space limitations of traditional park display design and presents the park three-dimensionally, from all angles, from the user's perspective, so that the user participates and interacts, with a feeling of being personally on the scene.
The user wears a VR headset such as the HTC Vive; the two infrared base stations of the HTC Vive capture the user's movement and direction in the real environment, and the virtual user moves accordingly in the virtual scene. In this way, the user can roam autonomously in the constructed virtual scene. However, because the VR glasses block the user's sight, the external environment cannot be perceived, and while moving the user may collide with walls or trip over connecting cables, posing safety hazards.
Because of these drawbacks, agricultural-park roaming systems usually restrict the user from walking freely in the display space and instead move the user's virtual viewpoint directly to the destination through a teleportation mechanism. For example, in a faithful simulation, a user at point A of the park would need to walk to reach point B; to ensure safety, the user is instead teleported directly from A to B. This design lacks any simulation of the user's walking action, so the user's sense of participation is insufficient and the user cannot truly become immersed in the virtual scene.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a park digital roaming display method and system based on multi-source information fusion.
The invention provides a park digital roaming display method based on multi-source information fusion, comprising: tracking the user's lower limbs with sensors in an omnidirectional treadmill to determine the user's movement speed and direction, and from these determining the user's movement state in the virtual scene of a three-dimensional park model; and tracking the user's upper limbs with VR gloves worn by the user, so that when the glove's operation on an interactive object is a touch, a touch effect is displayed according to preset interaction information, or when the operation is a grab, the corresponding virtual farming operation is performed on the object according to the user's action.
According to one embodiment of the invention, the method further comprises: obtaining, from a background database, real-time environment data of the real park collected by sensors, and associating the real-time environment data with the virtual-sensor interaction points at the corresponding positions in the scene; the associated interaction points display the real-time environment data when the user touches them through the VR glove, or display the real-time environment data directly at the virtual-sensor interaction points.
According to an embodiment of the invention, before the user's lower limbs are tracked through the sensors in the omnidirectional treadmill, the method further comprises: obtaining high-definition images of the park's buildings and roads, generating a three-dimensional model of the park, and performing texture mapping and light-and-shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model; and importing the preliminary model into the Unity platform, adding sky, ground, and illumination effects, and adjusting spatial parameters to obtain the final three-dimensional model, where the spatial parameters include three-dimensional position, rotation angle, and size.
According to an embodiment of the invention, after the three-dimensional model is obtained, the method further comprises: adding a [CameraRig] component at a predetermined location of the virtual scene of the three-dimensional model and adjusting it to fit the model size within the scene.
According to an embodiment of the invention, before the user's lower limbs are tracked through the sensors in the omnidirectional treadmill, the method further comprises: adding the treadmill's character-control prefab to the scene of the three-dimensional model and associating the omnidirectional treadmill with the VR glove; and adding a preset Movement Component script file and setting the user's maximum movement speed and gravity-sensing parameters in the virtual scene, so as to simulate the user's walking action in the virtual scene in real time.
According to an embodiment of the invention, tracking the upper limbs through the VR gloves worn by the user comprises: monitoring VR glove actions according to a touch-event script, based on the VR glove interaction-item script and collision-detection component mounted on an interactive object in the three-dimensional model scene; and monitoring VR glove actions according to the set grasp level and a grab-event script, based on the same interaction-item script and collision-detection component.
According to the park digital roaming display method based on multi-source information fusion, the interactive object is an interactive button, and monitoring the VR glove action according to the touch-event script comprises: monitoring VR glove actions according to a set click-event script, based on the VR glove key-interface script and collision-detection component mounted on the UI interactive button; correspondingly, when it is determined that the virtual finger corresponding to the VR glove clicks the interactive button, the information corresponding to the button is displayed, including two-dimensional pictures, videos, and text introductions of the park.
The invention also provides a park digital roaming display system based on multi-source information fusion, comprising: a lower-limb tracking module for tracking the user's lower limbs through the sensors in the omnidirectional treadmill, determining the user's movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model from that speed and direction; and an upper-limb tracking module for tracking the upper limbs through the VR gloves worn by the user, displaying the touch effect according to preset interaction information when the user's operation on an interactive object through the VR glove is a touch, or performing the corresponding virtual farming operation on the object according to the user's action when the operation is a grab.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the park digital roaming display method based on multi-source information fusion described above.
The present invention also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the park digital roaming display method based on multi-source information fusion described in any of the above.
With the park digital roaming display method and system based on multi-source information fusion provided by the invention, the sensors in the omnidirectional treadmill track the user's lower limbs and the VR gloves worn by the user track the upper limbs, so that park information is displayed and virtual farming operations are performed. Taking the user as the starting point, the user walks and interacts immersively in the virtual park, which stimulates interest in roaming, increases participation, creates a good user experience, and helps the user better understand the park's development and operation.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. The drawings described below are obviously some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of the park digital roaming display method based on multi-source information fusion provided by the invention;
FIG. 2 is a block diagram of the park digital roaming display system based on multi-source information fusion provided by the invention;
FIG. 3 is a schematic structural diagram of the park digital roaming display system based on multi-source information fusion provided by the invention;
Fig. 4 is a schematic structural diagram of an electronic device provided by the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The method and system for park digital roaming display based on multi-source information fusion of the present invention are described below with reference to figs. 1-4. Fig. 1 is a flow chart of the method; as shown in fig. 1, the method includes:
101. Track the user's lower limbs through the sensors in the omnidirectional treadmill, determine the user's movement speed and direction, and determine the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction.
102. Track the user's upper limbs through the VR gloves worn by the user; when the glove's operation on an interactive object is a touch, display the touch effect according to preset interaction information, or when the operation is a grab, perform the corresponding virtual farming operation on the object according to the user's action.
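As an illustration of step 101, the treadmill-to-virtual-movement mapping can be sketched in engine-agnostic form. This is a hypothetical sketch only: the patent's implementation is a Unity script, and the speed thresholds and coordinate convention below are assumptions for illustration.

```python
import math

WALK_THRESHOLD = 0.2   # m/s below this -> standing still (assumed value)
RUN_THRESHOLD = 2.0    # m/s at or above this -> running (assumed value)

def movement_state(speed_mps: float, heading_deg: float):
    """Classify treadmill sensor input into a movement state plus a unit
    direction vector for the virtual character (x = east, z = north)."""
    if speed_mps < WALK_THRESHOLD:
        return "idle", (0.0, 0.0)
    state = "run" if speed_mps >= RUN_THRESHOLD else "walk"
    rad = math.radians(heading_deg)
    return state, (math.sin(rad), math.cos(rad))

print(movement_state(1.0, 90.0))
```

The returned state and direction would then drive the virtual user's position update each frame.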
The technical scheme mainly comprises a virtual scene module and a roaming interaction module. The virtual scene module is built on the Unity engine and completes the construction of the virtual park scene.
In the digital display of an agricultural park, a gesture-capture device collects the user's gesture information so that the corresponding interactive or farming operation can be completed. A Leap Motion device is commonly used to collect the user's gestures, with a custom action-analysis module recognizing clicks, grabs, touches, and other actions. However, the Leap Motion's detection range is 25 mm to 600 mm, so the hands must stay within the device's detectable range, which is inconvenient during virtual farming operations. Moreover, the device provides no haptic feedback, so the user cannot perceive whether a virtual operation has been triggered, and the user's gestures cannot be captured finely.
In the invention, interaction is implemented with VR gloves. Specifically, the roaming interaction module is built on a VR headset, an omnidirectional treadmill, and VR gloves, for example the HTC Vive, Virtuix Omni, and Noitom Hi5 respectively. Based on the HTC Vive SDK, Virtuix Omni SDK, and Noitom Hi5 SDK, user operations are customized through script design, data are collected and responded to, and the user's actions are simulated in the virtual scene.
With the Virtuix Omni device, the user's range of motion is fixed, which removes the safety hazard, while the omnidirectional tracking equipment collects the user's motion data and faithfully simulates walking, enabling 360-degree free walking and running in the virtual environment. The Noitom Hi5 gesture-capture device removes the spatial limitation, collects the user's gestures with high precision, and simulates real farm-work operations.
With the park digital roaming display method based on multi-source information fusion provided by the invention, the sensors in the omnidirectional treadmill track the user's lower limbs and the VR gloves worn by the user track the upper limbs, so that park information is displayed and virtual farming operations are performed. Taking the user as the starting point, the user walks and interacts immersively in the virtual park, which stimulates interest in roaming, increases participation, creates a good user experience, and helps the user better understand the park's development and operation.
In one embodiment, the method further comprises: obtaining, from a background database, real-time environment data of the real park collected by sensors, and associating the real-time environment data with the virtual-sensor interaction points at the corresponding positions in the scene; the associated interaction points display the real-time environment data when the user touches them through the VR glove, or display the real-time environment data directly at the virtual-sensor interaction points.
In an agricultural park there are many sensors and intelligent devices, through which the park's intelligent, modern character can be presented. Existing display systems, however, focus on virtual roaming and scene display and neglect the display of multi-source agricultural data; the user cannot gain deeper insight into the park's development and philosophy, and the displayed content is monotonous.
Further, the technical scheme of this embodiment comprises a virtual scene module, a roaming interaction module, and a big-data display module. The big-data display module is implemented with the UGUI XChart component, and sensor data are parsed from JSON, realizing multi-source fusion and dynamic display of the data.
First, JSON data parsing: the agricultural data collected by the real park's sensors (air temperature, air humidity, soil temperature, illumination intensity, etc.) are parsed from JSON to obtain the data streams to be displayed, which are transmitted to the background database for storage.
Second, data visualization: the agricultural data stored in the background database are retrieved through the UGUI XChart component and displayed in the virtual scene, via script design, as line charts, bar charts, pie charts, radar charts, and so on. Wearing the HTC Vive, the user opens UI panels through the menu-interaction function of the roaming interaction module, views the agricultural data in real time, and gains a more intuitive sense of the park's intelligent development. The virtual-sensor interaction points can also display the data directly, and the user can view them after roaming to the corresponding sensor positions.
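The JSON-parsing step can be sketched as follows. The payload layout and field names are hypothetical assumptions, since the patent does not specify the sensor message format; in the real system the parsing happens in Unity and the series are bound to UGUI XChart charts.

```python
import json

# Hypothetical sensor payload; the keys ("sensors", "type", "value", "unit")
# are illustrative assumptions, not the format used by the patent's sensors.
raw = json.dumps({"sensors": [
    {"id": "s1", "type": "air_temperature", "value": 23.5, "unit": "C"},
    {"id": "s2", "type": "air_humidity", "value": 61.0, "unit": "%"},
    {"id": "s3", "type": "soil_temperature", "value": 18.2, "unit": "C"},
]})

def parse_sensor_stream(payload: str) -> dict:
    """Group readings by quantity so each chart series binds to one data type."""
    series: dict = {}
    for reading in json.loads(payload)["sensors"]:
        series.setdefault(reading["type"], []).append(
            (reading["id"], reading["value"], reading["unit"]))
    return series

print(parse_sensor_stream(raw))
```

Each resulting series would then be handed to one chart (line, bar, pie, or radar) for display in the virtual scene.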
With the park digital roaming display method based on multi-source information fusion of this embodiment, the park's sensor data are displayed dynamically and intuitively in the virtual environment based on big-data visualization technology, realizing fused display of multi-source data, presenting the park's current situation more comprehensively and truthfully, and improving the user's roaming immersion and participation.
In one embodiment, before the tracking of the lower limb of the user by the sensor in the universal treadmill, the method further comprises: obtaining high-definition images of park buildings and roads, generating a three-dimensional model of the park, and carrying out texture mapping and light and shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model; importing the preliminary three-dimensional model into a Unity platform, adding sky, ground and illumination effects, and performing space parameter adjustment to obtain the three-dimensional model; wherein the spatial parameters include three-dimensional position, rotation angle, and size.
Specifically, the virtual scene module is implemented as follows:
For park data acquisition, real-scene photographs of the park's building structures, building distribution, road layout, and architectural style can be taken with high-definition cameras and drones to obtain high-quality photos.
For three-dimensional modeling, building models can be made in 3ds Max from the photo materials acquired above, with texture mapping and light-and-shadow design.
For virtual scene construction, the three-dimensional model made above can be imported into the Unity platform, and effects such as virtual sky, ground, and illumination are added to the virtual scene to improve its realism. In the Unity platform, parameters such as the model's three-dimensional position, rotation angle, and size are adjusted, finally achieving an equal-proportion simulation of the real park and a realistic virtual park scene.
In one embodiment, after the three-dimensional model is obtained, the method further comprises: adding a [CameraRig] component at a predetermined location of the virtual scene of the three-dimensional model and adjusting it to fit the model size within the scene.
For VR scene construction, the SteamVR Plugin and the Noitom Hi5 Unity SDK are imported into the virtual scene built above, a [CameraRig]_Hi5 component is added to the scene, placed at the predetermined position, and resized to match the model size in the scene. [CameraRig]_Hi5 contains a child Camera (head) that serves as the VR camera. When the user wears the HTC Vive on the head and Noitom Hi5 gloves on the hands, the user enters the virtual scene, can observe it autonomously through 360 degrees, and can move the hands freely; the virtual two-hand model in the scene simulates the hand motion data in real time.
In one embodiment, before the tracking of the lower limb of the user by the sensor in the universal treadmill, the method further comprises: adding a role control preform of the universal running machine in a scene of the three-dimensional model, and associating the universal running machine with the VR glove; by adding a preset Movement Component script file and setting the maximum moving speed and gravity sensing value parameters of the user in the virtual scene, the real-time simulation of the walking action of the user in the virtual scene is realized.
Specifically, for walking-motion simulation, the Omni SDK is imported into the VR scene built above to complete tracking and simulation of the user's limb movements. An [Omni Character Controller] prefab (i.e., the character-control prefab) is added to the scene, and the [CameraRig]_Hi5 in the scene is assigned to its [Camera Reference] variable (associating the omnidirectional treadmill with the VR glove). By adding the [Omni Movement Component] script file and setting parameters such as the user's maximum movement speed and gravity-sensing value in the virtual scene, forward, backward, lateral, and other walking motions of the user are simulated in real time.
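The effect of the maximum-speed and gravity parameters can be illustrated with a simplified, engine-agnostic tick function. This is a sketch only: the real [Omni Movement Component] is a Unity script, and the constant-velocity gravity term below is a deliberate simplification.

```python
def step_character(pos, vel_input, dt, max_speed=3.0, gravity=9.8, grounded=True):
    """One simulation tick: clamp the horizontal speed to the configured
    maximum, then integrate position; apply a simple downward pull when the
    character is not grounded (a stand-in for the gravity-sensing value)."""
    x, y, z = pos
    vx, vz = vel_input
    speed = (vx * vx + vz * vz) ** 0.5
    if speed > max_speed:                 # enforce the maximum moving speed
        scale = max_speed / speed
        vx, vz = vx * scale, vz * scale
    if not grounded:
        y -= gravity * dt                 # simplified, velocity-like gravity
    return (x + vx * dt, y, z + vz * dt)

# A burst of treadmill input faster than max_speed is clamped before integration:
print(step_character((0.0, 0.0, 0.0), (10.0, 0.0), 0.1))
```

Default values for `max_speed` and `gravity` are illustrative assumptions, not values taken from the patent.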
The Hi5_Interaction_SDK is then imported into the configured VR scene to complete the customization of interactive operations.
In one embodiment, tracking the upper limbs through the VR gloves worn by the user comprises: monitoring VR glove actions according to a touch-event script, based on the VR glove interaction-item script and collision-detection component mounted on an interactive object in the three-dimensional model scene; and monitoring VR glove actions according to the set grasp level and a grab-event script, based on the same interaction-item script and collision-detection component.
For the virtual interaction response, the Hi5_Interaction_SDK is imported into the VR scene configured in the above embodiment to complete the customization of interactive operations.
Object touch: a VR glove interaction-item script, such as the Hi5_Glove_Interaction_Item script, is mounted on the interactive object, a collision-detection Collider component is added, and monitoring is done through a custom touch-event script. When the virtual hands touch the interactive object, the event listener's feedback is obtained, it is judged whether the operation is a two-hand touch, and the corresponding reaction is made, simulating the effect of the user touching an object in the real environment.
Object grabbing: the Hi5_Glove_Interaction_Item script is mounted on the interactive object, a collision-detection Collider component is added, a Grasp level is set, and monitoring is done through a custom grab-event script. When the virtual hands grab the interactive object, the event listener's feedback is obtained, it is judged whether the operation is a two-hand grab, and the corresponding reaction is made, simulating the effect of the user grabbing an object in the real environment and guiding the user to perform the corresponding virtual farming operation.
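The dispatch logic shared by the two listeners above can be sketched in engine-agnostic form. The Hi5 SDK scripts do this inside Unity; the grasp-level threshold and function names here are assumptions for illustration.

```python
GRASP_LEVEL_REQUIRED = 2  # assumed threshold; the actual Grasp level is set per object

def classify_glove_event(colliding: bool, grasp_level: int):
    """Map a glove-collider contact to the patent's two interaction types:
    a contact below the grasp threshold is a touch, at or above it a grab."""
    if not colliding:
        return None
    return "grab" if grasp_level >= GRASP_LEVEL_REQUIRED else "touch"

def react(event, obj: str) -> str:
    """Corresponding reaction: show the preset touch effect, or start the
    virtual farming operation tied to the grabbed object."""
    if event == "touch":
        return f"display touch effect for {obj}"
    if event == "grab":
        return f"begin virtual farming operation on {obj}"
    return "no reaction"

print(react(classify_glove_event(True, 3), "fruit"))
```

In the real system the two branches would trigger the preset interaction information and the farming animation, respectively.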
In one embodiment, the interactive object is an interactive button, and monitoring the VR glove action according to the touch-event script comprises: monitoring VR glove actions according to a set click-event script, based on the VR glove key-interface script and collision-detection component mounted on the UI interactive button; when it is determined that the virtual finger corresponding to the VR glove clicks the interactive button, the information corresponding to the button is displayed, including two-dimensional pictures, videos, and text introductions of the park.
For menu-interaction display, a VR glove key-interface script, such as the Hi5_Interface_Button script, is mounted on the UI interactive button, a collision-detection Collider component is added, and monitoring is done through a custom click-event script. When a virtual finger clicks the interactive button, the event listener's feedback is obtained, it is judged whether the operation is a finger click, and the corresponding reaction is made, simulating the effect of the user clicking a button in the real environment and letting the user view the park's two-dimensional pictures, videos, text introductions, and so on in the virtual scene.
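The click listener's control flow can likewise be sketched. The button id and the media mapping are hypothetical placeholders standing in for the Hi5_Interface_Button script plus its Collider.

```python
# Hypothetical button-to-content mapping; the ids and file names are
# illustrative only (the real content is the park's pictures, videos, text).
BUTTON_CONTENT = {
    "building_a_info": {"pictures": ["a_front.jpg"],
                        "video": "a_tour.mp4",
                        "text": "Introduction to building A"},
}

def on_button_click(button_id: str, finger_colliding: bool):
    """Only a finger-collider hit on a known button opens its info panel."""
    if not finger_colliding:
        return None                       # listener fired, but not a finger click
    return BUTTON_CONTENT.get(button_id)  # None for unknown buttons

print(on_button_click("building_a_info", True)["text"])
```

A returned content record would populate the pop-up UI panel in the virtual scene.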
Fig. 2 is a block diagram of the park digital roaming display system based on multi-source information fusion, which can be read together with the above embodiments; the VR system's operating flow based on those embodiments is as follows:
(1) The user starts the roaming experience system;
(2) Put on the low-friction shoes dedicated to the Omni device, step onto the Omni platform, and put on the movable waist ring. (The waist ring fixes the user within the platform area; the user can move freely through 360 degrees on the platform, and the ring prevents falling or stepping out of the sensing area.)
(3) The user wears HTC virtual headgear, and the hand wears Noitom Hi VR gloves. Ready for virtual park roaming.
Steps (4) to (6) enable autonomous virtual roaming and free interaction by the user across multiple areas, buildings, and interaction points in the park; the flow is illustrated here with building A.
(4) User walking: in the virtual environment, the user faces building A. On the platform, the user walks in place, while in the virtual scene the user advances toward building A. (The user walks in place within the platform area, and that motion is reflected as a change of the user's position in the virtual environment, much like running on a treadmill. The Omni device tracks the user's walking motion and determines whether the user is walking or running from the movement speed.)
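The walk/run decision described above can be sketched as a simple threshold on the tracked speed. The threshold values and function names are illustrative assumptions, not taken from the patent or the Omni SDK:

```python
def movement_state(speed_mps, run_threshold=2.0):
    """Classify the tracked in-place stepping speed into a locomotion state.

    speed_mps is the movement speed reported by the treadmill sensors;
    run_threshold (m/s) is an illustrative cutoff between walking and running.
    """
    if speed_mps <= 0.05:  # below sensor noise: treat as standing still
        return "idle"
    return "run" if speed_mps >= run_threshold else "walk"


def advance(position, direction, speed_mps, dt):
    """Move the avatar in the virtual scene along the tracked direction."""
    x, y = position
    dx, dy = direction
    return (x + dx * speed_mps * dt, y + dy * speed_mps * dt)
```

With this mapping, in-place stepping at 1 m/s toward building A moves the avatar 1 m per second through the virtual park, reproducing the treadmill analogy in the text.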
(5) User interaction: after the user reaches building A, which contains an interaction point, the user touches the interactive object through the VR glove's touch response (virtual interaction response — object touch), and the UI menu introducing building A pops up. Through the glove's menu interaction response (virtual interaction response — menu interaction), the user can then view pictures, text, and other content for building A. Through the glove's grabbing response (virtual interaction response — object grabbing), the user can perform virtual farming operations, simulating farm work activities such as fruit picking and watering.
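The grabbing response can be sketched as a threshold on finger flexion, echoing the "grabbing level" set in the method. The 0–1 bend scale, the threshold value, and all names here are assumptions for illustration, not the glove SDK's API:

```python
def grab_state(finger_bend, grab_level=0.6):
    """Map average finger flexion (assumed in [0, 1]) to a grab/release state.

    grab_level plays the role of the configured grabbing level; its value
    here is illustrative.
    """
    return "grab" if finger_bend >= grab_level else "release"


class Fruit:
    """Grabbable object used to simulate the fruit-picking farm activity."""

    def __init__(self):
        self.picked = False

    def on_grab_event(self, state):
        if state == "grab":
            self.picked = True  # the fruit follows the virtual hand


fruit = Fruit()
fruit.on_grab_event(grab_state(0.8))  # a firm grip picks the fruit
```

A release event (flexion below the level) would conversely detach the object, completing the grab-and-release cycle used for farming interactions.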
(6) Big-data display: building A also contains a virtual sensor model. Through the glove's touch interaction response (virtual interaction response — object touch), the user can touch the sensor model to pop up the corresponding environment data and independently select among different display modes, enabling multi-source dynamic display of real-time agricultural data.
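The association between backend sensor readings and virtual interaction points can be sketched as follows. The database fields, sensor identifier, and class names are illustrative assumptions:

```python
class VirtualSensorPoint:
    """Sketch of a virtual sensor model bound to one real sensor's data."""

    def __init__(self, sensor_id):
        self.sensor_id = sensor_id
        self.latest = None

    def update(self, readings):
        # Pull this sensor's real-time record from a backend snapshot,
        # i.e. the association with the background database described above.
        self.latest = readings.get(self.sensor_id)

    def on_touch(self):
        # Touching the model pops up the associated environment data.
        return self.latest if self.latest is not None else "no data"


# Hypothetical backend snapshot of real-time park environment data.
backend = {"greenhouse_1": {"temp_C": 24.6, "humidity_pct": 61}}
point = VirtualSensorPoint("greenhouse_1")
point.update(backend)
```

Periodically calling `update` with a fresh backend snapshot would keep every interaction point's popup synchronized with the real park's sensors.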
(7) The user exits the virtual roaming system.
The park digital roaming display system based on multi-source information fusion provided by the invention is described below; this system and the park digital roaming display method described above may be referred to in correspondence with each other.
Fig. 3 is a schematic structural diagram of the park digital roaming display system based on multi-source information fusion. As shown in Fig. 3, the system includes a lower limb tracking module 301 and an upper limb tracking module 302. The lower limb tracking module 301 is configured to track the user's lower limbs through the sensors in the universal treadmill, determine the user's movement speed and direction, and determine the user's movement state in the virtual scene of the park's three-dimensional model from that speed and direction. The upper limb tracking module 302 is configured to track the upper limbs through the VR gloves worn by the user and, when the user's operation on an interactive object through the VR gloves is a touch, display the touch effect according to preset interaction information, or, when that operation is a grab, perform the corresponding virtual farming operation on the interactive object according to the user's action.
The system embodiment provided in this embodiment of the present invention implements the above method embodiments; for the specific flow and details, refer to the method embodiments above, which are not repeated here.
In the park digital roaming display system based on multi-source information fusion provided by the embodiment of the invention, the user's lower limbs are tracked through the sensors in the universal treadmill and the upper limbs are tracked through the VR gloves worn by the user, so that park information is displayed to the user and virtual farming operations are executed. Taking the user as the basic starting point, this lets the user walk and interact immersively in the virtual park, stimulates the user's interest in roaming, increases participation, creates a good user experience, and helps the user better understand the development and operation of the park.
Fig. 4 is a schematic structural diagram of an electronic device according to the present invention. As shown in Fig. 4, the electronic device may include a processor 401, a communication interface (Communications Interface) 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 communicate with one another via the communication bus 404. The processor 401 may invoke logic instructions in the memory 403 to perform the park digital roaming display method based on multi-source information fusion, the method comprising: tracking the user's lower limbs through the sensors in the universal treadmill, determining the user's movement speed and direction, and determining the user's movement state in the virtual scene of the park's three-dimensional model from that speed and direction; and tracking the upper limbs through VR gloves worn by the user, and, when the user's operation on an interactive object through the VR gloves is a touch, displaying the touch effect according to preset interaction information, or, when that operation is a grab, performing the corresponding virtual farming operation on the interactive object according to the user's action.
Further, the logic instructions in the memory 403 may be implemented in the form of software functional units and, when sold or used as a standalone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention — in essence, the part contributing to the prior art, or a part of the technical solution — may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the park digital roaming display method based on multi-source information fusion provided by the methods described above, the method comprising: tracking the user's lower limbs through the sensors in the universal treadmill, determining the user's movement speed and direction, and determining the user's movement state in the virtual scene of the park's three-dimensional model from that speed and direction; and tracking the upper limbs through VR gloves worn by the user, and, when the user's operation on an interactive object through the VR gloves is a touch, displaying the touch effect according to preset interaction information, or, when that operation is a grab, performing the corresponding virtual farming operation on the interactive object according to the user's action.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the park digital roaming display method based on multi-source information fusion provided in the foregoing embodiments, the method comprising: tracking the user's lower limbs through the sensors in the universal treadmill, determining the user's movement speed and direction, and determining the user's movement state in the virtual scene of the park's three-dimensional model from that speed and direction; and tracking the upper limbs through VR gloves worn by the user, and, when the user's operation on an interactive object through the VR gloves is a touch, displaying the touch effect according to preset interaction information, or, when that operation is a grab, performing the corresponding virtual farming operation on the interactive object according to the user's action.
The system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units — they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution — in essence, or the part contributing to the prior art — may be embodied in the form of a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A park digital roaming display method based on multi-source information fusion, characterized by comprising the following steps:
Tracking the lower limbs of a user through a sensor in the universal running machine, determining the corresponding moving speed and direction of the user, and determining the moving state of the user in a virtual scene of a three-dimensional model of a park according to the moving speed and direction;
performing upper limb tracking through VR gloves worn by the user, and, when the user's operation on an interactive object through the VR gloves is a touch, displaying the touch effect according to preset interaction information, or, when the user's operation on the interactive object through the VR gloves is a grab, performing the corresponding virtual farming operation on the interactive object according to the user's operation;
The method further comprises the steps of:
Real-time environment data of a real park collected by a sensor are obtained from a background database, and the real-time environment data are respectively associated with interaction points in virtual sensors at corresponding positions of a scene;
The associated interaction points are used for displaying according to the real-time environment data or displaying the real-time environment data directly at the virtual sensor interaction points under the condition that the associated virtual sensor interaction points are touched by a user through the VR glove;
The VR glove worn by the user performs upper limb tracking, including:
based on VR glove interaction item scripts and collision detection components mounted on interaction objects in a three-dimensional model scene, monitoring VR glove actions according to touch event scripts;
Based on VR glove interaction item scripts and collision detection components mounted on interaction objects in a three-dimensional model scene, monitoring VR glove actions according to the set grabbing levels and grabbing event scripts;
The interactive object is an interactive button, and monitoring VR glove actions according to the touch event script based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene comprises:
based on the VR glove key interface script and the collision detection component mounted on the UI interaction button, monitoring VR glove actions according to the set click event script;
correspondingly, under the condition that the virtual finger corresponding to the VR glove is determined to click the interactive button, displaying information corresponding to the interactive button is executed, and the displayed information comprises two-dimensional pictures, videos and text introduction of the park.
2. The method for digital roaming display of a campus based on multi-source information fusion of claim 1, further comprising, prior to the user's lower limb tracking via the sensor in the universal treadmill:
obtaining high-definition images of park buildings and roads, generating a three-dimensional model of the park, and carrying out texture mapping and light and shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model;
Importing the preliminary three-dimensional model into a Unity platform, adding sky, ground and illumination effects, and performing space parameter adjustment to obtain the three-dimensional model;
wherein the spatial parameters include three-dimensional position, rotation angle, and size.
3. The method for digital roaming display of a campus based on multi-source information fusion according to claim 2, further comprising, after the obtaining the three-dimensional model:
A CameraRig component is added at a predetermined location of the three-dimensional model's virtual scene and adjusted to fit the model size within the scene.
4. The method for digital roaming display of a campus based on multi-source information fusion of claim 1, further comprising, prior to the user's lower limb tracking via the sensor in the universal treadmill:
adding a role control preform of the universal running machine in a scene of the three-dimensional model, and associating the universal running machine with the VR glove;
By adding a preset Movement Component script file and setting the maximum moving speed and gravity sensing value parameters of the user in the virtual scene, the real-time simulation of the walking action of the user in the virtual scene is realized.
5. A system for digital roaming display of a campus based on multi-source information fusion, wherein the system is configured to implement the digital roaming display method of a campus based on multi-source information fusion according to any one of claims 1 to 4, the system comprising:
the lower limb tracking module is used for tracking the lower limb of the user through a sensor in the universal running machine, determining the corresponding moving speed and direction of the user, and determining the moving state of the user in the virtual scene of the three-dimensional model of the park according to the moving speed and direction;
The upper limb tracking module is used for tracking the upper limb through the VR glove worn by the user, and displaying the touch effect according to preset interaction information under the condition that the operation of the user on the interactive object through the VR glove is touch or performing corresponding virtual farming operation on the interactive object according to the operation of the user under the condition that the operation of the user on the interactive object through the VR glove is grabbing.
6. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the steps of the method for digital roaming presentation on a campus based on multi-source information fusion as claimed in any one of claims 1 to 4.
7. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the method for digital roaming display of a campus based on multi-source information fusion of any one of claims 1 to 4.
CN202111146316.8A 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion Active CN114020978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111146316.8A CN114020978B (en) 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion

Publications (2)

Publication Number Publication Date
CN114020978A CN114020978A (en) 2022-02-08
CN114020978B (en) 2024-06-11

Family

ID=80055003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111146316.8A Active CN114020978B (en) 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion

Country Status (1)

Country Link
CN (1) CN114020978B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494667B (en) * 2022-02-21 2022-11-04 北京华建云鼎科技股份公司 Data processing system and method for adding crash box

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106601062A (en) * 2016-11-22 2017-04-26 山东科技大学 Interactive method for simulating mine disaster escape training
CN108919940A (en) * 2018-05-15 2018-11-30 青岛大学 A kind of Virtual Campus Cruise System based on HTC VIVE
CN109067822A (en) * 2018-06-08 2018-12-21 珠海欧麦斯通信科技有限公司 The real-time mixed reality urban service realization method and system of on-line off-line fusion
CN111667560A (en) * 2020-06-04 2020-09-15 成都飞机工业(集团)有限责任公司 Interaction structure and interaction method based on VR virtual reality role
CN111694426A (en) * 2020-05-13 2020-09-22 北京农业信息技术研究中心 VR virtual picking interactive experience system, method, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109690450B (en) * 2017-11-17 2020-09-29 腾讯科技(深圳)有限公司 Role simulation method in VR scene and terminal equipment

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN106601062A (en) * 2016-11-22 2017-04-26 山东科技大学 Interactive method for simulating mine disaster escape training
CN108919940A (en) * 2018-05-15 2018-11-30 青岛大学 A kind of Virtual Campus Cruise System based on HTC VIVE
CN109067822A (en) * 2018-06-08 2018-12-21 珠海欧麦斯通信科技有限公司 The real-time mixed reality urban service realization method and system of on-line off-line fusion
CN111694426A (en) * 2020-05-13 2020-09-22 北京农业信息技术研究中心 VR virtual picking interactive experience system, method, electronic equipment and storage medium
CN111667560A (en) * 2020-06-04 2020-09-15 成都飞机工业(集团)有限责任公司 Interaction structure and interaction method based on VR virtual reality role

Non-Patent Citations (2)

Title
Development of a nuclear power plant emergency auxiliary *** based on virtual technology; Chen Yanfang; Liu Haipeng; Nuclear Safety; 2019-12-30 (06); full text *
Fundamental theories for driver behavior research — a review of multi-source information fusion algorithms; Wang Lei; Wang Xiaoyuan; Liu Zhiping; Liu Haihong; Transportation Standardization; 2007-01-15 (01); full text *

Also Published As

Publication number Publication date
CN114020978A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
US9342230B2 (en) Natural user interface scrolling and targeting
CN105027030B (en) The wireless wrist calculating connected for three-dimensional imaging, mapping, networking and interface and control device and method
TW202004421A (en) Eye tracking with prediction and late update to GPU for fast foveated rendering in an HMD environment
CN101231752B (en) Mark-free true three-dimensional panoramic display and interactive apparatus
CN111191322B (en) Virtual maintainability simulation method based on depth perception gesture recognition
US9799143B2 (en) Spatial data visualization
CN106873778A (en) A kind of progress control method of application, device and virtual reality device
CN107004279A (en) Natural user interface camera calibrated
US9799142B2 (en) Spatial data collection
KR20150103723A (en) Extramissive spatial imaging digital eye glass for virtual or augmediated vision
CN112198959A (en) Virtual reality interaction method, device and system
WO2016109250A1 (en) Sample based color extraction for augmented reality
TW201246088A (en) Theme-based augmentation of photorepresentative view
US20180247463A1 (en) Information processing apparatus, information processing method, and program
CN103501869A (en) Manual and camera-based game control
CN103501868A (en) Control of separate computer game elements
WO2020236315A1 (en) Real-world object recognition for computing device
US20140173524A1 (en) Target and press natural user input
Capece et al. Graphvr: A virtual reality tool for the exploration of graphs with htc vive system
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
CN114020978B (en) Park digital roaming display method and system based on multi-source information fusion
CN110389664B (en) Fire scene simulation analysis device and method based on augmented reality
CN202003298U (en) Three-dimensional uncalibrated display interactive device
CN109643182B (en) Information processing method and device, cloud processing equipment and computer program product
CN107122002A (en) A kind of wear-type visual device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant