CN111915738A - Method for realizing virtual reality and head-mounted virtual reality equipment


Info

Publication number
CN111915738A
CN111915738A
Authority
CN
China
Prior art keywords
virtual
virtual reality
display position
inclination angle
content
Prior art date
Legal status
Withdrawn
Application number
CN202010809020.9A
Other languages
Chinese (zh)
Inventor
徐雪峰
杜文龙
潘洋宇
王悦
Current Assignee
Jiangsu Vocational College of Electronics and Information
Original Assignee
Jiangsu Vocational College of Electronics and Information
Priority date
Filing date
Publication date
Application filed by Jiangsu Vocational College of Electronics and Information filed Critical Jiangsu Vocational College of Electronics and Information
Priority to CN202010809020.9A
Publication of CN111915738A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00 - Measuring angles
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P15/00 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P15/02 - Measuring acceleration; Measuring deceleration; Measuring shock by making use of inertia forces using solid seismic masses
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084 - Scaling of whole images or parts thereof in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/269 - Analysis of motion using gradient-based methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the technical field of virtual reality and discloses a method for realizing virtual reality and a head-mounted virtual reality device. The head-mounted virtual reality device comprises: a fixing band, an upper elastic band, a glasses shell, a glasses frame, lenses, an action control module, a content adjusting module, a virtual reality module and a data line interface. The invention tracks the orientation and posture of the human trunk through the action control module, uses the obtained trunk orientation to control the moving direction of the virtual world character, and uses the obtained attitude angles to control the character's walking actions, thereby overcoming the mismatch between visual and bodily perception that causes strong dizziness for the user. Meanwhile, the content adjusting module avoids the view-angle jump that would result from updating the virtual content directly from the wrong position to the correct position, reducing the user's perception of the offset-correction process while guaranteeing the display effect of the virtual content.

Description

Method for realizing virtual reality and head-mounted virtual reality equipment
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a method for realizing virtual reality and a head-mounted virtual reality device.
Background
Virtual reality (abbreviated VR) is a practical technology that emerged in the 20th century. It combines computing, electronic information and simulation technology; its basic realization is a computer-simulated virtual environment that provides people with a sense of immersion. With the continuous development of productivity and science, demand for VR technology is growing across industries. VR technology has made great progress and is gradually becoming a new field of science and technology. Virtual reality is accepted by more and more people: a user can experience highly realistic sensations in the virtual world, where the simulated environment is hard to distinguish from reality, so that people feel personally present in the scene. Virtual reality also engages all human senses, including the auditory, visual, tactile, gustatory and olfactory perception systems. Finally, its powerful simulation capability realizes genuine human-computer interaction, allowing people to operate freely and obtain realistic feedback from the environment during operation. It is precisely these features of presence, multi-sensory perception and interactivity that make virtual reality technology popular. However, with existing methods for realizing virtual reality and existing head-mounted virtual reality equipment, users easily become dizzy after wearing the device for a long time; moreover, when virtual content cannot be displayed accurately, both the interaction between the user and the virtual picture and the user's viewing experience are affected.
In summary, the problems and disadvantages of the prior art are: with existing methods for realizing virtual reality and existing head-mounted virtual reality equipment, users easily become dizzy after wearing the device for a long time; moreover, when virtual content cannot be displayed accurately, both the interaction between the user and the virtual picture and the user's viewing experience are affected.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for realizing virtual reality and a head-mounted virtual reality device.
The invention is realized in such a way that a method for realizing virtual reality comprises the following steps:
Step one, fix the device on the user's head with the fixing band and adjust the tightness with the upper elastic band, so that the user's eyes face the glasses frame. Acquire an inverted, reduced real image of the target object with the objective lens, and invert the acquired image with the erecting lens.
Step two, process the inverted, reduced real image through the eyepiece using the virtual reality program of the virtual reality module to form an upright, magnified virtual image, and construct a virtual scene corresponding to the target object in the virtual reality interaction environment.
Step three, access a mobile phone or computer through the data line interface to obtain a virtual video source, and synthesize the upright, magnified virtual image with the obtained virtual video source corresponding to the target object.
Step four, construct a virtual world character model with three-dimensional software through the action control module, and track the orientation and posture of the human trunk with the inertial sensor.
Step five, use the trunk orientation acquired by the inertial sensor to control the moving direction of the virtual world character, and use the obtained attitude angles to control the character's walking actions.
Step six, obtain the orientation and attitude angles of the current human body by double integration of the gyroscope's angular acceleration values over time.
Step seven, iteratively compute the error function of the attitude angle by gradient descent, continuously correcting the error until the error value of the current iteration falls within a set range. When integrating over time, divide the whole integration into several segments using periodically appearing reference points, where a reference point is a position whose speed is close to zero, found from the measurements of the acceleration sensor in the inertial sensor at sudden turns.
Step eight, in the process that the body returns to the upright state, the moving speed of the virtual world character linearly decreases until the virtual world character is static, and the control operation of virtual reality walking is realized.
And step nine, acquiring the control action detected through the touch area through a content adjusting module, and generating a virtual content control instruction corresponding to the control action.
Step ten, acquiring the current position and attitude information of the terminal equipment; and determining the theoretical display position of the virtual content displayed by the terminal equipment in the virtual space according to the position and posture information.
Step eleven, when the theoretical display position is not matched with the actual display position in the virtual space, judging whether the terminal equipment is in a motion state or not according to the change information of the position and posture information.
Step twelve, sending a virtual content control instruction to the terminal equipment, and detecting whether the virtual content displayed by the terminal equipment has deviation, wherein the deviation is used for representing that an error exists between an actual display position and a theoretical display position of the virtual content in a virtual space.
Step thirteen, when the terminal equipment is in a motion state, determining the current motion direction of the terminal equipment; and acquiring the offset direction of the actual display position relative to the theoretical display position.
Fourteen, when the current motion direction is matched with the offset direction, adjusting the display position of the virtual content in the virtual space.
And step fifteen, matching the actual display position of the adjusted virtual content in the virtual space with the theoretical display position to realize the adjustment operation of the virtual content.
Further, in the fifth step, the method for controlling the walking motion of the virtual world character by using the obtained attitude angle includes:
the forward speed, the backward speed, the leftward translation speed and the rightward translation speed of the virtual world character are controlled by utilizing the forward inclination angle, the backward inclination angle, the leftward inclination angle and the rightward inclination angle of the human body trunk in a one-to-one correspondence mode.
Further, controlling the forward speed, backward speed, leftward translation speed and rightward translation speed of the virtual world character in one-to-one correspondence with the forward, backward, leftward and rightward inclination angles of the human trunk specifically includes:
When the inertial sensor acquires the forward, backward, left and right inclination angles of the human trunk, it transmits them to the computation and control platform, which runs an application program simulating the virtual reality environment. When the forward, backward, left or right inclination angle exceeds the corresponding preset angle, the forward, backward, leftward-translation or rightward-translation action of the character in the simulated virtual reality environment is triggered, and the corresponding movement speed is proportional to the magnitude of the inclination angle.
Further, in step seven, during the iterative computation of the attitude-angle error function by gradient descent, when the periodically appearing reference points are used to divide the whole integration into segments for time integration, the error within each segment is suppressed by median filtering.
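The per-segment median filtering mentioned above could look like the following pure-Python sketch (the window size is an illustrative assumption):

```python
def median_filter(errors, window=3):
    """Replace each per-segment error with the median of its local
    neighborhood, suppressing isolated spikes while leaving smooth
    error drift intact. Edge positions use a truncated window."""
    filtered = []
    for i in range(len(errors)):
        lo = max(0, i - window // 2)
        hi = min(len(errors), i + window // 2 + 1)
        neighborhood = sorted(errors[lo:hi])
        filtered.append(neighborhood[len(neighborhood) // 2])
    return filtered

# A single spurious 5.0 error among 0.1 values is removed.
print(median_filter([0.1, 0.1, 5.0, 0.1, 0.1]))
```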
Further, in a fourteenth step, the current motion direction includes a motion component of each coordinate axis of a spatial coordinate system, and the offset direction includes an offset component of each coordinate axis of the spatial coordinate system;
the method for adjusting the display position of the virtual content in the virtual space when the current motion direction is matched with the offset direction comprises the following steps:
when the direction of the motion component of the terminal device on a target coordinate axis is consistent with the direction of the offset component of the virtual content on the target coordinate axis, moving the display position of the virtual content in the virtual space along the direction opposite to the offset component of the virtual content on the target coordinate axis, wherein the target coordinate axis is any one of the coordinate axes.
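The per-axis rule above, where a correction is applied only on axes whose motion component agrees in sign with the offset component, can be sketched as follows; the fixed correction step is an illustrative assumption, not a value from the patent:

```python
import math

def correct_position(display_pos, motion_dir, offset, step=0.1):
    """For each coordinate axis, if the device's motion component and
    the content's offset component point the same way, move the display
    position opposite to the offset on that axis; other axes are left
    unchanged so the user does not perceive the correction."""
    corrected = list(display_pos)
    for axis in range(len(corrected)):
        if motion_dir[axis] * offset[axis] > 0:
            corrected[axis] -= math.copysign(step, offset[axis])
    return corrected

# Only the x axis matches (motion +, offset +), so only x is nudged back.
print(correct_position([1.0, 2.0, 3.0], (1.0, -1.0, 0.0), (0.5, 0.4, 0.2)))
```

Correcting only along axes the device is already moving on is what hides the adjustment from the user: the change is masked by the device's own motion.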
Further, in a fourteenth step, the method for adjusting the display position of the virtual content in the virtual space further includes:
acquiring an offset value of the virtual content in the virtual space according to the actual display position and the theoretical display position;
gradually adjusting the display position of the virtual content in the virtual space according to the offset value until the actual display position of the virtual content in the virtual space is matched with the theoretical display position.
Further, the gradually adjusting the display position of the virtual content in the virtual space according to the offset value of the virtual content includes:
acquiring the current movement speed of the terminal equipment;
determining a current adjustment amount of the display position of the virtual content in the virtual space according to the offset value and the current movement speed;
and adjusting the display position of the virtual content in the virtual space according to the current adjustment amount.
Another object of the present invention is to provide a head-mounted virtual reality device applying the method for implementing virtual reality, wherein the head-mounted virtual reality device comprises:
the glasses comprise a fixing band, an upper elastic band, a glasses shell, a glasses frame, glasses, a virtual reality module, an action control module, a content adjusting module and a data line interface.
The back end of the fixing band is connected with an elastic band; the front end of the fixing band is connected with the spectacle shell; the left side of the front surface of the glasses shell is provided with a data line interface; a spectacle frame is embedded in the spectacle shell; the two sides in the frame are embedded with spectacle lenses; the left side of the lower part in the mirror frame is provided with an action control module; a content adjusting module is arranged at the right side of the lower part in the mirror frame; the data line interface is respectively connected with the virtual reality module, the action control module and the content adjusting module through circuit lines;
the virtual reality module is used for carrying out image synthesis on the acquired virtual image and the virtual scene through a virtual reality program and displaying virtual reality;
the action control module is used for carrying out virtual reality walking control operation through the inertial sensor;
and the content adjusting module is used for adjusting the virtual content.
It is a further object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface for implementing said method of implementing virtual reality when executed on an electronic device.
Another object of the present invention is to provide a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the method for implementing virtual reality.
By combining all the technical schemes, the invention has the following advantages and positive effects: the invention tracks the orientation and posture of the human trunk through the action control module, uses the obtained trunk orientation to control the moving direction of the virtual world character, and uses the obtained attitude angles to control the character's walking actions, thereby overcoming the mismatch between visual and bodily perception that causes strong dizziness for the user. Meanwhile, the content adjusting module detects whether the virtual content displayed by the terminal device has deviated, where the deviation represents an error between the actual display position and the theoretical display position of the virtual content in the virtual space. When the virtual content has deviated, the module detects whether the terminal device is in a motion state; when the terminal device is in a motion state, it adjusts the display position of the virtual content in the virtual space so that the adjusted actual display position matches the theoretical display position. Adjusting the offset virtual content only while the terminal device is moving avoids the view-angle jump that would result from updating the virtual content directly from the wrong position to the correct position, reduces the user's perception of the offset-correction process, and guarantees the display effect of the virtual content.
Drawings
Fig. 1 is a flowchart of a method for implementing virtual reality according to an embodiment of the present invention.
Fig. 2 is a block diagram of a head-mounted virtual reality device according to an embodiment of the present invention;
In the figure: 1. fixing band; 2. upper elastic band; 3. glasses shell; 4. glasses frame; 5. lens; 6. action control module; 7. content adjustment module; 8. data line interface; 9. virtual reality module.
Fig. 3 is a flowchart of a method for performing virtual reality walking control operation by an inertial sensor according to an embodiment of the present invention.
Fig. 4 is a flowchart of a method for image synthesis of an acquired virtual image and a virtual scene by a virtual reality program according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for adjusting virtual content by a content adjusting module according to an embodiment of the present invention.
Detailed Description
In order to further understand the contents, features and effects of the present invention, the following embodiments are illustrated and described in detail with reference to the accompanying drawings.
The structure of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method for implementing virtual reality provided in the embodiment of the present invention includes the following steps:
s101, fixing the head of a user through a fixing belt; the tightness is adjusted by the upper elastic band; the user glasses are opposite to the glasses frame; and accessing a mobile phone or a computer through a data line interface to obtain a virtual video source.
S102, performing virtual reality walking control operation by using an inertial sensor through an action control module; and adjusting the virtual content through a content adjusting module.
And S103, synthesizing the acquired virtual image and the virtual scene by using a virtual reality program through the virtual reality module, and displaying virtual reality.
As shown in fig. 2, the head-mounted virtual reality device provided in the embodiment of the present invention includes: the glasses comprise a fixing band 1, an upper elastic band 2, a glasses shell 3, a glasses frame 4, glasses 5, an action control module 6, a content adjusting module 7, a data line interface 8 and a virtual reality module 9.
The back end of the fixing band 1 is connected with an elastic band 2; the front end of the fixing band 1 is connected with the spectacle shell 3; the left side of the front of the glasses shell 3 is provided with a data line interface 8; a spectacle frame 4 is embedded in the spectacle shell 3; two sides of the inner side of the spectacle frame 4 are embedded with spectacle lenses 5; the left side of the lower part in the mirror frame 4 is provided with an action control module 6; a content adjusting module 7 is arranged at the right side of the lower part in the lens frame 4; the data line interface 8 is respectively connected with the action control module 6, the content adjusting module 7 and the virtual reality module 9 through circuit lines;
the action control module 6 is used for carrying out virtual reality walking control operation through the inertial sensor;
a content adjusting module 7, configured to perform an adjusting operation on the virtual content;
and the virtual reality module 9 is configured to perform image synthesis on the acquired virtual image and the virtual scene through a virtual reality program, and perform virtual reality display.
The invention is further described with reference to specific examples.
Example 1
As shown in fig. 1 and fig. 3, the method for implementing virtual reality according to the embodiment of the present invention includes:
s201, constructing a virtual world character model through three-dimensional software; and the orientation and posture of the human body trunk are tracked by using the inertial sensor.
S202, controlling the moving direction of the virtual world character by using the obtained human body orientation; and controlling the walking action of the virtual world character by using the obtained attitude angle.
S203, when the body returns to the upright state, the moving speed of the virtual world character linearly decreases until the virtual world character is still.
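The linear deceleration in S203 amounts to subtracting a constant rate each frame and clamping at zero; a tiny sketch (the rate and timestep are illustrative values):

```python
def decelerate(speed, rate, dt):
    """One frame of linear deceleration: reduce the character's speed by
    rate*dt and clamp at zero so the character comes to rest and stays
    there once the body has returned upright."""
    return max(0.0, speed - rate * dt)

v = 2.0
for _ in range(5):
    v = decelerate(v, rate=5.0, dt=0.1)
print(v)  # the character is at rest
```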
The invention obtains the orientation and attitude angles of the current human body by double integration of the gyroscope's angular acceleration values over time. Meanwhile, the error function of the attitude angle is iteratively computed by gradient descent, continuously correcting the error until the error value of the current iteration falls within a set range. When integrating over time, the whole integration is divided into several segments using periodically appearing reference points, where a reference point is a position whose speed is close to zero, found from the measurements of the acceleration sensor in the inertial sensor at sudden turns.
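Following the description above, which integrates the gyroscope's angular acceleration twice over time and splits the integration at near-zero-speed reference points, a minimal Python sketch might look like this; the sample values and the zero-speed tolerance are illustrative assumptions:

```python
def integrate_attitude(angular_accel, dt):
    """Double-integrate angular acceleration samples (rad/s^2): the
    first integration yields angular velocity, the second the attitude
    angle. Plain rectangular integration; a real IMU pipeline would
    also fuse other sensors to bound drift."""
    vel, angle, angles = 0.0, 0.0, []
    for a in angular_accel:
        vel += a * dt
        angle += vel * dt
        angles.append(angle)
    return angles

def find_reference_points(speeds, eps=0.05):
    """Indices where the accelerometer-derived speed is close to zero;
    these periodic near-rest points split the integration into short
    segments so drift cannot accumulate across them."""
    return [i for i, v in enumerate(speeds) if abs(v) < eps]

# Constant angular acceleration a over time t gives angle ~ 0.5*a*t^2.
angles = integrate_attitude([2.0] * 1000, 0.001)
print(round(angles[-1], 2))   # close to the analytic 0.5*2*1^2 = 1.0
print(find_reference_points([0.0, 0.4, 0.6, 0.01, 0.5, 0.0]))
```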
In the iterative computation of the attitude-angle error function by gradient descent provided by the embodiment of the invention, when the periodically appearing reference points are used to divide the whole integration into segments for time integration, the error within each segment is suppressed by median filtering.
The walking action of the virtual world character controlled by the obtained attitude angle provided by the embodiment of the invention specifically comprises the following steps: the forward speed, the backward speed, the leftward translation speed and the rightward translation speed of the virtual world character are controlled by utilizing the forward inclination angle, the backward inclination angle, the leftward inclination angle and the rightward inclination angle of the human body trunk in a one-to-one correspondence mode.
The embodiment of the invention provides a method for controlling the forward speed, the backward speed, the leftward translation speed and the rightward translation speed of a virtual world character by utilizing the forward inclination angle, the backward inclination angle, the leftward inclination angle and the rightward inclination angle of the human trunk in a one-to-one correspondence manner, which specifically comprises the following steps:
when the forward inclination angle, the backward inclination angle, the left inclination angle and the right inclination angle of the human body are acquired by the inertial sensor, the forward inclination angle, the backward inclination angle, the left inclination angle and the right inclination angle are transmitted to the calculation control platform, and the calculation control platform runs an application program to simulate a virtual reality environment; when the current inclination angle, the back inclination angle, the left inclination angle and the right inclination angle are larger than the preset angles, triggering the actions of forward movement, backward movement, leftward translation and rightward translation of the character in the virtual reality environment of the simulation; the corresponding speed of the moving is in direct proportion to the size of the inclination angle.
Example 2
As shown in fig. 1 and fig. 4, as a preferred embodiment, the method for implementing virtual reality according to the embodiment of the present invention for image synthesis of an acquired virtual image and a virtual scene by a virtual reality program includes:
s301, acquiring an inverted reduced real image of the target object by using an objective lens, and inverting the acquired inverted reduced real image through an erect lens.
S302, the reversed reduced real image is processed through an eyepiece by a virtual reality module by using a virtual reality program to form an upright enlarged virtual image, and a virtual scene corresponding to the target object is constructed in a virtual reality interaction environment.
And S303, accessing a mobile phone or a computer through a data line interface to obtain a virtual video source, and performing image synthesis on the vertically amplified virtual image and the obtained virtual video source corresponding to the target object.
Example 3
As shown in fig. 1 and fig. 5, as a preferred embodiment, the method for implementing virtual reality according to the embodiment of the present invention adjusts virtual content through a content adjustment module, and includes:
s401, acquiring the control action detected through the touch area, and generating a virtual content control instruction corresponding to the control action.
S402, sending a virtual content control instruction to a terminal device, and detecting whether the virtual content displayed by the terminal device has deviation, wherein the deviation is used for representing that an error exists between an actual display position and a theoretical display position of the virtual content in a virtual space.
S403, when the virtual content has deviated, detect whether the terminal device is in a motion state; when the terminal device is in a motion state, adjust the display position of the virtual content in the virtual space so that the adjusted actual display position of the virtual content in the virtual space matches the theoretical display position.
The adjusting of the display position of the virtual content in the virtual space when the terminal device is in a motion state provided by the embodiment of the invention comprises:
when the terminal equipment is in a motion state, determining the current motion direction of the terminal equipment;
acquiring the offset direction of the actual display position relative to the theoretical display position;
when the current motion direction matches the offset direction, adjusting a display position of the virtual content in the virtual space.
The current motion direction provided in the embodiment of the present invention includes a motion component on each coordinate axis of a spatial coordinate system, and the offset direction includes an offset component on each coordinate axis of the spatial coordinate system. When the current motion direction matches the offset direction, adjusting the display position of the virtual content in the virtual space includes:
when the direction of the motion component of the terminal device on a target coordinate axis is consistent with the direction of the offset component of the virtual content on the target coordinate axis, moving the display position of the virtual content in the virtual space along the direction opposite to the offset component of the virtual content on the target coordinate axis, wherein the target coordinate axis is any one of the coordinate axes.
The method for adjusting the display position of the virtual content in the virtual space provided by the embodiment of the invention comprises the following steps:
acquiring an offset value of the virtual content in the virtual space according to the actual display position and the theoretical display position;
gradually adjusting the display position of the virtual content in the virtual space according to the offset value until the actual display position of the virtual content in the virtual space is matched with the theoretical display position.
The step of gradually adjusting the display position of the virtual content in the virtual space according to the offset value of the virtual content provided by the embodiment of the present invention includes:
acquiring the current movement speed of the terminal equipment;
determining a current adjustment amount of the display position of the virtual content in the virtual space according to the offset value and the current movement speed;
and adjusting the display position of the virtual content in the virtual space according to the current adjustment amount.
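The stepwise convergence described above might look like the following sketch. The proportional-gain formula, the `gain` constant, and the function names are assumptions for illustration; the patent specifies only that the per-frame adjustment depends on the offset value and the current movement speed.

```python
def step_adjust(offset: float, speed: float, gain: float = 0.1) -> float:
    """Current adjustment amount for one frame: proportional to both the
    remaining offset and the device's movement speed, clamped so it never
    overshoots the remaining offset."""
    adjustment = gain * speed * offset
    return min(abs(adjustment), abs(offset)) * (1 if offset > 0 else -1)

def converge(actual: float, theoretical: float, speed: float,
             tol: float = 1e-3) -> float:
    """Gradually adjust `actual` toward `theoretical` until they match
    within `tol` (one axis shown for brevity)."""
    while abs(theoretical - actual) > tol:
        actual += step_adjust(theoretical - actual, max(speed, 0.1))
    return actual
```

Scaling the step by speed means a fast-moving device closes the gap quickly, while a nearly stationary one drifts back slowly enough to be unnoticeable.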
Before detecting whether the virtual content displayed by the terminal device is offset, the method further includes:
acquiring the current position and posture information of the terminal device;
determining, according to the position and posture information, the theoretical display position in the virtual space of the virtual content displayed by the terminal device;
detecting whether the terminal device is in a motion state when the virtual content is offset includes: when the theoretical display position does not match the actual display position in the virtual space, judging whether the terminal device is in a motion state according to the change information of the position and posture information.
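The two checks above (offset detection and motion-state judgment from pose change) can be sketched minimally as below. The function names and the tolerance thresholds are assumptions; the patent does not give concrete values.

```python
import math

def has_offset(actual, theoretical, tol=1e-2):
    """True when the actual and theoretical display positions (3-tuples)
    differ by more than the tolerance."""
    return math.dist(actual, theoretical) > tol

def is_moving(prev_pose, curr_pose, threshold=1e-3):
    """Infer the motion state from the change between two consecutive
    pose samples (position component only, for brevity)."""
    return math.dist(prev_pose, curr_pose) > threshold
```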
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used wholly or partially, the implementation may take the form of a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)).
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention as defined by the appended claims.

Claims (10)

1. A method for realizing virtual reality is characterized by comprising the following steps:
step one, fixing the device to the head of a user with the fixing band, adjusting tightness with the upper elastic band, and aligning the user's eyes with the spectacle frame; acquiring an inverted, reduced real image of a target object with an objective lens, and re-inverting the acquired image with an erecting lens;
step two, using the virtual reality module to process, through a virtual reality program, the re-inverted real image via an eyepiece to form an upright, magnified virtual image, and constructing a virtual scene corresponding to the target object in a virtual reality interaction environment;
step three, accessing a mobile phone or computer through the data line interface to obtain a virtual video source, and synthesizing the upright, magnified virtual image with the obtained virtual video source corresponding to the target object;
step four, constructing a virtual world character model by utilizing three-dimensional software through an action control module; tracking the orientation and the posture of the human body trunk by using an inertial sensor;
step five, controlling the movement direction of the virtual world character according to the torso orientation acquired by the inertial sensor, and controlling the walking action of the virtual world character with the obtained attitude angle;
step six, obtaining the current orientation and attitude angle of the human torso by double integration of the gyroscope's angular acceleration over time;
step seven, iteratively minimizing the error function of the attitude angle with a gradient descent method, continuously correcting the error until the error value of the current iteration falls within a set range; during temporal integration, dividing the whole integration process into segments using periodically occurring reference points, where a reference point is a position at which the speed is close to zero, identified from the measurements of the acceleration sensor in the inertial sensor at a sudden turn;
step eight, as the body returns to the upright state, linearly decreasing the movement speed of the virtual world character until it is stationary, thereby realizing the control operation of virtual reality walking;
step nine, acquiring, through the content adjusting module, the control action detected in the touch area, and generating a virtual content control instruction corresponding to the control action;
step ten, acquiring the current position and posture information of the terminal device, and determining, according to the position and posture information, the theoretical display position in a virtual space of the virtual content displayed by the terminal device;
step eleven, when the theoretical display position does not match the actual display position in the virtual space, judging whether the terminal device is in a motion state according to the change information of the position and posture information;
step twelve, sending a virtual content control instruction to the terminal device, and detecting whether the virtual content displayed by the terminal device is offset, wherein the offset indicates that an error exists between the actual display position and the theoretical display position of the virtual content in the virtual space;
step thirteen, when the terminal device is in a motion state, determining the current motion direction of the terminal device, and acquiring the offset direction of the actual display position relative to the theoretical display position;
step fourteen, when the current motion direction matches the offset direction, adjusting the display position of the virtual content in the virtual space;
and step fifteen, matching the adjusted actual display position of the virtual content in the virtual space with the theoretical display position, thereby realizing the adjustment operation of the virtual content.
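Steps six and seven of claim 1 describe refining the integrated attitude angle by gradient descent on an error function until the error falls within a set range. The sketch below shows that refinement pattern on a generic differentiable error function; the actual error function, step size, and stopping range used in the patent are not specified, so the values here are assumptions.

```python
def gradient_descent(error, grad, x0, lr=0.1, tol=1e-6, max_iter=1000):
    """Iteratively correct the attitude estimate x by stepping against the
    gradient of the error function until error(x) is within the set range."""
    x = x0
    for _ in range(max_iter):
        if error(x) <= tol:
            break
        x -= lr * grad(x)
    return x

# Toy example: squared-error between the estimate and a reference angle.
true_angle = 0.5
theta = gradient_descent(lambda t: (t - true_angle) ** 2,
                         lambda t: 2 * (t - true_angle),
                         x0=0.0)
```

In practice the zero-velocity reference points of step seven would reset the integration between segments, bounding how far the accumulated error can grow before each correction.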
2. The method for realizing virtual reality according to claim 1, wherein in step five, the method for controlling the walking action of the virtual world character by using the obtained attitude angle comprises the following steps:
controlling the forward speed, backward speed, leftward translation speed, and rightward translation speed of the virtual world character in one-to-one correspondence with the forward, backward, leftward, and rightward inclination angles of the human torso.
3. The method according to claim 2, wherein the controlling the forward speed, the backward speed, the leftward translation speed, and the rightward translation speed of the virtual world character by using the forward inclination angle, the backward inclination angle, the leftward inclination angle, and the rightward inclination angle of the human body trunk in a one-to-one correspondence manner specifically comprises:
acquiring the forward, backward, left, and right inclination angles of the human body with the inertial sensor and transmitting them to the calculation control platform, which runs an application program to simulate the virtual reality environment; when an inclination angle exceeds its preset angle, triggering the corresponding forward, backward, leftward-translation, or rightward-translation action of the character in the simulated virtual reality environment, with the corresponding movement speed proportional to the magnitude of the inclination angle.
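Claim 3's control rule, sketched below: a movement is triggered only when a torso tilt exceeds its preset angle, and the resulting speed is proportional to the tilt magnitude. The threshold, proportionality constant, and function names are illustrative assumptions, not values from the patent.

```python
def tilt_to_speed(tilt_deg: float, threshold_deg: float = 10.0,
                  k: float = 0.05) -> float:
    """Map a torso tilt angle (degrees) to a character speed:
    zero below the preset threshold, proportional to the tilt above it."""
    if abs(tilt_deg) <= threshold_deg:
        return 0.0
    return k * abs(tilt_deg)

def character_velocity(forward, backward, left, right):
    """One speed per movement direction, each driven by its own tilt angle
    in one-to-one correspondence."""
    return {
        "forward":  tilt_to_speed(forward),
        "backward": tilt_to_speed(backward),
        "left":     tilt_to_speed(left),
        "right":    tilt_to_speed(right),
    }
```

The dead zone below the threshold keeps ordinary postural sway from triggering unintended movement.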
4. The method according to claim 1, wherein in step seven, during the iterative calculation of the error function of the attitude angle by the gradient descent method, when the periodically occurring reference points are used to divide the whole integration process into segments for temporal integration, the error within each segment is suppressed by median filtering.
5. The method for realizing virtual reality according to claim 1, wherein in the fourteenth step, the current motion direction includes motion components of respective coordinate axes of a spatial coordinate system, and the offset direction includes offset components of respective coordinate axes of the spatial coordinate system;
the method for adjusting the display position of the virtual content in the virtual space when the current motion direction is matched with the offset direction comprises the following steps:
when the direction of the motion component of the terminal device on a target coordinate axis is consistent with the direction of the offset component of the virtual content on the target coordinate axis, moving the display position of the virtual content in the virtual space along the direction opposite to the offset component of the virtual content on the target coordinate axis, wherein the target coordinate axis is any one of the coordinate axes.
6. The method for realizing virtual reality according to claim 1, wherein in step fourteen, the method for adjusting the display position of the virtual content in the virtual space further comprises:
acquiring an offset value of the virtual content in the virtual space according to the actual display position and the theoretical display position;
gradually adjusting the display position of the virtual content in the virtual space according to the offset value until the actual display position of the virtual content in the virtual space is matched with the theoretical display position.
7. The method for realizing virtual reality according to claim 6, wherein the step-by-step adjustment of the display position of the virtual content in the virtual space according to the offset value of the virtual content comprises:
acquiring the current movement speed of the terminal equipment;
determining a current adjustment amount of the display position of the virtual content in the virtual space according to the offset value and the current movement speed;
and adjusting the display position of the virtual content in the virtual space according to the current adjustment amount.
8. A head-mounted virtual reality device applying the method for realizing virtual reality according to any one of claims 1 to 7, wherein the head-mounted virtual reality device comprises:
a fixing band, an upper elastic band, a spectacle shell, a spectacle frame, spectacle lenses, a virtual reality module, an action control module, a content adjusting module, and a data line interface;
the rear end of the fixing band is connected with the elastic band; the front end of the fixing band is connected with the spectacle shell; the data line interface is arranged on the left side of the front surface of the spectacle shell; the spectacle frame is embedded in the spectacle shell; spectacle lenses are embedded on both sides within the frame; the action control module is arranged on the lower left within the frame; the content adjusting module is arranged on the lower right within the frame; and the data line interface is connected with the virtual reality module, the action control module, and the content adjusting module respectively through circuit lines;
the virtual reality module is used for carrying out image synthesis on the acquired virtual image and the virtual scene through a virtual reality program and displaying virtual reality;
the action control module is used for carrying out virtual reality walking control operation through the inertial sensor;
and the content adjusting module is used for adjusting the virtual content.
9. A computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface for implementing the method for realizing virtual reality according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform a method of implementing virtual reality as claimed in any one of claims 1 to 7.
CN202010809020.9A 2020-08-12 2020-08-12 Method for realizing virtual reality and head-mounted virtual reality equipment Withdrawn CN111915738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809020.9A CN111915738A (en) 2020-08-12 2020-08-12 Method for realizing virtual reality and head-mounted virtual reality equipment

Publications (1)

Publication Number Publication Date
CN111915738A true CN111915738A (en) 2020-11-10

Family

ID=73284656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809020.9A Withdrawn CN111915738A (en) 2020-08-12 2020-08-12 Method for realizing virtual reality and head-mounted virtual reality equipment

Country Status (1)

Country Link
CN (1) CN111915738A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562068A (en) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Human body posture generation method and device, electronic equipment and storage medium
CN112562068B (en) * 2020-12-24 2023-07-14 北京百度网讯科技有限公司 Human body posture generation method and device, electronic equipment and storage medium
CN112732081A (en) * 2020-12-31 2021-04-30 珠海金山网络游戏科技有限公司 Virtual object moving method and device
CN114385002A (en) * 2021-12-07 2022-04-22 达闼机器人有限公司 Intelligent equipment control method, device, server and storage medium

Similar Documents

Publication Publication Date Title
US11914147B2 (en) Image generation apparatus and image generation method using frequency lower than display frame rate
CN111915738A (en) Method for realizing virtual reality and head-mounted virtual reality equipment
US10649212B2 (en) Ground plane adjustment in a virtual reality environment
JP2022000640A (en) Information processing device, information processing method, and information processing program
CN104536579B (en) Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
CN106873767B (en) Operation control method and device for virtual reality application
CN108596854B (en) Image distortion correction method and device, computer readable medium, electronic device
JP5800602B2 (en) Information processing system, portable electronic device, program, and information storage medium
CN103180893A (en) Method and system for use in providing three dimensional user interface
EP3529686A1 (en) Method and apparatus for providing guidance in a virtual environment
US11914762B2 (en) Controller position tracking using inertial measurement units and machine learning
CN103480154A (en) Obstacle avoidance apparatus and obstacle avoidance method
CN110780742B (en) Eyeball tracking processing method and related device
WO2019087564A1 (en) Information processing device, information processing method, and program
CN112926521B (en) Eyeball tracking method and system based on light source on-off
CN105630152A (en) Device and method for processing visual data, and related computer program product
CN110688002B (en) Virtual content adjusting method, device, terminal equipment and storage medium
CN106802716B (en) Data processing method of virtual reality terminal and virtual reality terminal
CN113658249A (en) Rendering method, device and equipment of virtual reality scene and storage medium
WO2022146858A1 (en) Controller position tracking using inertial measurement units and machine learning
CN114494658A (en) Special effect display method, device, equipment, storage medium and program product
WO2017156741A1 (en) Head motion compensation method and associated device
KR102481528B1 (en) Method for broadcasting service of virtual reality game, apparatus and system for executing the method
CN115793261B (en) Visual compensation method, system and equipment for VR glasses
KR102423869B1 (en) Method for broadcasting service of virtual reality game, apparatus and system for executing the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201110