CN107783652B - Method, system and device for realizing virtual reality - Google Patents


Info

Publication number
CN107783652B
Authority
CN
China
Prior art keywords
target object
action
virtual reality
depth
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710964798.5A
Other languages
Chinese (zh)
Other versions
CN107783652A (en)
Inventor
刘杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GCI Science and Technology Co Ltd
Original Assignee
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GCI Science and Technology Co Ltd filed Critical GCI Science and Technology Co Ltd
Priority to CN201710964798.5A
Publication of CN107783652A
Application granted
Publication of CN107783652B
Legal status: Active (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; context of image processing
    • G06T2207/30196: Human being; person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method, a system and an apparatus for implementing virtual reality, together with a computer storage medium and a computer device. The method comprises the following steps: acquiring depth position information of each point on the surface of a target object, relative to a preset reference point, over several adjacent time slices; determining the position of a preset part of the target object according to the depth position information; determining action information of the target object according to the variation of that position across the time slices; and adjusting the operating parameters of the virtual reality implementation system according to the action information and outputting the adjusted virtual reality video stream. With this scheme, virtual reality can be realized without additional motion-sensing peripherals, reducing the complexity of virtual reality operation.

Description

Method, system and device for realizing virtual reality
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method, a system and a device for realizing virtual reality.
Background
Virtual Reality (VR) technology is a computer simulation technique that creates an explorable virtual world: it generates a virtual information environment in a multidimensional information space that interacts with the real environment and immerses the user as if personally present.
Traditional implementations, however, support virtual reality through peripherals such as somatosensory handles and somatosensory headgear fitted with sensors including gyroscopes, gravity sensors and pressure sensors. The user must wear these peripherals before performing actions that match the virtual reality video displayed in the virtual reality glasses, so such implementations suffer from high operational complexity.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, a system and an apparatus for implementing virtual reality that are simple to operate.
A virtual reality implementation method comprises the following steps:
acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point;
determining the position of a preset part of the target object according to the depth position information;
determining action information of the target object according to the variation of the position in different time slices;
and adjusting the operation parameters of the virtual reality realization system according to the action information, and outputting the adjusted virtual reality video stream.
The step of obtaining the depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point is executed by a depth camera.
The depth camera may receive a data acquisition instruction sent by a control terminal and, according to that instruction, acquire the depth position information of each point on the target object's surface relative to itself over several adjacent time slices, so that the user can freely control the virtual reality session. Alternatively, if the depth camera detects the target object entering its field of view, it acquires the depth position information automatically, which makes virtual reality operation more convenient.
After the depth position information of each surface point relative to the preset reference point has been obtained over several adjacent time slices, the geometric center of the target object is determined and the angle of the depth camera is adjusted according to that center, so that the geometric center remains within the camera's field of view. This increases the user's freedom of movement and brings the virtual reality experience closer to reality.
Determining the position of the preset part of the target object according to the depth position information may comprise generating a 3D image of the preset part in a 3D depth coordinate system from the depth position information and reading the position of the preset part from that 3D image, which locates the part more accurately.
The action information comprises the action type and an action degree parameter of the target object, so that the target object's action is identified more precisely.

Determining the action information of the target object according to the variation of the position across time slices may comprise: comparing the variation with a variation reference value for a preset action type of the corresponding part; if the variation is greater than or equal to the reference value, determining the action type of the target object to be that preset action type; and determining the action degree parameter of the target object from the variation of the position across time slices. This improves the accuracy of action recognition.
A system for implementing virtual reality, comprising:
the information acquisition module is used for acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point;
the data processing module is used for determining the position of a preset part of the target object according to the depth position information;
the action recognition module is used for determining action information of the target object according to the variation of the position in different time slices;
and the parameter adjusting module is used for adjusting the operation parameters of the virtual reality realizing system according to the action information and outputting the adjusted virtual reality video stream.
An apparatus for implementing virtual reality, comprising:
a depth camera and an intelligent gateway;
the depth camera acquires depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to the depth camera and sends the depth position information to the intelligent gateway;
and the intelligent gateway determines the position of a preset part of the target object according to the depth position information, determines the action information of the target object according to the variation of the position in different time slices, adjusts the operation parameters of the virtual reality realization system according to the action information, and outputs the adjusted virtual reality video stream.
The virtual reality implementation apparatus may further comprise virtual reality glasses for receiving and displaying the output adjusted virtual reality video stream, making the adjusted stream convenient to view.

The apparatus may further comprise a control terminal for sending data acquisition instructions to the depth camera; after receiving such an instruction, the depth camera acquires the depth position information of each point on the target object's surface relative to itself over several adjacent time slices, improving the controllability of the virtual reality operation.
A computer storage medium stores a computer program which, when executed by a processor, implements the virtual reality implementation method described above.

A computer device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the virtual reality implementation method when executing the program.
With the above method, system, apparatus, computer storage medium and device, the position of a preset part of the target object is determined from the acquired depth position information, the action information of the target object is determined from the variation of that position across time slices, and the virtual reality parameters are adjusted according to the action information. No additional motion-sensing peripheral is needed in the virtual reality implementation process, which reduces the complexity of virtual reality operation.
Drawings
FIG. 1 is a flowchart of a method for implementing virtual reality according to an embodiment;
FIG. 2 is a diagram illustrating an embodiment of an intelligent gateway recognizing a user action;
FIG. 3 is a schematic structural diagram of a system for implementing virtual reality according to an embodiment;
FIG. 4 is a schematic diagram of an implementation apparatus of virtual reality according to an embodiment.
Detailed Description
The technical solution of the present invention is described in detail below with reference to specific embodiments and the accompanying drawings.
As shown in fig. 1, the present invention provides a method for implementing virtual reality, which includes the following steps:
s10, acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point;
This step may be performed by a depth camera, such as an infrared-assisted-texture depth camera, a time-of-flight (TOF) camera, or another type of depth camera. The depth camera can receive a data acquisition instruction sent by a control terminal, or it can begin acquiring the depth position information of each point on the target object's surface, relative to itself, over several adjacent time slices once it detects the target object entering its field of view. The control terminal can be an intelligent terminal such as a mobile phone or a tablet computer.
In a practical application, the target object may be a user using a virtual reality device. For convenience of description, the following description will be given taking the target object as a user as an example.
In one embodiment, a user sends a data acquisition instruction to the TOF camera via the control terminal, causing the TOF camera to begin acquiring the user's depth position information relative to itself over adjacent time slices within its field of view. Triggering acquisition from an external control terminal gives the user freer control over the virtual reality session.
In another embodiment, the depth camera is an infrared-assisted-texture binocular depth camera that automatically and continuously acquires depth position information of the user's body relative to itself in successive time slices once it detects that a user wearing virtual reality glasses is within its field of view. Automatic detection and acquisition make the virtual reality session more convenient.
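The two trigger modes can be sketched as follows. The `camera` object and its method names (`target_in_view`, `read_depth_frame`) are hypothetical, since the embodiments do not specify a camera API; this is a minimal sketch, not the patented implementation.

```python
import time

def acquire_time_slices(camera, num_slices, interval_s=0.1,
                        command_received=False):
    """Collect depth frames over several adjacent time slices once either
    trigger fires: an explicit instruction from the control terminal, or
    the camera detecting the target inside its field of view."""
    while not (command_received or camera.target_in_view()):
        time.sleep(interval_s)          # idle until a trigger arrives
    frames = []
    for _ in range(num_slices):
        frames.append(camera.read_depth_frame())  # per-pixel depth map
        time.sleep(interval_s)          # one frame per time slice
    return frames
```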
After the depth position information of each surface point relative to the preset reference point has been obtained over several adjacent time slices, the geometric center of the target object can be determined and the angle of the depth camera adjusted according to it, so that the camera's field of view contains the geometric center.

In one embodiment, the depth camera acquires the user's depth position information relative to itself over several adjacent time slices within its field of view, determines the user's geometric center, and adjusts its acquisition angle as the geometric center moves so that the center stays within the field of view. This increases the freedom of the virtual reality session and brings the experience closer to reality.
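As a sketch of this tracking step, the centroid of the acquired surface points can serve as the geometric center and be converted into pan/tilt angles for the camera. The coordinate convention (x right, y down, z along the optical axis) is an assumption; the embodiments do not fix one.

```python
import numpy as np

def geometric_center(points):
    """Centroid of the target's surface points, given as an (N, 3) array
    in the camera's 3D depth coordinate system."""
    return points.mean(axis=0)

def pan_tilt_to_center(center):
    """Pan/tilt angles (degrees) that would put the centroid on the
    camera's optical axis, assuming x right, y down, z forward."""
    x, y, z = center
    pan = np.degrees(np.arctan2(x, z))    # rotate left/right
    tilt = np.degrees(np.arctan2(y, z))   # rotate up/down
    return pan, tilt
```

The pan/tilt output would then drive whatever mount the camera sits on; the embodiments only require that the adjustment keep the geometric center in view.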
After acquiring the depth position information of the target object, the depth camera outputs the depth information, and the process proceeds to step S20.
S20, determining the position of the preset part of the target object according to the depth position information;
This step can be performed by an intelligent gateway connected to the depth camera; the connection may use Ethernet, USB or Wi-Fi, among others, making communication convenient for virtual reality. The intelligent gateway receives the depth position information, generates a 3D image of the preset part of the target object in a 3D depth coordinate system from that information, and obtains the position of the preset part from the 3D image. The preset part of the target object can be the user's trunk, arms, lower limbs and so on.

In one embodiment, the intelligent gateway receives the collected depth position information of the user's body relative to the depth camera, generates a 3D image of the user's trunk, arms and lower limbs in a 3D depth coordinate system from that information, and obtains the positions of the trunk, arms and lower limbs from the 3D image. Reconstructing a 3D image of the preset part in the 3D depth coordinate system locates the part more accurately.
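A minimal sketch of the reconstruction into the 3D depth coordinate system, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy); the embodiments do not state how the 3D image is formed, so this is only one plausible reading.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W array of distances in metres)
    into an (H*W, 3) array of 3D points in the camera's depth
    coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth)).reshape(-1, 3)
```

Points belonging to the trunk, arms or lower limbs could then be segmented from this cloud and their positions tracked from one time slice to the next.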
S30, determining the action information of the target object according to the variation of the position in different time slices;
This step can be executed by the intelligent gateway, which determines the action information of the target object from the variation of the position of the preset part across time slices. The action information can be the action type and an action degree parameter of the target object. The positional variation of a preset part can be compared with a variation reference value of a preset action type for the same part: when the variation is greater than or equal to the reference value, the action type of that part is determined to be the preset action type, and the action degree parameter is derived from the magnitude of the variation. Using reference values for preset action types improves the accuracy of action recognition and enriches the implementation details of virtual reality. Action types can include boxing, kicking, hugging and the like, and the action degree parameters can be the distance of the action and the time taken to complete it.
Fig. 2 is a schematic diagram illustrating the intelligent gateway recognizing a user action.
Specifically, while the user plays a virtual reality game within the depth camera's field of view, the camera collects the depth position information of each point on the user's surface relative to itself and returns it to the intelligent gateway. The gateway processes this information to determine the positions of the user's body parts (trunk, arms, lower limbs and so on), computes the positional variation of each part across time slices, and compares that variation with preset change parameters to determine the user's action, which may be, for example, a punch, a kick or a hug.

In one embodiment, the intelligent gateway compares the positional variation of the user's lower limbs in the 3D depth coordinate system across time slices with the kicking variation reference value for the lower limbs, finds that the variation exceeds the reference value, determines that the user's action is a kick, and further derives that the kick took 3 s to complete over a distance of 50 cm.

In another embodiment, the depth camera collects the depth position information of the user's right hand relative to itself in different time slices, comprising the right-hand depth position information of time slice 1 and of time slice 2, and returns the acquired data to the intelligent gateway over its connection. From the change of the right hand's position in the 3D coordinate system across these time slices, the gateway judges that the user has thrown a right-hand punch and identifies its action degree parameters: the punch took 2 s to complete over a distance of 20 cm.
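A minimal sketch of this threshold comparison follows. The reference values and part names are assumptions for illustration; the embodiments give thresholds only by example.

```python
import numpy as np

# Hypothetical variation reference values per (part, action type), in metres.
REFERENCE = {
    ("lower_limb", "kick"): 0.40,
    ("right_hand", "punch"): 0.15,
}

def recognize_action(part, pos_start, pos_end, t_start, t_end):
    """Compare a preset part's positional variation across time slices
    with the reference value for that part's preset action type; on a
    match, return the action type plus degree parameters."""
    delta = np.asarray(pos_end) - np.asarray(pos_start)
    distance = float(np.linalg.norm(delta))
    for (ref_part, action), reference in REFERENCE.items():
        if ref_part == part and distance >= reference:
            return {
                "type": action,
                "distance_m": distance,            # action distance
                "duration_s": t_end - t_start,     # completion time
                "direction": delta / distance,     # unit action direction
            }
    return None   # variation below every reference value: no action
```

For example, a lower-limb displacement of 0.5 m between two slices 3 s apart would be reported as a kick with a distance of 50 cm and a completion time of 3 s, matching the embodiment above.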
S40, adjusting the operation parameters of the virtual reality implementation system according to the action information, and outputting the adjusted virtual reality video stream.
This step can be executed by the intelligent gateway, which adjusts the operating parameters of the virtual reality implementation system according to the acquired action information of the target object and outputs the adjusted virtual reality video stream. In one embodiment, the determined action is a kick of 50 cm due east, and a virtual vase stands 50 cm from the user in that direction; the intelligent gateway therefore adjusts the operating parameters according to the kick, realizing a scene in which the user's kick breaks the vase, and outputs the adjusted virtual reality video stream.
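A sketch of how such an adjustment might be applied, assuming a hypothetical list of scene objects with `pos`, `breakable` and `broken` attributes; the patent does not define the scene representation.

```python
import numpy as np

def apply_action_to_scene(scene_objects, action, user_pos, tolerance=0.1):
    """If a breakable object sits at the action's distance along its
    direction (e.g. the vase in the embodiment), mark it broken so the
    adjusted video stream shows the result."""
    if action is None:
        return scene_objects
    impact = np.asarray(user_pos) + action["direction"] * action["distance_m"]
    for obj in scene_objects:
        if obj.breakable and np.linalg.norm(obj.pos - impact) < tolerance:
            obj.broken = True   # reflected in the re-rendered video stream
    return scene_objects
```

With the kick from the embodiment above, the impact point lands 50 cm due east of the user, so a vase placed there would be marked broken before the video stream is re-rendered.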
After outputting the adjusted virtual reality video stream, the process may return to step S10 again.
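Tying the steps together, the whole session can be expressed as a loop over S10 to S40; `gateway` and `vr_system` and all of their methods are hypothetical stand-ins, since the patent assigns these roles to the intelligent gateway and the virtual reality implementation system without fixing an interface.

```python
def run_vr_session(camera, gateway, vr_system, num_slices=2):
    """End-to-end loop over steps S10 to S40, reusing the acquisition
    helper sketched earlier."""
    while vr_system.running:
        frames = acquire_time_slices(camera, num_slices)   # S10: depth data
        parts = gateway.locate_preset_parts(frames)        # S20: 3D positions
        action = gateway.recognize_user_action(parts)      # S30: action info
        if action is not None:
            vr_system.adjust_parameters(action)            # S40: adjust
            vr_system.output_video_stream()                #      and render
    # Each iteration of the loop returns the process to step S10.
```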
With this virtual reality implementation method, the position of a preset part of the target object is determined from the acquired depth position information, the action information is determined from the variation of that position across time slices, and the virtual reality parameters are adjusted according to the action information. The user needs no additional somatosensory peripheral during virtual reality, which reduces the complexity of operation and brings the experience closer to real-life scenes. Moreover, the method can interpret the user's action type and action degree parameters without wearable somatosensory peripherals, which helps enrich the implementation details of virtual reality.
As shown in fig. 3, the present invention further provides a system for implementing virtual reality, which may include:
the information acquisition module 10 is used for acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point;
the functions Of the module can be realized by a depth camera, and the depth camera can be an infrared auxiliary texture depth camera, a Time Of Flight (TOF) camera or other depth cameras. . The depth camera can receive a data acquisition instruction sent by the control terminal, and can also acquire depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to the depth camera when the depth camera detects that the target object enters the visual field range of the depth camera. The control terminal can be an intelligent control terminal such as a mobile phone and a tablet personal computer.
In a practical application, the target object may be a user using a virtual reality device. For convenience of description, the following description will be given taking the target object as a user as an example.
In one embodiment, a user sends a data acquisition instruction to the TOF camera via the control terminal, causing the TOF camera to begin acquiring the user's depth position information relative to itself over adjacent time slices within its field of view. Triggering acquisition from an external control terminal gives the user freer control over the virtual reality session.
In another embodiment, the depth camera is an infrared-assisted-texture binocular depth camera that automatically and continuously acquires depth position information of the user's body relative to itself in successive time slices once it detects that a user wearing virtual reality glasses is within its field of view. Automatic detection and acquisition make the virtual reality session more convenient.
After the depth position information of each surface point relative to the preset reference point has been obtained over several adjacent time slices, the geometric center of the target object can be determined and the angle of the depth camera adjusted according to it, so that the camera's field of view contains the geometric center.

In one embodiment, the depth camera acquires the user's depth position information relative to itself over several adjacent time slices within its field of view, determines the user's geometric center, and adjusts its acquisition angle as the geometric center moves so that the center stays within the field of view. This increases the freedom of the virtual reality session and brings the experience closer to reality.
After acquiring the depth position information of the target object, the information acquisition module 10 outputs the depth information to the data processing module 20.
The data processing module 20 is configured to determine a position of a preset portion of the target object according to the depth position information;
The functions of this module can be realized by a data processing unit on the intelligent gateway, which is connected to the depth camera via Ethernet, USB or Wi-Fi, among others, making communication convenient for virtual reality. The data processing unit receives the depth position information, generates a 3D image of the preset part of the target object in a 3D depth coordinate system from that information, and obtains the position of the preset part from the 3D image. The preset part of the target object can be the user's trunk, arms, lower limbs and so on.

In one embodiment, the processing unit on the intelligent gateway receives the collected depth position information of the user's body relative to the depth camera, generates a 3D image of the user's trunk, arms and lower limbs in a 3D depth coordinate system from that information, and obtains the positions of the trunk, arms and lower limbs from the 3D image. Reconstructing a 3D image of the preset part in the 3D depth coordinate system locates the part more accurately.
The action recognition module 30 is configured to determine action information of the target object according to the variation of the position in different time slices;
The functions of this module can be realized by an action recognition unit on the intelligent gateway, which determines the action information of the target object from the variation of the position of the preset part across time slices. The action information can be the action type and an action degree parameter of the target object. The positional variation of a preset part can be compared with a variation reference value of a preset action type for the same part: when the variation is greater than or equal to the reference value, the action type of that part is determined to be the preset action type, and the action degree parameter is derived from the magnitude of the variation. Using reference values for preset action types improves the accuracy of action recognition and enriches the implementation details of virtual reality. Action types can include boxing, kicking, hugging and the like, and the action degree parameters can be the distance of the action and the time taken to complete it.

Specifically, while the user plays a virtual reality game within the depth camera's field of view, the camera collects the depth position information of each point on the user's surface relative to itself and returns it to the intelligent gateway. The gateway processes this information to determine the positions of the user's body parts (trunk, arms, lower limbs and so on), computes the positional variation of each part across time slices, and compares that variation with preset change parameters to determine the user's action, which may be, for example, a punch, a kick or a hug.

In one embodiment, the action recognition unit on the intelligent gateway compares the positional variation of the user's lower limbs in the 3D depth coordinate system across time slices with the kicking variation reference value for the lower limbs, finds that the variation exceeds the reference value, determines that the user's action is a kick, and derives that the kick took 3 s to complete over a distance of 50 cm.
The parameter adjusting module 40 is configured to adjust the operation parameters of the virtual reality implementation system according to the action information and output the adjusted virtual reality video stream.
The functions of this module can be executed by a parameter adjusting unit on the intelligent gateway, which adjusts the operating parameters of the virtual reality implementation system according to the acquired action information of the target object and outputs the adjusted virtual reality video stream. In one embodiment, the determined action is a kick of 50 cm due east, and a virtual vase stands 50 cm from the user in that direction; the parameter adjusting unit therefore adjusts the operating parameters according to the kick, realizing a scene in which the user's kick breaks the vase, and outputs the adjusted virtual reality video stream.
The system needs no additional somatosensory peripheral in the virtual reality implementation process, reducing the complexity of operation and bringing the experience closer to real-life scenes; moreover, it can interpret the user's action type and action degree parameters without wearable somatosensory peripherals, which helps enrich the implementation details of virtual reality.
The virtual reality implementation system corresponds to the virtual reality implementation method above; the technical features and beneficial effects described in the method embodiments apply equally to the system embodiments and are not repeated here.
As shown in fig. 4, the present invention further provides a device for implementing virtual reality, which may include:
a depth camera 101 and an intelligent gateway 201;
the depth camera 101 acquires depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to the depth camera, and sends the depth position information to the intelligent gateway 201;
the intelligent gateway 201 determines the position of the preset part of the target object according to the depth position information, determines the action information of the target object according to the variation of the position in different time slices, adjusts the operation parameters of the virtual reality realization system according to the action information, and outputs the adjusted virtual reality video stream.
The left side of fig. 4 shows the field of view of the depth camera 101, within which the camera collects the depth position information of each point on the target object's surface, relative to itself, over several adjacent time slices.
The depth camera 101 may be an infrared-assisted-texture depth camera, a time-of-flight (TOF) camera, or another type of depth camera. It can receive a data acquisition instruction sent by a control terminal, or it can begin acquiring the depth position information of each point on the target object's surface relative to itself over several adjacent time slices once it detects the target object entering its field of view. The control terminal can be an intelligent terminal such as a mobile phone or a tablet computer.
In a practical application, the target object may be a user using a virtual reality device. For convenience of description, the following description will be given taking the target object as a user as an example.
In one embodiment, a user sends a data acquisition instruction to the TOF camera via the control terminal, causing the TOF camera to begin acquiring the user's depth position information relative to itself over adjacent time slices within its field of view. Triggering acquisition from an external control terminal gives the user freer control over the virtual reality session.
In another embodiment, the depth camera is an infrared-assisted-texture binocular depth camera that automatically and continuously acquires depth position information of the user's body relative to itself in successive time slices once it detects that a user wearing virtual reality glasses is within its field of view. Automatic detection and acquisition make the virtual reality session more convenient.
After the depth position information of each surface point relative to the preset reference point has been obtained over several adjacent time slices, the geometric center of the target object can be determined and the angle of the depth camera 101 adjusted according to it, so that the geometric center is included in the field of view of the depth camera 101.

In one embodiment, the depth camera 101 acquires the user's depth position information relative to itself over several adjacent time slices within its field of view, determines the user's geometric center, and adjusts its acquisition angle as the geometric center moves so that the center stays within the field of view. This increases the freedom of the virtual reality session and brings the experience closer to reality.
After acquiring the depth position information of the target object, the depth camera 101 returns the depth information to the connected intelligent gateway 201.
On the right of fig. 4 is the intelligent gateway 201 connected to the depth camera 101; the connection may use Ethernet, USB or Wi-Fi, among others, making communication convenient for virtual reality. The data processing unit 202 on the intelligent gateway 201 receives the depth position information, generates a 3D image of the preset part of the target object in a 3D depth coordinate system from that information, and obtains the position of the preset part from the 3D image. The preset part of the target object can be the user's trunk, arms, lower limbs and so on.

In one embodiment, the data processing unit 202 on the intelligent gateway 201 receives the collected depth position information of the user's body relative to the depth camera, generates a 3D image of the user's trunk, arms and lower limbs in a 3D depth coordinate system from that information, and obtains the positions of the trunk, arms and lower limbs from the 3D image. Reconstructing a 3D image of the preset part in the 3D depth coordinate system locates the part more accurately.

The action recognition unit 203 on the intelligent gateway 201 receives the position information sent by the data processing unit 202 and determines the action information of the target object from the variation of the position across time slices. The action information can be the action type and an action degree parameter of the target object. The positional variation of a preset part can be compared with a variation reference value of a preset action type for the same part: when the variation is greater than or equal to the reference value, the action type of that part is determined to be the preset action type, and the action degree parameter is derived from the magnitude of the variation. Using reference values for preset action types improves the accuracy of action recognition and enriches the implementation details of virtual reality. Action types can include boxing, kicking, hugging and the like, and the action degree parameters can be the distance of the action and the time taken to complete it.
In one embodiment, the action recognition unit 203 on the intelligent gateway 201 compares the positional variation of the user's lower limbs in the 3D depth coordinate system across time slices with the kicking variation reference value for the lower limbs, finds that the variation exceeds the reference value, determines that the user's action is a kick, and derives that the kick took 3 s to complete over a distance of 50 cm.

The parameter adjusting unit 204 on the intelligent gateway 201 adjusts the operating parameters of the virtual reality implementation system according to the action information of the target object obtained by the action recognition unit 203 and outputs the adjusted virtual reality video stream. In one embodiment, the action recognition unit 203 has determined that the user's action is a kick of 50 cm due east, and a virtual vase stands 50 cm from the user in that direction; the parameter adjusting unit 204 therefore adjusts the operating parameters according to the kick, realizing a scene in which the user's kick breaks the vase, and outputs the adjusted virtual reality video stream.

In particular, the data processing unit 202, action recognition unit 203 and parameter adjusting unit 204 on the intelligent gateway 201 can be implemented as software running on the gateway, avoiding the cost of additional hardware components.
Further, the apparatus for implementing virtual reality may include:
a mobile phone 301 and virtual reality glasses 401;
the mobile phone 301 is used for establishing connection with the depth camera 101 and sending a data acquisition instruction to the depth camera 101, so that the controllability of virtual reality operation is improved. In one embodiment, the handset 301 establishes a connection with the depth camera 101 and sends data acquisition instructions to the depth camera 101. After receiving the data acquisition instruction, the depth camera 101 obtains depth position information of each point on the surface of the target object relative to the depth camera in adjacent time slices.
The virtual reality glasses 401 receive the adjusted virtual reality video stream output by the intelligent gateway 201 and display it, making the adjusted stream convenient to view.
With this virtual reality implementation apparatus, no additional somatosensory peripheral is needed in the virtual reality implementation process, reducing the complexity of operation and bringing the experience closer to real-life scenes; moreover, the apparatus can interpret the user's action type and action degree parameters without wearable somatosensory peripherals, which helps enrich the implementation details of virtual reality.
The present invention also provides a computer-readable storage medium on which a computer program is stored, the program implementing the virtual reality method of any of the embodiments above when executed by a processor. The method implemented by the stored program is the same as the virtual reality implementation method in the above embodiments and is not repeated here.
More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for instance by optically scanning the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
The present invention also provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the method for implementing virtual reality in any of the above embodiments. The method executed by the processor in the computer device is the same as the virtual reality implementation method in the above embodiments, and details are not repeated here.
The above embodiments express only several implementations of the present invention, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (10)

1. A virtual reality implementation method is characterized by comprising the following steps:
acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point; wherein the preset reference point comprises a depth camera;
determining a geometric center of the target object;
according to the geometric center, adjusting the angle of the depth camera to enable the geometric center of the target object to be included in the visual field range of the depth camera;
determining the position of a preset part of the target object according to the depth position information; wherein the preset parts comprise the trunk, the arms and the lower limbs of the target object;
determining action information of the target object according to the variation of the position in different time slices; the action information comprises an action type and action degree parameters of the target object, wherein the action type comprises boxing, kicking and hugging, and the action degree parameters comprise an action distance, action completion time and an action direction;
and when it is determined that an object to be broken appears at the action distance from the target object in the action direction, adjusting the operation parameters of the virtual reality implementation system according to the action information, and outputting the adjusted virtual reality video stream.
2. The method for implementing virtual reality according to claim 1, wherein the step of obtaining depth position information of each point on the surface of the target object relative to a preset reference point in a plurality of adjacent time slices is performed by a depth camera;
the method comprises the following steps of obtaining depth position information of each point on the surface of a target object in a plurality of adjacent time slices relative to a preset reference point, wherein the step comprises the following steps:
receiving a data acquisition instruction sent by a control terminal through a depth camera, and acquiring depth position information of each point on the surface of a target object in a plurality of adjacent time slices relative to the depth camera according to the data acquisition instruction;
or
and if the target object is detected to enter the visual field range of the depth camera, acquiring depth position information of each point on the surface of the target object relative to the depth camera in a plurality of adjacent time slices.
3. The method for implementing virtual reality according to claim 1, wherein the step of determining the position of the preset portion of the target object according to the depth position information includes:
generating a 3D image of a preset part of the target object in a 3D depth coordinate system according to the depth position information;
and acquiring the position of the preset part of the target object according to the 3D image.
4. The method for implementing virtual reality according to any one of claims 1 to 3, wherein the action information includes an action type and an action degree parameter of the target object.
5. The method for implementing virtual reality according to claim 4, wherein the step of determining the motion information of the target object according to the variation of the position in different time slices comprises:
comparing the variable quantity with a variable quantity reference value of a preset action type of a corresponding part;
if the variation is larger than or equal to the variation reference value, determining the action type of the target object as a preset action type;
and determining the action degree parameter of the target object according to the variation of the position in different time slices.
6. A system for implementing virtual reality, comprising:
the information acquisition module is used for acquiring depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to a preset reference point; wherein the preset reference point comprises a depth camera; also for determining a geometric center of the target object; according to the geometric center, adjusting the angle of the depth camera to enable the geometric center of the target object to be included in the visual field range of the depth camera;
the data processing module is used for determining the position of a preset part of the target object according to the depth position information; wherein the preset parts comprise the trunk, the arms and the lower limbs of the target object;
the action recognition module is used for determining action information of the target object according to the variation of the position in different time slices; the action information comprises an action type and action degree parameters of the target object, wherein the action type comprises boxing, kicking and hugging, and the action degree parameters comprise an action distance, action completion time and an action direction;
and the parameter adjusting module is used for, when an object to be broken appears at the action distance from the target object in the action direction, adjusting the operation parameters of the virtual reality implementation system according to the action information and outputting the adjusted virtual reality video stream.
7. The system according to claim 6, wherein the data processing module is further configured to receive the depth position information, generate a 3D image of the preset portion of the target object in a 3D depth coordinate system according to the depth position information, and obtain a position of the preset portion of the target object according to the 3D image.
8. An apparatus for implementing virtual reality, comprising:
a depth camera and an intelligent gateway;
the depth camera acquires depth position information of each point on the surface of the target object in a plurality of adjacent time slices relative to the depth camera, determines the geometric center of the target object, adjusts its angle according to the geometric center so that its field of view includes the geometric center of the target object, and sends the depth position information to the intelligent gateway;
the intelligent gateway determines the position of a preset part of the target object according to the depth position information, the preset part comprising the trunk, arms and lower limbs of the target object; determines the action information of the target object according to the variation of the position in different time slices; and, when an object to be broken appears at the action distance from the target object in the action direction, adjusts the operation parameters of the virtual reality implementation system according to the action information and outputs the adjusted virtual reality video stream; the action information comprises an action type and action degree parameters of the target object, the action type comprising boxing, kicking and hugging, and the action degree parameters comprising an action distance, an action completion time and an action direction.
9. The apparatus for implementing virtual reality according to claim 8, further comprising:
virtual reality glasses;
the virtual reality glasses are used for receiving the output adjusted virtual reality video stream and displaying the output adjusted virtual reality video stream.
10. The apparatus for implementing virtual reality according to claim 8, further comprising:
a control terminal;
the control terminal is used for sending a data acquisition instruction to the depth camera;
and after receiving the data acquisition instruction, the depth camera acquires depth position information of each point on the surface of the target object relative to the depth camera in a plurality of adjacent time slices.
CN201710964798.5A 2017-10-17 2017-10-17 Method, system and device for realizing virtual reality Active CN107783652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710964798.5A CN107783652B (en) 2017-10-17 2017-10-17 Method, system and device for realizing virtual reality


Publications (2)

Publication Number Publication Date
CN107783652A CN107783652A (en) 2018-03-09
CN107783652B (granted) 2020-11-13

Family

ID=61434535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710964798.5A Active CN107783652B (en) 2017-10-17 2017-10-17 Method, system and device for realizing virtual reality

Country Status (1)

Country Link
CN (1) CN107783652B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625251A (en) * 2022-03-11 2022-06-14 平安普惠企业管理有限公司 Interaction method and device based on VR, computer equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150040174A1 (en) * 2013-08-01 2015-02-05 Joiz Ip Ag System and method for synchronizing media platform devices

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104460962A (en) * 2013-09-18 2015-03-25 天津联合动力信息技术有限公司 4D somatosensory interaction system based on game engine
CN103529944A (en) * 2013-10-17 2014-01-22 合肥金诺数码科技股份有限公司 Human body movement identification method based on Kinect
CN106295479A (en) * 2015-06-05 2017-01-04 上海戏剧学院 Based on body-sensing technology action recognition editing system
CN106601062A (en) * 2016-11-22 2017-04-26 山东科技大学 Interactive method for simulating mine disaster escape training
CN107133984A (en) * 2017-03-24 2017-09-05 深圳奥比中光科技有限公司 The scaling method and system of depth camera and main equipment

Also Published As

Publication number Publication date
CN107783652A (en) 2018-03-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant