WO2017092432A1 - Virtual reality interaction method, apparatus and system - Google Patents

Virtual reality interaction method, apparatus and system

Info

Publication number
WO2017092432A1
WO2017092432A1 (PCT/CN2016/096983)
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
calibration
feature information
information
virtual reality
Prior art date
Application number
PCT/CN2016/096983
Other languages
English (en)
French (fr)
Inventor
张超
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Publication of WO2017092432A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A virtual reality interaction method, apparatus, and system. The method includes: capturing at least two first infrared images of a calibration object with a first infrared camera, and capturing at least two second infrared images of the calibration object with a second infrared camera (S11); extracting feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image (S12); and determining three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images, and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point (S13). Performing virtual reality interaction from infrared images of the calibration object captured simultaneously by two infrared cameras solves the problem that virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to realize in some scenarios, so that another virtual reality technique was urgently needed.

Description

Virtual reality interaction method, apparatus and system
This application claims priority to Chinese Patent Application No. 201510870209.8, filed with the Chinese Patent Office on December 1, 2015 and entitled "Virtual reality interaction method, apparatus and system", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of virtual reality technologies, and in particular to a virtual reality interaction method, apparatus, and system.
BACKGROUND
As society develops, advances across many industries have contributed greatly to the quality of life. Among them, the emergence of virtual reality (VR) technology has greatly enriched people's lives. Virtual reality technology uses a computer to generate a simulated environment and combines it with captured image information to provide interactive three-dimensional views and behaviors, immersing the user in the simulated environment and enabling interaction between the user and the virtual reality environment. Within virtual reality technology, one important aspect affecting the quality of interaction is the technique used to capture image information.
The prior art generally captures image information with a three-dimensional depth camera and computes the distance to a target object, such as a captured object or person, using the basic principle of stereoscopic ranging, in order to render an interactive three-dimensional view. The basic principle of stereoscopic ranging is to observe the same object from different viewpoints, obtain the perceived images at the different viewing angles, and compute the distance to the target object from the pixel disparity between the images by triangulation.
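For reference, for a rectified pair of identical cameras the triangulation step described above reduces to the standard depth-from-disparity relation (textbook background; the application itself states only the principle):

    Z = \frac{f \cdot B}{d}

where $Z$ is the distance from the target object to the camera baseline, $f$ the focal length, $B$ the separation of the two viewpoints, and $d$ the pixel disparity between the two perceived images of the same point.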
Although the prior art can simulate three-dimensional views and behaviors from the image information captured by a three-dimensional depth camera and thereby achieve virtual reality interaction, the three-dimensional depth camera it employs makes camera-based virtual reality technology difficult to realize in some scenarios, owing to the camera's price, technical maturity, ease of use, and similar factors; another virtual reality technique is therefore urgently needed.
SUMMARY
Embodiments of the present application provide a virtual reality interaction method, apparatus, and system, to solve the problem in the prior art that capturing images with a three-dimensional depth camera makes camera-based virtual reality technology difficult to realize in some scenarios, so that another virtual reality technique is urgently needed.
An embodiment of the present application provides a virtual reality interaction method, the method including:
capturing at least two first infrared images of a calibration object with a first infrared camera, and capturing at least two second infrared images of the calibration object with a second infrared camera, the calibration object including at least one calibration point, the calibration point being used to provide infrared light;
extracting feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
determining three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images, and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
An embodiment of the present application further provides a virtual reality interaction apparatus, the apparatus including a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein:
the first infrared imaging unit is configured to capture at least two first infrared images of a calibration object with a first infrared camera, the calibration object including at least one calibration point, the calibration point being used to provide infrared light;
the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera;
the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images;
the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
An embodiment of the present application further provides a virtual reality interaction system, the system including a virtual reality interaction apparatus and a calibration object, wherein:
the virtual reality interaction apparatus includes a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared imaging unit is configured to capture at least two first infrared images of the calibration object with a first infrared camera; the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images; the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The calibration object includes at least one calibration point, and the calibration point is used to reflect infrared light.
An embodiment of the present application provides an electronic device including the virtual reality interaction apparatus of any of the foregoing embodiments.
An embodiment of the present application provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium can store computer instructions which, when executed, can implement some or all of the steps of the implementations of the virtual reality interaction method provided by the embodiments of the present application.
An embodiment of the present application provides an electronic device, including one or more processors and a memory, where the memory stores instructions executable by the one or more processors, the instructions being configured to execute any of the above virtual reality interaction methods of the present application.
An embodiment of the present application provides a computer program product, the computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute any of the above virtual reality interaction methods of the embodiments of the present application.
With the virtual reality interaction method, apparatus, and system provided by the embodiments of the present application, infrared images of a calibration object are captured by a first infrared camera and a second infrared camera, feature information is extracted from the captured infrared images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the calibration object is determined, whereupon virtual reality interaction is performed. This solves the problem that, in the prior art, virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to realize in some scenarios owing to the depth camera's price, technical maturity, ease of use, and similar factors, and provides a new virtual reality technique.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a virtual reality interaction method according to Embodiment 1 of the present application;
FIG. 2 is a schematic diagram of a calibration glove in a practical application of Embodiment 1 of the present application;
FIG. 3 is a flowchart of a virtual reality interaction method according to Embodiment 2 of the present application;
FIG. 4 is a schematic diagram of a virtual reality interaction device in a practical application of Embodiment 2 of the present application;
FIG. 5 is a schematic structural diagram of a virtual reality interaction apparatus in Embodiment 3 of the present application;
FIG. 6 is a schematic structural diagram of a virtual reality interaction system in Embodiment 4 of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Evidently, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Embodiment 1
Embodiment 1 provides a virtual reality interaction method, to solve the problem that the prior art captures images with a three-dimensional depth camera and, owing to factors such as the camera's price, camera-based virtual reality technology is difficult to realize in certain scenarios. A flowchart of the method is shown in FIG. 1; the method includes the following steps:
Step S11: capture at least two first infrared images of a calibration object with a first infrared camera, and capture the same number of second infrared images of the calibration object with a second infrared camera.
The first infrared camera and the second infrared camera are cameras capable of imaging infrared light. Because infrared cameras are far cheaper on the market than three-dimensional depth cameras, an interaction method based on infrared cameras is correspondingly cheap to implement. In addition, infrared light has a longer wavelength and lower frequency, so it loses little energy when propagating through air, and imaging with infrared light is not easily distorted. It should be noted that in practice the infrared camera may be an ordinary visible-light camera with an infrared filter added between the camera's photosensitive device and its lens, which further reduces the implementation cost of the virtual reality interaction method; in particular, to improve the infrared camera's imaging performance, the infrared filter used may be an 850 nm infrared band-pass filter.
In practice, the first infrared camera and the second infrared camera are usually mounted on the same device, which may be a server, a mobile terminal such as a mobile phone, an iPad, or a smart helmet, or a terminal such as a smart TV or a computer. Virtual reality interaction may be performed by transmitting the captured infrared images to a server, which then performs the computation and simulates the real environment, or the computation and simulation may be performed by a terminal such as a mobile phone, iPad, smart helmet, smart TV, or computer; the embodiments of the present application impose no limitation on this.
The calibration object includes at least one calibration point, and the calibration point is used to provide infrared light. The calibration object is the object photographed simultaneously by the first infrared camera and the second infrared camera. In reality, the object may be a person or a thing, and at least part of its outer surface must provide infrared light; such a partial area that provides infrared light is called a calibration point, and the calibration object must have at least one calibration point. In practice a calibration point may provide infrared light in several ways, including reflecting infrared light and the calibration point itself emitting infrared light; a common way is to mount reflective material on the outer surface of each calibration point of the calibration object, reflecting infrared light emitted toward the calibration object by another device.
Capturing at least two first infrared images of the calibration object with the first infrared camera and at least two second infrared images of the calibration object with the second infrared camera means that the first and second infrared cameras respectively capture N and M infrared images of the same calibration object, where both N and M are greater than or equal to 2. In practice the first and second infrared cameras usually capture the same number of infrared images, that is, N equals M. Computing the calibration object's three-dimensional motion trajectory requires at least two infrared images, and the more infrared images are captured over a period of time, the more accurately the calibration object's motion trajectory over that period can be described; however, because capturing more images makes the computation increasingly burdensome, the calibration object's motion trajectory over a period of time can usually be described reasonably well by capturing three infrared images of it over that period.
Step S12: extract the feature information corresponding to each calibration point in each first infrared image, and extract the feature information corresponding to each calibration point in each second infrared image.
The feature information indicates the position of each calibration point in each first infrared image or each second infrared image.
Because the first infrared camera captures at least two infrared images of the calibration object, and the calibration object may have multiple calibration points, the feature information corresponding to each calibration point in each first infrared image may be extracted by performing the following operations for each calibration point: first determining the region corresponding to that calibration point in each first infrared image, and then extracting the feature information corresponding to that calibration point within the corresponding region using a clustering algorithm. Alternatively, for each first infrared image, the feature information of every calibration point in that image may be computed first, and the feature information corresponding to each calibration point in each first infrared image determined afterwards.
In practice, a trajectory prediction algorithm such as Kalman prediction may first be used to determine the regions corresponding to the same calibration point in the first infrared images, after which a clustering algorithm such as k-means or k-medoids is applied within all the regions corresponding to that calibration point to extract its feature information. Alternatively, a clustering algorithm such as k-means or k-medoids may first be applied to each first infrared image to compute the feature information of every calibration point in that image, after which a trajectory prediction algorithm such as Kalman prediction is used to determine the feature information corresponding to the calibration point in each first infrared image.
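As a concrete illustration of the second variant, the sketch below thresholds one grayscale infrared frame and clusters the bright reflective-marker pixels with k-means, yielding one centroid per calibration point as its feature information. It is a minimal sketch under stated assumptions: the threshold value, the marker count (five, as for the calibration glove of FIG. 2), and all names are illustrative, not taken from the application.

    import numpy as np
    from sklearn.cluster import KMeans  # k-medoids could be substituted

    def extract_marker_centroids(frame: np.ndarray, n_markers: int = 5,
                                 threshold: int = 200) -> np.ndarray:
        """Return one (x, y) centroid per calibration point in an IR frame.

        Reflective calibration points appear as the brightest blobs under
        infrared illumination, so bright pixels are collected and grouped
        into n_markers clusters; the cluster centers serve as the per-point
        feature information (image positions).
        """
        ys, xs = np.nonzero(frame >= threshold)        # bright-pixel coords
        pts = np.column_stack([xs, ys]).astype(np.float64)
        if len(pts) < n_markers:
            raise ValueError("fewer bright pixels than expected markers")
        km = KMeans(n_clusters=n_markers, n_init=10).fit(pts)
        return km.cluster_centers_                     # shape (n_markers, 2)

The first variant would additionally gate each frame's bright-pixel search with a region predicted by a Kalman filter from the preceding frames, so that the same physical calibration point keeps a consistent identity across images.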
The feature information corresponding to each calibration point in each second infrared image may be extracted by the same method as the feature information corresponding to each calibration point in each first infrared image.
Step S13: determine the three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in each first infrared image and the feature information corresponding to each calibration point in each second infrared image, and perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The three-dimensional motion trajectory information of each calibration point is the information describing the trajectory, in three-dimensional space, of each calibration point on the calibration object as the calibration object moves through three-dimensional space. For example, for the calibration glove used in practice and shown in FIG. 2, the calibration points are the five fingers; when the glove is moved through three-dimensional space, the three-dimensional motion trajectory information of the calibration points is the information describing the trajectories, in three-dimensional space, of the calibration points on the glove's five fingers.
The three-dimensional motion trajectory information of each calibration point is determined from the feature information corresponding to that calibration point in each first infrared image and the feature information corresponding to that calibration point in each second infrared image. Taking the calibration glove above as an example, each calibration point's three-dimensional motion trajectory information is determined from that point's corresponding feature information in the first infrared images together with its corresponding feature information in the second infrared images.
Virtual reality interaction based on the three-dimensional motion trajectory information of the calibration points can take several forms. The three-dimensional motion trajectory information of the calibration object may be determined from the three-dimensional motion trajectory information of the calibration points, the calibration object's three-dimensional motion trajectory information compared against the information in a database, the interaction instruction corresponding to the calibration object's three-dimensional motion trajectory information retrieved from the database, and virtual reality interaction performed through the interaction instruction. Alternatively, the three-dimensional motion trajectory information of each calibration point may be compared separately against the information in the database, the interaction instructions corresponding to the trajectory information of the individual calibration points retrieved from the database, and virtual reality interaction performed through those interaction instructions.
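The database comparison can be pictured as a nearest-template lookup. The sketch below assumes gesture templates stored as (x, y, z) polylines keyed by interaction instruction, resamples both trajectories to a common length, and returns the instruction of the closest template; the template format, the distance measure, and all names are assumptions for illustration, not details given in the application.

    import numpy as np

    def resample(track: np.ndarray, n: int = 32) -> np.ndarray:
        """Resample an (m, 3) trajectory to n points evenly spaced by arc length."""
        steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
        d = np.concatenate([[0.0], np.cumsum(steps)])
        t = np.linspace(0.0, d[-1], n)
        return np.column_stack([np.interp(t, d, track[:, i]) for i in range(3)])

    def match_instruction(track, templates: dict, max_dist: float = 0.1):
        """Return the interaction instruction whose stored template trajectory
        is closest (mean point-wise distance) to the observed trajectory,
        or None if nothing in the database is close enough."""
        q = resample(np.asarray(track, dtype=np.float64))
        best, best_dist = None, np.inf
        for instruction, template in templates.items():
            dist = np.linalg.norm(q - resample(np.asarray(template)), axis=1).mean()
            if dist < best_dist:
                best, best_dist = instruction, dist
        return best if best_dist <= max_dist else None

Resampling by arc length makes the comparison insensitive to how fast the gesture was performed; a dynamic-time-warping distance would be a natural drop-in refinement.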
With the virtual reality interaction method provided by Embodiment 1, infrared images of the calibration object are captured by the first and second infrared cameras, feature information is extracted from the captured infrared images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the calibration object is determined, whereupon virtual reality interaction is performed. This solves the problem that, in the prior art, virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to realize in some scenarios because of the depth camera's price, technical maturity, ease of use, and similar factors, so that another virtual reality technique was urgently needed.
Embodiment 2
As mentioned in step S13 of Embodiment 1, the three-dimensional motion trajectory information of each calibration point is determined from the feature information corresponding to each calibration point in each first infrared image and in each second infrared image. In fact, there are several ways to do this. When the lenses of the first infrared camera and the second infrared camera lie in the same plane, determining the calibration object's three-dimensional motion trajectory information from the feature information corresponding to each calibration point in the first and second infrared images may proceed as follows: first determine, for each first infrared image at the time it was captured, the distance from each calibration point in that image to the line joining the lens centers of the first and second infrared cameras, and then determine the three-dimensional motion trajectory information of each calibration point from at least two such distance measurements. This constitutes Embodiment 2 of the present application, described with reference to FIG. 3.
Step S21: capture at least two first infrared images of the calibration object with the first infrared camera while capturing the same number of second infrared images of the calibration object with the second infrared camera, the lenses of the first infrared camera and the second infrared camera lying in the same plane.
Usually, when describing the calibration object's motion trajectory, to make full use of the captured infrared images the first infrared camera may capture at least two first infrared images of the calibration object while the second infrared camera captures the same number of second infrared images of the calibration object; that is, the first and second infrared cameras each simultaneously capture R infrared images of the same calibration object, with R greater than or equal to 2.
As shown in FIG. 4, in practice the first and second infrared cameras are usually fixed to the same device with their lenses in the same plane, and the device is oriented to capture infrared images of the calibration object. An infrared light emitting device is usually also mounted on the device; it emits infrared light toward the calibration object, and the calibration points on the calibration object reflect the infrared light.
Step S22: extract the feature information corresponding to each calibration point in each first infrared image, and extract the feature information corresponding to each calibration point in each second infrared image.
Step S22 is the same as step S12 in Embodiment 1 and is not repeated here.
Step S23: perform the following operations for each calibration point:
Step S231: determine the second infrared image captured simultaneously with each first infrared image;
Step S232: using the feature information corresponding to the calibration point in the first infrared image and the feature information corresponding to the calibration point in the second infrared image, determine the perpendicular distance, at the time the first infrared image was captured, from the calibration point to the line joining the lens centers of the first infrared camera and the second infrared camera;
The perpendicular distance from the calibration point to the line joining the lens centers of the first and second infrared cameras is the length of the perpendicular segment from the calibration point to the line connecting the lens center of the first infrared camera with the lens center of the second infrared camera.
In practice, the distance between the first and second infrared cameras is usually known, and when the two cameras' focal lengths are identical and known, the perpendicular distance from the calibration point to the line joining the two lens centers can be obtained from the feature information corresponding to the calibration point in the first infrared image and in the second infrared image by a similar-triangles computation. Alternatively, the two cameras' parallax with respect to the calibration point can be determined from the corresponding feature information in the first infrared image, the corresponding feature information in the second infrared image, and the two cameras' focal lengths, and the perpendicular distance from the calibration point to the line joining the lens centers of the first and second infrared cameras then determined from that parallax and the distance between the two cameras.
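A minimal sketch of that similar-triangles step for a coplanar, rectified camera pair follows; it reuses the standard relation quoted in the Background, and the variable names and principal-point handling are assumptions for illustration.

    import numpy as np

    def marker_position(u_left: float, v_left: float, u_right: float,
                        f: float, baseline: float,
                        cx: float = 0.0, cy: float = 0.0) -> np.ndarray:
        """3D position of one calibration point from its matched centroids.

        u/v are the pixel coordinates of the same calibration point in the
        first (left) and second (right) infrared image, f is the shared
        focal length in pixels, and baseline is the lens-center separation.
        The returned z is the perpendicular distance from the point to the
        line joining the two lens centers."""
        disparity = u_left - u_right
        if disparity <= 0:
            raise ValueError("the point must lie in front of both cameras")
        z = f * baseline / disparity          # similar triangles
        x = (u_left - cx) * z / f             # back-project into 3D
        y = (v_left - cy) * z / f
        return np.array([x, y, z])

Stacking marker_position() over successive synchronized frame pairs yields, per calibration point, the sequence of 3D positions that steps S233 and S234 below treat as its three-dimensional motion trajectory.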
Step S233: determine the calibration point's three-dimensional motion trajectory information from at least two of the perpendicular distance measurements corresponding to the calibration point.
Step S234: perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
With the virtual reality interaction method provided by Embodiment 2, the first and second infrared cameras simultaneously capture the same number of infrared images of the calibration object, and because the lenses of the two cameras are set in the same plane, the three-dimensional motion trajectory information of each calibration point of the calibration object can be determined from the perpendicular distances from each calibration point to the line joining the two cameras' lens centers, making the virtual reality method easier to implement.
Finally, it should be noted that a person of ordinary skill in the art will understand that all or part of the flows of the above method embodiments may be completed by a computer program instructing the relevant hardware. The program may be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The non-transitory computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiment 3
Embodiment 3 provides a virtual reality interaction apparatus, to solve the problem that the prior art captures images with a three-dimensional depth camera and, owing to factors such as the camera's price, camera-based virtual reality technology is difficult to realize in certain scenarios. A schematic structural diagram of the apparatus 500 is shown in FIG. 5; it includes the following units: a first infrared imaging unit 501, a second infrared imaging unit 502, an extraction unit 503, a determination unit 504, and an interaction unit 505, wherein:
the first infrared imaging unit 501 is configured to capture at least two first infrared images of a calibration object with a first infrared camera, the calibration object including at least one calibration point, the calibration point being used to provide infrared light;
the second infrared imaging unit 502 is configured to capture at least two second infrared images of the calibration object with a second infrared camera;
the extraction unit 503 is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
the determination unit 504 is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images;
the interaction unit 505 is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
In practice, the extraction unit 503 may further include a first extraction subunit 5031 and a second extraction subunit 5032, wherein:
the first extraction subunit 5031 is configured to determine, for each calibration point, the region corresponding to that calibration point in each first infrared image or each second infrared image;
the second extraction subunit 5032 is configured to extract the feature information corresponding to that calibration point within the corresponding region using a clustering algorithm.
In particular, the interaction unit 505 may further include a first interaction unit 5051, a second interaction unit 5052, and a third interaction unit 5053, wherein:
the first interaction unit 5051 is configured to determine the calibration object's three-dimensional motion trajectory information from the three-dimensional motion trajectory information of the calibration points;
the second interaction unit 5052 is configured to compare the calibration object's three-dimensional motion trajectory information against the information in a database and retrieve from the database the interaction instruction corresponding to the calibration object's three-dimensional motion trajectory information;
the third interaction unit 5053 is configured to perform virtual reality interaction through the interaction instruction.
With the virtual reality interaction apparatus provided by Embodiment 3, the first and second infrared imaging units capture at least two infrared images of the same calibration object through their infrared cameras; the extraction unit then extracts the feature information corresponding to each calibration point in each infrared image; the determination unit determines the three-dimensional motion trajectory information of each calibration point from that feature information; and the interaction unit performs virtual reality interaction based on the three-dimensional motion trajectory information of the calibration points. This solves the problem that, in the prior art, virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to realize in certain scenarios because of factors such as the depth camera's price.
It should also be noted that, in the embodiments of the present application, the above functional modules may be implemented by a hardware processor.
Embodiment 4
Embodiment 4 provides a virtual reality interaction system, to solve the problem that the prior art captures images with a three-dimensional depth camera and, owing to factors such as the camera's price, camera-based virtual reality technology is difficult to realize in certain scenarios. A schematic structural diagram of the virtual reality interaction system 600 is shown in FIG. 6; it includes a virtual reality interaction apparatus 601 and a calibration object 602, wherein:
the virtual reality interaction apparatus 601 includes a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared imaging unit is configured to capture at least two first infrared images of the calibration object with a first infrared camera, the calibration object including at least one calibration point, the calibration point being used to provide infrared light; the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images; the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The calibration object 602 includes at least one calibration point, and the calibration point is used to reflect infrared light.
One virtual reality interaction system used in practice includes a virtual reality interaction helmet and a calibration glove. The helmet carries a dual infrared camera for capturing infrared images of the calibration glove, and an infrared light emitting device can usually also be mounted on the helmet. The five fingers of the calibration glove bear material capable of reflecting infrared light. After capturing the infrared images, the dual infrared camera on the helmet may transmit them to a remote server for processing, or a processing device may be installed on the helmet itself to perform the processing.
With the virtual reality interaction system provided by Embodiment 4, the virtual reality interaction apparatus captures infrared images of the calibration object through the infrared cameras of the first and second infrared imaging units and subjects the captured infrared images to a series of processing steps to perform virtual reality interaction. This solves the problem that, in the prior art, virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to realize in certain scenarios because of factors such as the depth camera's price.
It should also be noted that, in the embodiments of the present application, the above functional modules may be implemented by a hardware processor.
In another embodiment of the present application, an electronic device is further provided, including the virtual reality interaction apparatus of any of the foregoing embodiments.
In another embodiment of the present application, a non-transitory computer-readable storage medium is further provided, the non-transitory computer-readable storage medium storing computer-executable instructions capable of executing the virtual reality interaction method of any of the above method embodiments.
FIG. 7 is a schematic diagram of the hardware structure of an electronic device that performs the virtual reality interaction method according to an embodiment of the present application. As shown in FIG. 7, the device includes:
one or more processors 710 and a memory 720; one processor 710 is taken as the example in FIG. 7.
The device performing the virtual reality interaction method may further include an input apparatus 730 and an output apparatus 740.
The processor 710, the memory 720, the input apparatus 730, and the output apparatus 740 may be connected by a bus or by other means; connection by a bus is taken as the example in FIG. 7.
As a non-transitory computer-readable storage medium, the memory 720 can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the virtual reality interaction method in the embodiments of the present application (for example, the first infrared imaging unit 501, the second infrared imaging unit 502, the extraction unit 503, the determination unit 504, and the interaction unit 505 shown in FIG. 5). By running the non-volatile software programs, instructions, and modules stored in the memory 720, the processor 710 executes the electronic device's functional applications and data processing, that is, implements the virtual reality interaction method of the above method embodiments.
The memory 720 may include a program storage area and a data storage area, where the program storage area may store the operating system and the applications required by at least one function, and the data storage area may store data created through the use of the virtual reality interaction apparatus, and the like. In addition, the memory 720 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 720 optionally includes memory remotely located relative to the processor 710; such remote memory may be connected to the virtual reality interaction apparatus over a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input apparatus 730 can receive input digit or character information and generate key-signal inputs related to the user settings and function control of the virtual reality interaction apparatus. The output apparatus 740 may include a display device such as a display screen.
The one or more modules are stored in the memory 720 and, when executed by the one or more processors 710, perform the virtual reality interaction method of any of the above method embodiments.
The above product can execute the method provided by the embodiments of the present application and has the functional modules for executing the method, with the corresponding beneficial effects. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
The electronic device of the embodiments of the present application exists in many forms, including but not limited to:
(1) Mobile communication devices: these devices are characterized by mobile communication functions, with voice and data communication as their main goals. Such terminals include smartphones (for example, the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDAs, MIDs, and UMPC devices, for example the iPad.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (for example, the iPod), handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, a hard disk, memory, a system bus, and so on; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing power, stability, reliability, security, scalability, manageability, and the like are high.
(5) Other electronic apparatuses with data interaction functions.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement them without creative effort.
From the description of the above implementations, a person skilled in the art can clearly understand that each implementation may be realized by software plus the necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

  1. A virtual reality interaction method, applied to an electronic device, comprising:
    capturing at least two first infrared images of a calibration object with a first infrared camera, and capturing at least two second infrared images of the calibration object with a second infrared camera, the calibration object comprising at least one calibration point, the calibration point being used to provide infrared light;
    extracting feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    determining three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images, and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
  2. The method according to claim 1, wherein extracting the feature information corresponding to each calibration point in each first infrared image and in each second infrared image comprises:
    performing the following operations for each calibration point:
    determining the region corresponding to the calibration point in each first infrared image or each second infrared image;
    extracting the feature information corresponding to the calibration point within the corresponding region using a clustering algorithm.
  3. The method according to claim 1, wherein capturing at least two first infrared images of the calibration object with the first infrared camera and at least two second infrared images of the calibration object with the second infrared camera comprises: capturing at least two first infrared images of the calibration object with the first infrared camera while capturing the same number of second infrared images of the calibration object with the second infrared camera, the lenses of the first infrared camera and the second infrared camera lying in the same plane;
    and wherein determining the calibration object's three-dimensional motion trajectory information from the feature information corresponding to each calibration point in each first infrared image and in each second infrared image comprises:
    performing the following operations for each calibration point:
    determining the second infrared image captured simultaneously with each first infrared image;
    determining, from the feature information corresponding to the calibration point in the first infrared image and the feature information corresponding to the calibration point in the second infrared image, the perpendicular distance, at the time the first infrared image was captured, from the calibration point to the line joining the lens centers of the first infrared camera and the second infrared camera;
    determining the calibration point's three-dimensional motion trajectory information from at least two of the perpendicular distance measurements corresponding to the calibration point.
  4. The method according to claim 1, wherein performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point comprises:
    determining the calibration object's three-dimensional motion trajectory information from the three-dimensional motion trajectory information of the calibration points;
    comparing the calibration object's three-dimensional motion trajectory information against information in a database, and retrieving from the database the interaction instruction corresponding to the calibration object's three-dimensional motion trajectory information;
    performing virtual reality interaction through the interaction instruction.
  5. The method according to claim 1, wherein the first infrared camera and the second infrared camera are cameras having an infrared filter between the photosensitive device and the lens.
  6. The method according to claim 1, further comprising: emitting infrared light toward the calibration object, the calibration object reflecting the infrared light through the calibration points on the calibration object.
  7. A virtual reality interaction apparatus, comprising:
    a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein:
    the first infrared imaging unit is configured to capture at least two first infrared images of a calibration object with a first infrared camera, the calibration object comprising at least one calibration point, the calibration point being used to provide infrared light;
    the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera;
    the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images;
    the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
  8. The apparatus according to claim 7, wherein the extraction unit comprises a first extraction subunit and a second extraction subunit, wherein:
    the first extraction subunit is configured to determine, for each calibration point, the region corresponding to the calibration point in each first infrared image or each second infrared image;
    the second extraction subunit is configured to extract the feature information corresponding to the calibration point within the corresponding region using a clustering algorithm.
  9. The apparatus according to claim 7, wherein the interaction unit comprises a first interaction unit, a second interaction unit, and a third interaction unit, wherein:
    the first interaction unit is configured to determine the calibration object's three-dimensional motion trajectory information from the three-dimensional motion trajectory information of the calibration points;
    the second interaction unit is configured to compare the calibration object's three-dimensional motion trajectory information against information in a database and retrieve from the database the interaction instruction corresponding to the calibration object's three-dimensional motion trajectory information;
    the third interaction unit is configured to perform virtual reality interaction through the interaction instruction.
  10. A virtual reality interaction system, comprising a virtual reality interaction apparatus and a calibration object, wherein:
    the virtual reality interaction apparatus comprises a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared imaging unit is configured to capture at least two first infrared images of the calibration object with a first infrared camera; the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images; the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point;
    the calibration object comprises at least one calibration point, the calibration point being used to reflect infrared light.
  11. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the method according to any one of claims 1 to 6.
  12. An electronic device, comprising:
    one or more processors; and
    a memory communicatively connected to the one or more processors; wherein
    the memory stores instructions executable by the one or more processors, the instructions being executed by the one or more processors to enable the one or more processors to:
    capture at least two first infrared images of a calibration object with a first infrared camera, and capture at least two second infrared images of the calibration object with a second infrared camera, the calibration object comprising at least one calibration point, the calibration point being used to provide infrared light;
    extract feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    determine three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and in the second infrared images, and perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
  13. A computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to execute the method according to claims 1 to 6.
PCT/CN2016/096983 2015-12-01 2016-08-26 Virtual reality interaction method, apparatus and system WO2017092432A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510870209.8A CN105892638A (zh) 2015-12-01 2015-12-01 Virtual reality interaction method, apparatus and system
CN201510870209.8 2015-12-01

Publications (1)

Publication Number Publication Date
WO2017092432A1 (zh)

Family

ID=57002403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096983 WO2017092432A1 (zh) 2015-12-01 2016-08-26 Virtual reality interaction method, apparatus and system

Country Status (2)

Country Link
CN (1) CN105892638A (zh)
WO (1) WO2017092432A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892638A (zh) 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, apparatus and system
CN109313483A (zh) * 2017-01-22 2019-02-05 广东虚拟现实科技有限公司 Apparatus for interacting with a virtual reality environment
US11445094B2 (en) * 2017-08-07 2022-09-13 Apple Inc. Electronic device having a vision system assembly held by a self-aligning bracket assembly
CN110442235B (zh) * 2019-07-16 2023-05-23 广东虚拟现实科技有限公司 Positioning and tracking method and apparatus, terminal device, and computer-readable storage medium
CN111736708B (zh) * 2020-08-25 2020-11-20 歌尔光学科技有限公司 Head-mounted device, picture display system and method therefor, and detection system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130063560A1 (en) * 2011-09-12 2013-03-14 Palo Alto Research Center Incorporated Combined stereo camera and stereo display interaction
CN103135755B (zh) * 2011-12-02 2016-04-06 深圳泰山在线科技有限公司 Interactive system and method
US10234941B2 (en) * 2012-10-04 2019-03-19 Microsoft Technology Licensing, Llc Wearable sensor for tracking articulated body-parts
KR101465894B1 (ko) * 2013-09-13 2014-11-26 성균관대학교산학협력단 Mobile terminal that generates control commands using a marker attached to a finger, and method for generating control commands in a terminal using a marker attached to a finger
CN104298345B (zh) * 2014-07-28 2017-05-17 浙江工业大学 Control method for a human-computer interaction system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247280A1 (en) * 2013-03-01 2014-09-04 Apple Inc. Federated mobile device positioning
CN105068649A (zh) * 2015-08-12 2015-11-18 深圳市埃微信息技术有限公司 Binocular gesture recognition apparatus and method based on a virtual reality helmet
CN105892633A (zh) * 2015-11-18 2016-08-24 乐视致新电子科技(天津)有限公司 Gesture recognition method and virtual reality display output device
CN105892638A (zh) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, apparatus and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509173A (zh) * 2018-06-07 2018-09-07 北京德火科技有限责任公司 Image display system and method, storage medium, and processor

Also Published As

Publication number Publication date
CN105892638A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
WO2017092432A1 (zh) Virtual reality interaction method, apparatus and system
CN109635621B (zh) System and method for recognizing gestures based on deep learning from a first-person perspective
CN109754471B (zh) Image processing method and apparatus in augmented reality, storage medium, and electronic device
US10488195B2 (en) Curated photogrammetry
US9564175B2 (en) Clustering crowdsourced videos by line-of-sight
WO2019100757A1 (zh) Video generation method, apparatus, and electronic device
US9392248B2 (en) Dynamic POV composite 3D video system
US20230274471A1 (en) Virtual object display method, storage medium and electronic device
US20130321589A1 (en) Automated camera array calibration
CN114097248B (zh) Video stream processing method, apparatus, device, and medium
WO2018000619A1 (zh) Data display method and apparatus, electronic device, and virtual reality device
WO2018000609A1 (zh) Method for sharing 3D images in a virtual reality system, and electronic device
WO2021184952A1 (zh) Augmented reality processing method and apparatus, storage medium, and electronic device
CN109002248B (zh) VR scene screenshot method, device, and storage medium
CN104243961A (zh) Display system and method for multi-view images
US20140218291A1 (en) Aligning virtual camera with real camera
CN110033423B (zh) Method and apparatus for processing images
WO2017133147A1 (zh) Method for generating a real-scene map, pushing method, and apparatus therefor
AU2020309094B2 (en) Image processing method and apparatus, electronic device, and storage medium
KR102197615B1 (ko) 증강 현실 서비스를 제공하는 방법 및 증강 현실 서비스를 제공하기 위한 서버
WO2016183954A1 (zh) Motion trajectory computation method and apparatus, and terminal
JP2017162103A (ja) Inspection work support system, inspection work support method, and inspection work support program
US20150326847A1 (en) Method and system for capturing a 3d image using single camera
JP2018033107A (ja) Video distribution apparatus and distribution method
US20170169572A1 (en) Method and electronic device for panoramic video-based region identification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869743

Country of ref document: EP

Kind code of ref document: A1