WO2024108394A1 - Posture acquisition method, apparatus, virtual reality device, and readable storage medium - Google Patents

Posture acquisition method, apparatus, virtual reality device, and readable storage medium

Info

Publication number
WO2024108394A1
WO2024108394A1 (PCT/CN2022/133552)
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual reality
reality device
positioning
posture
Application number
PCT/CN2022/133552
Other languages
French (fr)
Chinese (zh)
Inventor
李卫硕 (Li Weishuo)
Original Assignee
北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Application filed by Beijing Xiaomi Mobile Software Co., Ltd. (北京小米移动软件有限公司)
Priority to PCT/CN2022/133552
Publication of WO2024108394A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • the present disclosure relates to the field of data processing technology, and in particular to a posture acquisition method and device, virtual reality equipment, and a readable storage medium.
  • in order to make the brain feel "immersive", a virtual reality (VR) device needs to reduce the time between a head movement and the corresponding change on the retina, so that the wearer's visual perception and motion perception match in time. If they do not match in time, for example if the wearer turns around but the picture does not turn in time, the wearer will feel uncomfortable, which degrades the user experience.
  • the present disclosure provides a posture acquisition method and device, a virtual reality device, and a readable storage medium to address the deficiencies of related technologies.
  • a posture acquisition method which is applied to a virtual reality device, and the method includes:
  • reference posture data and relative posture data of the virtual reality device are acquired; the target posture of the virtual reality device at the target moment is acquired according to the reference posture data and the relative posture data.
  • obtaining reference posture data of the virtual reality device includes:
  • the first positioning data and the second positioning data are fused to obtain the reference posture data.
  • positioning the virtual reality device according to the image data to obtain first positioning data includes:
  • the virtual reality device is positioned according to the image data to obtain first positioning data.
  • positioning the virtual reality device according to the image data to obtain first positioning data includes:
  • the fast positioning data, the precise positioning data or the corrected positioning data is determined as the first positioning data.
  • positioning the virtual reality device according to the inertial measurement data to obtain second positioning data includes:
  • the inertial measurement data is integrated to obtain second positioning data of the virtual reality device.
  • obtaining relative posture data of the virtual reality device includes:
  • second movement data of the virtual reality device is determined; the second movement data is used as relative posture data.
  • obtaining inertial measurement data of the virtual reality device between an initial moment and a target moment includes:
  • the measurement data is extended to the target time based on the original inertial measurement data to obtain the inertial measurement data between the initial time and the target time.
  • correcting the inertial measurement data to obtain corrected measurement data includes:
  • the inertial measurement data between the first intermediate moment and the target moment is corrected according to the first movement data to obtain corrected measurement data.
  • acquiring a target posture of the virtual reality device at a target time according to the reference posture data and the relative posture data includes:
  • a weighted value is calculated according to the state data, the observation data and the respective weight values to obtain a target posture of the virtual reality device at the target moment.
  • a posture acquisition device which is applied to a virtual reality device, and the device includes:
  • a reference posture acquisition module used to acquire reference posture data and relative posture data of the virtual reality device
  • a relative posture acquisition module used to acquire relative posture data of the virtual reality device
  • a target posture acquisition module is used to acquire the target posture of the virtual reality device at a target moment according to the reference posture data and the relative posture data.
  • the reference posture acquisition module includes:
  • a first positioning submodule used to position the virtual reality device according to the image data to obtain first positioning data
  • a second positioning submodule used to position the virtual reality device according to the inertial measurement data to obtain second positioning data
  • the reference posture acquisition submodule is used to fuse the first positioning data and the second positioning data to obtain the reference posture data.
  • the first positioning submodule includes:
  • An image data acquisition unit used to acquire image data between the second intermediate moment and the initial moment
  • the first positioning acquisition unit is used to position the virtual reality device according to the image data to obtain first positioning data.
  • the first positioning acquisition unit includes:
  • a fast positioning subunit used for processing the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device;
  • a precise positioning subunit used for processing the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;
  • a correction positioning subunit used to correct the rapid positioning data using the precise positioning data to obtain corrected positioning data
  • the first positioning subunit is used to determine the rapid positioning data, the precise positioning data or the corrected positioning data as the first positioning data.
  • the second positioning submodule includes:
  • An inertial measurement unit used to obtain inertial measurement data between a second intermediate moment and an initial moment; the second intermediate moment is earlier than the initial moment;
  • the second positioning unit is used to integrate the inertial measurement data to obtain second positioning data of the virtual reality device.
  • the relative posture acquisition module includes:
  • An inertial data acquisition submodule used to acquire inertial measurement data of the virtual reality device between an initial moment and a target moment
  • a correction data acquisition submodule used to correct the inertial measurement data to obtain corrected measurement data
  • the relative posture acquisition submodule is used to determine the second movement data of the virtual reality device based on the preset head prediction model and the corrected measurement data; the second movement data is used as the relative posture data.
  • the inertial data acquisition submodule includes:
  • a raw data acquisition unit used to acquire raw inertial measurement data after the initial moment from a specified position of the virtual reality device
  • the inertial data acquisition unit is used to, when the original inertial measurement data does not include the data at the target time, expand the measurement data to the target time based on the original inertial measurement data to obtain the inertial measurement data between the initial time and the target time.
  • the correction data acquisition submodule includes:
  • a rotation data acquisition unit used for acquiring rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data; the rotation data is used as relative posture data;
  • a movement data acquisition unit configured to acquire first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data; the first intermediate moment is between the initial moment and the target moment;
  • the correction data acquisition unit is used to correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain corrected measurement data.
  • the target posture acquisition module includes:
  • a state data acquisition submodule used to acquire the state data at the target moment according to a preset uniform motion model and the reference posture data
  • An observation data acquisition submodule used for superimposing the reference posture data and the relative posture data to obtain observation data of the virtual reality device
  • a weight value acquisition submodule used for acquiring respective weight values of the state data and the observation data according to the accumulated error of the virtual reality device
  • the target posture acquisition submodule is used to calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment.
  • a virtual reality device including:
  • the memory is used to store a computer program executable by the processor
  • the processor is used to execute the computer program in the memory to implement the above method.
  • a non-transitory computer-readable storage medium is provided, and when an executable computer program in the storage medium is executed by a processor, the method as described above can be implemented.
  • reference posture data and relative posture data of the virtual reality device can be obtained; the target posture of the virtual reality device at the target moment is obtained according to the reference posture data and the relative posture data, so as to achieve the effect of synchronizing the user action with the picture and improve the user experience.
  • Fig. 1 is a flow chart showing a method for acquiring a posture according to an exemplary embodiment.
  • Fig. 2 is a flow chart showing a method of acquiring reference posture data according to an exemplary embodiment.
  • Fig. 3 is a flow chart showing a method of acquiring first positioning data according to an exemplary embodiment.
  • Fig. 4 is a flowchart showing another method of acquiring first positioning data according to an exemplary embodiment.
  • Fig. 5 is a flow chart showing a method of acquiring relative posture data according to an exemplary embodiment.
  • Fig. 6 is a schematic diagram showing a preset head prediction model according to an exemplary embodiment.
  • Fig. 7 is a flow chart showing a method of acquiring a target posture according to an exemplary embodiment.
  • Fig. 8 is a schematic diagram showing first positioning data, reference posture data and target posture according to an exemplary embodiment.
  • Fig. 9 is a block diagram showing a method for acquiring a posture according to an exemplary embodiment.
  • Fig. 10 is a block diagram showing a posture acquisition device according to an exemplary embodiment.
  • Fig. 11 is a block diagram of a virtual reality device according to an exemplary embodiment.
  • in order to make the brain feel "immersive", a virtual reality (VR) device needs to reduce the time between a head movement and the corresponding change on the retina, so that the wearer's visual perception and motion perception match in time. If they do not match in time, for example if the wearer turns around but the picture does not turn in time, the wearer will feel uncomfortable, which degrades the user experience.
  • the embodiments of the present disclosure provide a posture acquisition method and device, a virtual reality device, and a readable storage medium, which can be applied to the virtual reality device;
  • the virtual reality device may include a camera and an inertial detection unit.
  • the camera can collect images of the virtual reality device in a specified direction, which will be referred to as image data later;
  • the inertial detection unit can collect inertial detection data of the virtual reality device according to a set period.
  • the inertial detection data may include angular velocity and acceleration in six dimensions; the six dimensions refer to the X, Y and Z axis directions and the roll, yaw and pitch directions of a spatial rectangular coordinate system.
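For concreteness, the six-dimensional quantities above can be represented as a simple data structure. The following Python sketch is illustrative only; the field names and units are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImuSample:
    """One reading of the inertial detection unit (assumed layout)."""
    t: float                           # timestamp in seconds
    gyro: Tuple[float, float, float]   # angular velocity about roll, yaw, pitch axes (rad/s)
    accel: Tuple[float, float, float]  # acceleration along X, Y, Z (m/s^2)

@dataclass
class Pose6D:
    """A six-dimensional posture: position plus orientation."""
    position: Tuple[float, float, float]     # x, y, z in metres
    orientation: Tuple[float, float, float]  # roll, yaw, pitch in radians

# Example sample collected at t = 10 ms
sample = ImuSample(t=0.010, gyro=(0.0, 0.35, 0.0), accel=(0.0, 0.1, 9.81))
```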
  • FIG. 1 shows a posture acquisition method according to an exemplary embodiment.
  • the posture acquisition method includes steps 11 and 12 .
  • in step 11, reference posture data and relative posture data of the virtual reality device are obtained.
  • the processor in the virtual reality device can obtain the reference posture data of the virtual reality device at the initial moment and obtain the inertial measurement data of the virtual reality device between the initial moment and the target moment.
  • the processor in the virtual reality device can obtain the reference posture data at each moment in real time or according to a set period.
  • the above-mentioned moments are referred to as the initial moment in the subsequent embodiments, and the initial moment is used as the starting moment for processing data in each embodiment.
  • the process by which the processor obtains the reference posture data at the initial moment, shown in Figure 2, includes steps 21 to 23.
  • the processor may position the virtual reality device according to the image data to obtain first positioning data.
  • the processor may acquire image data between the second intermediate moment and the initial moment.
  • a camera in a virtual reality device may collect image data according to a set period and store it in a specified location, and the processor may read the image data from the specified location.
  • the processor may communicate with the camera to obtain image data output by the camera in real time.
  • the processor may position the virtual reality device according to the image data to obtain first positioning data.
  • obtaining the first positioning data includes steps 41 to 44 .
  • the processor can process the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device. That is, the processor can locate the position of the virtual reality device based on two adjacent frames of image data to obtain fast positioning data.
  • the processor can process the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device. It is understandable that the semi-direct SLAM algorithm produces positioning data faster than the sliding window algorithm, while the sliding window algorithm produces more accurate positioning data and can provide an accurate initial value for the subsequent acquisition of the target posture.
  • the processor can use the precise positioning data to correct the rapid positioning data to obtain corrected positioning data.
  • obtaining the precise positioning data is slower than running the semi-direct SLAM algorithm. Therefore, after the precise positioning data is obtained, the processor can use the rapid positioning data to obtain the subsequent target posture quickly, use the precise positioning data to obtain the subsequent target posture accurately, or use the precise positioning data to correct the rapid positioning data so that the corrected positioning data is more accurate than the rapid positioning data.
  • the processor may determine the fast positioning data, the precise positioning data or the corrected positioning data as the first positioning data.
  • the processor may detect whether there is precise positioning data when it is required to obtain the first positioning data. When it is detected that there is precise positioning data at the specified position, the precise positioning data may be used as the first positioning data; when no precise positioning data is detected, fast positioning data may be obtained as the first positioning data.
  • after the precise positioning data is obtained, the rapid positioning data is corrected, and the more accurate corrected positioning data is then used to continue obtaining the first positioning data, thereby improving the accuracy of the first positioning data.
  • the above-mentioned first positioning data includes the position of the virtual reality device in six dimensions.
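The selection among the fast, precise and corrected positioning data can be pictured with the sketch below. It is a minimal illustration under assumed data types (poses as 6-tuples); the function names are not from the patent, and a real implementation would correct every fast pose produced after the precise pose becomes available.

```python
from typing import Optional, Sequence, Tuple

Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, yaw, pitch

def select_first_positioning(fast: Pose,
                             precise: Optional[Pose] = None,
                             corrected: Optional[Pose] = None) -> Pose:
    """Prefer corrected, then precise, then fast positioning data."""
    if corrected is not None:
        return corrected
    if precise is not None:
        return precise
    return fast          # semi-direct SLAM result: always available, lowest latency

def correct_fast_poses(fast_poses: Sequence[Pose], precise: Pose, ref: int) -> list:
    """Apply the offset between the precise pose and the fast pose at index `ref`
    to that fast pose and all later ones (a toy drift correction)."""
    offset = [p - f for p, f in zip(precise, fast_poses[ref])]
    return [tuple(f + o for f, o in zip(pose, offset)) for pose in fast_poses[ref:]]
```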
  • the processor may position the virtual reality device according to the inertial measurement data to obtain second positioning data.
  • the processor may obtain the inertial measurement data between the second intermediate moment and the initial moment.
  • the inertial detection unit in the virtual reality device may collect the inertial detection data according to a preset period and store it in a specified location, and the processor may read the inertial detection data from the specified location.
  • the processor may communicate with the inertial detection unit to obtain the inertial detection data output by the inertial detection unit in real time.
  • a preset pre-integration algorithm is stored in the virtual reality device.
  • the processor may call the pre-integration algorithm and use the inertial measurement data as input data of the pre-integration algorithm.
  • the pre-integration algorithm may integrate the inertial measurement data to obtain positioning data of the virtual reality device, which is hereinafter referred to as second positioning data to distinguish it from the first positioning data. It is understandable that the second positioning data includes the position of the virtual reality device in six dimensions.
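As a rough illustration of the integration described above, the sketch below performs plain Euler integration of gravity-compensated acceleration and angular velocity. It is only a simplified stand-in for the pre-integration algorithm; real IMU pre-integration works with rotation on the manifold and handles gravity and sensor biases.

```python
import numpy as np

def integrate_imu(gyro, accel, dt):
    """gyro, accel: (N, 3) arrays of angular velocity (rad/s) and
    gravity-compensated acceleration (m/s^2), sampled every dt seconds.
    Returns a rough six-dimensional pose (position, small-angle orientation)."""
    orientation = np.zeros(3)   # roll, yaw, pitch (small-angle approximation)
    velocity = np.zeros(3)
    position = np.zeros(3)
    for w, a in zip(gyro, accel):
        orientation += w * dt        # first-order integral of angular velocity
        velocity += a * dt           # first integral of acceleration
        position += velocity * dt    # second integral -> displacement
    return position, orientation
```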
  • the processor may fuse the first positioning data and the second positioning data to obtain the reference posture data.
  • the processor can fuse the first positioning data and the second positioning data to obtain the posture data of the virtual reality device, which is hereinafter referred to as reference posture data.
  • considering that the process of obtaining the first positioning data in step 21 relies on two adjacent image frames for positioning, two consecutive pieces of first positioning data may jitter or vary non-smoothly. If the image were rendered directly from the first positioning data, the displayed image would jitter, affecting the viewing experience.
  • in the process of acquiring the second positioning data in step 22, the virtual reality device is positioned by integration, and the inertial detection data includes acceleration and angular velocity.
  • when the movement (i.e., displacement) of the virtual reality device is calculated from the acceleration, the acceleration is integrated twice, so the more inertial detection data is accumulated, the lower the accuracy of the integration result.
  • the processor can fuse the first positioning data and the second positioning data.
  • the fusion method can be implemented using the ESKF algorithm to obtain the fused positioning data, which is subsequently referred to as the reference posture data.
  • the input data of the ESKF (Error-state Kalman Filter) algorithm is the first positioning data and the second positioning data.
  • the ESKF algorithm first uses the median integral to process the obtained second positioning data to solve the nominal state quantity of the system; then, the covariance matrix of the error state quantity is solved using the second positioning data, combined with the first positioning data output by the SLAM algorithm, and the error state quantity of the system is estimated using KF.
  • the nominal state quantity is corrected using the solved error state quantity to obtain the filtered and smoothed positioning data, that is, the reference posture data.
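The fusion step can be summarized by the following heavily simplified sketch: the IMU-propagated position plays the role of the nominal state, the SLAM position is the measurement, and a scalar Kalman gain estimates the error state that corrects the nominal state. A production ESKF maintains a full error-state vector and covariance matrix; the noise values Q and R here are assumed, not from the patent.

```python
import numpy as np

def eskf_fuse(nominal_pos, P, slam_pos, Q=1e-3, R=1e-2):
    """One simplified predict/update cycle of error-state filtering.
    nominal_pos: position propagated from the second (IMU) positioning data
    P:           current error covariance (scalar shared by the axes)
    slam_pos:    position from the first (SLAM) positioning data"""
    P = P + Q                                # error covariance grows during prediction
    K = P / (P + R)                          # Kalman gain
    error_state = K * (np.asarray(slam_pos) - np.asarray(nominal_pos))
    fused = np.asarray(nominal_pos) + error_state   # inject the error into the nominal state
    P = (1.0 - K) * P
    return fused, P
```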
  • the processor may obtain inertial measurement data of the virtual reality device between the initial moment and the target moment.
  • the target moment is the moment corresponding to the image that needs to be displayed when the virtual reality device renders the image, which can be understood as the moment when the rendered image is on the screen.
  • the difference between the target moment and the initial moment is 20 to 60 ms.
  • the inertial detection unit usually cannot obtain inertial detection data exactly at the target moment; it is assumed that the inertial detection unit has only detected data up to the first intermediate moment, which lies between the initial moment and the target moment.
  • the processor can obtain the original inertial measurement data after the initial moment from the specified position of the virtual reality device.
  • the processor can detect whether the above-mentioned original inertial measurement data includes data at the target moment.
  • the processor can expand the measurement data to the target moment based on the above-mentioned original inertial measurement data, and the expansion method includes but is not limited to interpolation filling, machine learning, etc.
  • the processor can predict the inertial detection data between 35ms and 40ms based on the original inertial measurement data between 10ms and 35ms. In this way, the processor can obtain the inertial measurement data between the initial moment and the target moment to ensure the smooth implementation of the target posture prediction process.
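The 10 ms to 35 ms example above can be reproduced with a simple linear extrapolation, one possible form of the "interpolation filling" extension; a learned predictor would replace the linear fit. The sampling period and data layout are assumptions.

```python
import numpy as np

def extend_to_target(times, values, target_time, period):
    """times: (N,) timestamps; values: (N,) or (N, 3) IMU readings.
    Pads the sequence up to target_time by extrapolating the last two samples."""
    slope = (values[-1] - values[-2]) / (times[-1] - times[-2])
    new_t, new_v = list(times), list(values)
    t = times[-1] + period
    while t <= target_time + 1e-9:
        new_v.append(values[-1] + slope * (t - times[-1]))
        new_t.append(t)
        t += period
    return np.asarray(new_t), np.asarray(new_v)

# Example: samples every 5 ms from 10 ms to 35 ms, extended to the 40 ms target moment
times = np.arange(0.010, 0.0351, 0.005)
gyro_y = np.linspace(0.30, 0.35, len(times))
t_ext, g_ext = extend_to_target(times, gyro_y, target_time=0.040, period=0.005)
```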
  • the inertial measurement data includes angular velocity
  • the rotation data is in a first-order integral relationship with the angular velocity. Therefore, the processor can integrate the angular velocity in the inertial measurement data between the initial moment and the target moment to obtain the rotation data of the virtual reality device over that interval.
  • the above rotation data can be understood as the rotation angle of the head, for example, the head rotates 90 degrees between 10ms and 40ms.
  • the inertial measurement data includes acceleration, and the movement data is in a second-order integral relationship with the acceleration, so the more inertial detection data is integrated, the lower the accuracy of the movement data. Therefore, in this step only the first portion of the inertial detection data between the initial moment and the target moment is used for integration, to ensure the accuracy of the movement data; this portion can be determined by a preset proportion (such as 30% to 50%). For ease of understanding, in this step the inertial detection data between the initial moment and the first intermediate moment is integrated directly to obtain the first movement data of the virtual reality device between the initial moment and the first intermediate moment.
  • the processor may correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain corrected measurement data.
  • the processor can correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data, that is, update the inertial measurement data from the posture at the initial moment to the posture at the first intermediate moment to obtain the corrected measurement data. It is understandable that the corrected measurement data can have the accuracy of the first movement data.
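The split between rotation (integrated over the whole interval) and displacement (integrated only over the early samples) can be sketched as follows. Re-referencing the later samples to the pose reached at the first intermediate moment is expressed here simply as carrying the intermediate position and velocity forward; the patent does not spell out the exact correction, so this is an assumed interpretation.

```python
import numpy as np

def split_and_correct(gyro, accel, dt, first_fraction=0.4):
    """gyro, accel: (N, 3) IMU samples between the initial and target moments."""
    rotation = gyro.sum(axis=0) * dt              # first-order integral: rotation data
    k = max(1, int(len(accel) * first_fraction))  # samples up to the first intermediate moment
    velocity = np.cumsum(accel[:k], axis=0) * dt  # velocity over the more reliable early part
    first_movement = velocity.sum(axis=0) * dt    # second-order integral: first movement data
    corrected_measurement = {                     # later samples, referenced to the intermediate pose
        "start_position": first_movement,
        "start_velocity": velocity[-1],
        "accel": accel[k:],
    }
    return rotation, first_movement, corrected_measurement
```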
  • the processor may determine second movement data of the virtual reality device based on a preset head prediction model and the corrected measurement data; the second movement data is used as relative posture data.
  • a preset head prediction model is stored in the virtual reality device.
  • the preset head prediction model divides the user into a torso and a head: referring to Fig. 6, part AB represents the torso and part BC represents the head above the neck. The posture change of the user can thus be divided into two parts, the movement of the torso and the rotation of the head; that is, the head translates together with the torso, and the head itself contributes only rotation (including the rotation of the head itself and the rotation of the head caused by the rotation of the torso), where L represents the movement of the torso and q represents the rotation of the head.
  • the virtual reality device can call the preset head prediction model and use the above-mentioned corrected measurement data as its input data.
  • the preset head prediction model processes the corrected measurement data to obtain the movement data of the virtual reality device, which is subsequently referred to as the second movement data for distinction.
  • the second movement data is obtained from the corrected measurement data, so it represents the movement (displacement) of the head at the target moment relative to the initial moment. Therefore, in this embodiment the second movement data and the rotation data together serve as the relative posture data of the head at the target moment relative to the initial moment, namely the displacement and the rotation angle of the head.
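A toy version of the torso/head decomposition is sketched below: the head point translates with the torso and additionally moves because a neck segment of assumed length `neck_len` rotates. Everything here (the small-angle geometry, parameter names, the 0.25 m neck length) is an illustrative assumption rather than the patent's model.

```python
import numpy as np

def head_movement(torso_translation, head_rotation_rpy, neck_len=0.25):
    """Second movement data of the head: torso translation L plus the
    displacement of the head point caused by rotating the neck segment by q."""
    roll, yaw, pitch = head_rotation_rpy
    # Small-angle displacement of the top of the neck segment due to rotation
    rotation_offset = neck_len * np.array([np.sin(pitch), -np.sin(roll), 0.0])
    return np.asarray(torso_translation, dtype=float) + rotation_offset

# Example: torso moved 2 cm forward, head pitched down by 0.1 rad over the interval
second_movement = head_movement([0.02, 0.0, 0.0], (0.0, 0.0, -0.1))
```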
  • step 12 a target posture of the virtual reality device at a target moment is acquired according to the reference posture data and the relative posture data.
  • the processor can obtain the target posture of the virtual reality device at the target moment according to the reference posture data and the relative posture data, referring to FIG. 7 , which includes steps 71 to 74 .
  • the processor may acquire the state data at the target moment according to a preset uniform motion model and the reference posture data.
  • a preset uniform motion model is stored in the virtual reality device, and the uniform motion model is used to predict the posture of the virtual reality device at different moments under the assumption of uniform motion. Therefore, the processor can use the reference posture data as input data of the preset uniform motion model.
  • the preset uniform motion model can process the reference posture data to obtain the state data at the target time.
  • the processor may superimpose the reference posture data and the relative posture data to obtain observation data of the virtual reality device. It is understandable that superimposing the reference posture data and the relative posture data refers to combining the data dimension by dimension.
  • the processor can obtain the respective weight values of the state data and the observation data according to the cumulative error of the virtual reality device.
  • the processor can calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target time.
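The final weighting can be illustrated with a simple convex combination: the state predicted by the uniform motion model and the observation (reference posture superimposed with the relative posture) are blended with weights derived from the accumulated error. The mapping from accumulated error to weights below is an assumption for illustration.

```python
import numpy as np

def fuse_target_posture(state_pose, observed_pose, accumulated_error, error_scale=1.0):
    """All poses are 6-vectors (x, y, z, roll, yaw, pitch). The larger the
    accumulated drift, the more weight the observation receives."""
    w_obs = accumulated_error / (accumulated_error + error_scale)   # in (0, 1)
    w_state = 1.0 - w_obs
    return w_state * np.asarray(state_pose) + w_obs * np.asarray(observed_pose)

# Observation = reference posture superimposed (added dimension by dimension) with the relative posture
reference = np.array([0.00, 0.0, 1.60, 0.0, 0.10, 0.0])
relative  = np.array([0.01, 0.0, 0.00, 0.0, 0.05, 0.0])
observation = reference + relative
state = np.array([0.005, 0.0, 1.60, 0.0, 0.12, 0.0])
target_posture = fuse_target_posture(state, observation, accumulated_error=0.5)
```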
  • the effects of the first positioning data, the reference posture data, and the target posture are shown in Figure 8.
  • the blue line B represents the first positioning data
  • the yellow line Y represents the reference posture data
  • the red line R represents the target posture. It can be seen that the solution provided in this embodiment can achieve a better matching degree and the curve is relatively smooth.
  • the processor may send the target posture to the rendering pipeline for use.
  • the solution provided in the embodiment of the present disclosure can obtain the reference posture data and relative posture data of the virtual reality device; the target posture of the virtual reality device at the target moment is obtained according to the reference posture data and the relative posture data, so as to achieve the effect of synchronizing the user action with the picture and improve the user experience.
  • the block diagram shown in FIG. 9 includes a SLAM algorithm module, an inertial measurement module, an ESKF filter module, a posture prediction module, and a prediction filter module.
  • the SLAM algorithm module can process the images collected by the camera to obtain six-dimensional positioning data, that is, the first positioning data mentioned above.
  • the SLAM algorithm module can combine the semi-direct SLAM algorithm with the sliding window algorithm and run the two in parallel threads, thereby ensuring both the efficiency of the semi-direct SLAM algorithm and the calculation accuracy of the sliding window algorithm.
  • the inertial measurement module can obtain the positioning data of the virtual reality device through a pre-integration algorithm, that is, the second positioning data mentioned above.
  • the inertial measurement module can obtain the inertial measurement data and perform pre-integration using a median integration method or RK4 (i.e., a fourth-order Runge–Kutta method) to obtain first movement data or rotation data.
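As an illustration of the midpoint (median) integration mentioned for the pre-integration step, the sketch below averages consecutive angular-velocity samples before integrating; an RK4 or quaternion-based variant would replace the small-angle update used here. This is a hedged sketch, not the patent's implementation.

```python
import numpy as np

def midpoint_integrate_rotation(gyro, dt):
    """gyro: (N, 3) angular-velocity samples taken every dt seconds.
    Returns the accumulated rotation (rad) under a small-angle approximation."""
    theta = np.zeros(3)
    for w_prev, w_next in zip(gyro[:-1], gyro[1:]):
        theta += 0.5 * (w_prev + w_next) * dt   # midpoint rule for each interval
    return theta
```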
  • the ESKF filtering module can obtain the first positioning data and the second positioning data for ESKF filtering fusion to obtain reference posture data.
  • the reference posture data is relatively smooth, which can eliminate positioning jitter and avoid image jitter in the subsequent display process.
  • the posture prediction module can obtain the relative posture data of the virtual reality device between the initial moment and the target moment according to the inertial detection data and the above-mentioned reference posture data.
  • the prediction filtering module can obtain the target posture of the virtual reality device at the target moment according to the preset uniform motion model, the above relative posture data, and the reference posture data.
  • the present embodiment can filter the first positioning data and the second positioning data, so as to make the curve corresponding to the reference posture data smoother and eliminate posture jitter.
  • the present embodiment can reduce the delay effect of the virtual reality device, reduce the dizziness caused by the image display delay, and improve the user experience.
  • an embodiment of the present disclosure further provides a posture acquisition device, which is applicable to a virtual reality device.
  • the device includes:
  • a reference posture acquisition module 101 is used to acquire reference posture data and relative posture data of the virtual reality device
  • a relative posture acquisition module 102 is used to acquire relative posture data of the virtual reality device
  • the target posture acquisition module 103 is used to acquire the target posture of the virtual reality device at a target moment according to the reference posture data and the relative posture data.
  • the reference posture acquisition module includes:
  • a first positioning submodule used to position the virtual reality device according to the image data to obtain first positioning data
  • a second positioning submodule used to position the virtual reality device according to the inertial measurement data to obtain second positioning data
  • the reference posture acquisition submodule is used to fuse the first positioning data and the second positioning data to obtain the reference posture data.
  • the first positioning submodule includes:
  • An image data acquisition unit used to acquire image data between the second intermediate moment and the initial moment
  • the first positioning acquisition unit is used to position the virtual reality device according to the image data to obtain first positioning data.
  • the first positioning acquisition unit includes:
  • a fast positioning subunit used for processing the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device;
  • a precise positioning subunit used for processing the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;
  • a correction positioning subunit used to correct the rapid positioning data using the precise positioning data to obtain corrected positioning data
  • the first positioning subunit is used to determine the rapid positioning data, the precise positioning data or the corrected positioning data as the first positioning data.
  • the second positioning submodule includes:
  • An inertial measurement unit used to obtain inertial measurement data between a second intermediate moment and an initial moment; the second intermediate moment is earlier than the initial moment;
  • the second positioning unit is used to integrate the inertial measurement data to obtain second positioning data of the virtual reality device.
  • the relative posture acquisition module includes:
  • An inertial data acquisition submodule used to acquire inertial measurement data of the virtual reality device between an initial moment and a target moment
  • a correction data acquisition submodule used to correct the inertial measurement data to obtain corrected measurement data
  • the relative posture acquisition submodule is used to determine the second movement data of the virtual reality device based on the preset head prediction model and the corrected measurement data; the second movement data is used as the relative posture data.
  • the inertial data acquisition submodule includes:
  • a raw data acquisition unit used to acquire raw inertial measurement data after the initial moment from a specified position of the virtual reality device
  • the inertial data acquisition unit is used to, when the original inertial measurement data does not include the data at the target time, expand the measurement data to the target time based on the original inertial measurement data to obtain the inertial measurement data between the initial time and the target time.
  • the correction data acquisition submodule includes:
  • a rotation data acquisition unit used for acquiring rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data; the rotation data is used as relative posture data;
  • a movement data acquisition unit configured to acquire first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data; the first intermediate moment is between the initial moment and the target moment;
  • a correction data acquisition unit is used to correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain corrected measurement data.
  • the target posture acquisition module includes:
  • a state data acquisition submodule used to acquire the state data at the target moment according to a preset uniform motion model and the reference posture data
  • An observation data acquisition submodule used for superimposing the reference posture data and the relative posture data to obtain observation data of the virtual reality device
  • a weight value acquisition submodule used for acquiring respective weight values of the state data and the observation data according to the accumulated error of the virtual reality device
  • the target posture acquisition submodule is used to calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment.
  • the device embodiment shown in this embodiment matches the content of the above method embodiment, and the content of the above method embodiment can be referred to, which will not be repeated here.
  • Fig. 11 is a block diagram of a virtual reality device according to an exemplary embodiment.
  • the virtual reality device 1100 may be a smart phone, a computer, a digital broadcast terminal, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the virtual reality device 1100 may include one or more of the following components: a processing component 1102 , a memory 1104 , a power component 1106 , a multimedia component 1108 , an audio component 1110 , an input/output (I/O) interface 1112 , a sensor component 1114 , a communication component 1116 , and an image acquisition component 1118 .
  • the processing component 1102 generally controls the overall operation of the virtual reality device 1100, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 1102 may include one or more processors 1120 to execute computer programs.
  • the processing component 1102 may include one or more modules to facilitate interaction between the processing component 1102 and other components.
  • the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
  • the memory 1104 is configured to store various types of data to support operations on the virtual reality device 1100. Examples of such data include computer programs for any application or method operating on the virtual reality device 1100, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 1104 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • the power supply component 1106 provides power to various components of the virtual reality device 1100.
  • the power supply component 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the virtual reality device 1100.
  • the power supply component 1106 may include a power supply chip, and the controller may communicate with the power supply chip to control the power supply chip to turn a switch device on or off, so that the battery does or does not supply power to the mainboard circuit.
  • the multimedia component 1108 includes a screen that provides an output interface between the virtual reality device 1100 and the target object.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input information from the target object.
  • the touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
  • the audio component 1110 is configured to output and/or input audio file information.
  • the audio component 1110 includes a microphone (MIC), and when the virtual reality device 1100 is in an operation mode, such as a call mode, a recording mode, and a speech recognition mode, the microphone is configured to receive external audio file information.
  • the received audio file information can be further stored in the memory 1104 or sent via the communication component 1116.
  • the audio component 1110 also includes a speaker for outputting audio file information.
  • the I/O interface 1112 provides an interface between the processing component 1102 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc.
  • the sensor component 1114 includes one or more sensors for providing various aspects of status assessment for the virtual reality device 1100.
  • the sensor component 1114 can detect the open/closed state of the virtual reality device 1100, the relative positioning of components, such as the display screen and keypad of the virtual reality device 1100, and the sensor component 1114 can also detect the position change of the virtual reality device 1100 or a component, the presence or absence of contact between the target object and the virtual reality device 1100, the orientation or acceleration/deceleration of the virtual reality device 1100, and the temperature change of the virtual reality device 1100.
  • the sensor component 1114 may include a magnetic sensor, a gyroscope, and a magnetic field sensor, wherein the magnetic field sensor includes at least one of the following: a Hall sensor, a thin film magnetoresistive sensor, and a magnetic liquid acceleration sensor.
  • the communication component 1116 is configured to facilitate wired or wireless communication between the virtual reality device 1100 and other devices.
  • the virtual reality device 1100 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
  • the communication component 1116 receives broadcast information or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 1116 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the virtual reality device 1100 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components.
  • a virtual reality device comprising:
  • the memory is used to store a computer program executable by the processor
  • the processor is used to execute the computer program in the memory to implement the above method.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1104; the executable computer program described above can be executed by a processor.
  • the readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a posture acquisition method, an apparatus, a virtual reality device, and a readable storage medium. The method comprises: acquiring reference posture data and relative posture data of a virtual reality device; and acquiring, according to the reference posture data and the relative posture data, a target posture of the virtual reality device at a target moment. The user's actions and the displayed picture are thereby synchronized, improving the user experience.

Description

Posture acquisition method and apparatus, virtual reality device, and readable storage medium

Technical Field

The present disclosure relates to the field of data processing technology, and in particular to a posture acquisition method and device, virtual reality equipment, and a readable storage medium.

Background

In order to make the brain feel "immersive", a virtual reality (VR) device needs to reduce the time between a head movement and the corresponding change on the retina, so that the wearer's visual perception and motion perception match in time. If they do not match in time, for example if the wearer turns around but the picture does not turn in time, the wearer will feel uncomfortable, which degrades the user experience.

Summary of the Invention

The present disclosure provides a posture acquisition method and device, a virtual reality device, and a readable storage medium to address the deficiencies of the related technologies.

According to a first aspect of the embodiments of the present disclosure, a posture acquisition method is provided, which is applied to a virtual reality device, and the method includes:

acquiring reference posture data and relative posture data of the virtual reality device;

acquiring the target posture of the virtual reality device at the target moment according to the reference posture data and the relative posture data.

Optionally, obtaining the reference posture data of the virtual reality device includes:

positioning the virtual reality device according to image data to obtain first positioning data;

positioning the virtual reality device according to inertial measurement data to obtain second positioning data;

fusing the first positioning data and the second positioning data to obtain the reference posture data.

Optionally, positioning the virtual reality device according to the image data to obtain first positioning data includes:

acquiring image data between a second intermediate moment and an initial moment;

positioning the virtual reality device according to the image data to obtain the first positioning data.

Optionally, positioning the virtual reality device according to the image data to obtain first positioning data includes:

processing the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device;

processing the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;

correcting the fast positioning data using the precise positioning data to obtain corrected positioning data;

determining the fast positioning data, the precise positioning data or the corrected positioning data as the first positioning data.

Optionally, positioning the virtual reality device according to the inertial measurement data to obtain second positioning data includes:

acquiring inertial measurement data between a second intermediate moment and an initial moment, the second intermediate moment being earlier than the initial moment;

integrating the inertial measurement data to obtain the second positioning data of the virtual reality device.

Optionally, obtaining the relative posture data of the virtual reality device includes:

acquiring inertial measurement data of the virtual reality device between an initial moment and a target moment;

correcting the inertial measurement data to obtain corrected measurement data;

determining second movement data of the virtual reality device based on a preset head prediction model and the corrected measurement data, the second movement data being used as the relative posture data.

Optionally, obtaining the inertial measurement data of the virtual reality device between the initial moment and the target moment includes:

acquiring raw inertial measurement data after the initial moment from a specified position of the virtual reality device;

when the raw inertial measurement data does not include data at the target moment, extending the measurement data to the target moment based on the raw inertial measurement data to obtain the inertial measurement data between the initial moment and the target moment.

Optionally, correcting the inertial measurement data to obtain corrected measurement data includes:

acquiring rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data, the rotation data being used as relative posture data;

acquiring first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data, the first intermediate moment being between the initial moment and the target moment;

correcting the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain the corrected measurement data.

Optionally, acquiring the target posture of the virtual reality device at the target moment according to the reference posture data and the relative posture data includes:

acquiring state data at the target moment according to a preset uniform motion model and the reference posture data;

superimposing the reference posture data and the relative posture data to obtain observation data of the virtual reality device;

acquiring respective weight values of the state data and the observation data according to the accumulated error of the virtual reality device;

calculating a weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment.

According to a second aspect of the embodiments of the present disclosure, a posture acquisition apparatus is provided, which is applied to a virtual reality device, and the apparatus includes:

a reference posture acquisition module, configured to acquire reference posture data and relative posture data of the virtual reality device;

a relative posture acquisition module, configured to acquire relative posture data of the virtual reality device;

a target posture acquisition module, configured to acquire the target posture of the virtual reality device at a target moment according to the reference posture data and the relative posture data.

Optionally, the reference posture acquisition module includes:

a first positioning submodule, configured to position the virtual reality device according to image data to obtain first positioning data;

a second positioning submodule, configured to position the virtual reality device according to inertial measurement data to obtain second positioning data;

a reference posture acquisition submodule, configured to fuse the first positioning data and the second positioning data to obtain the reference posture data.

Optionally, the first positioning submodule includes:

an image data acquisition unit, configured to acquire image data between a second intermediate moment and an initial moment;

a first positioning acquisition unit, configured to position the virtual reality device according to the image data to obtain the first positioning data.

Optionally, the first positioning acquisition unit includes:

a fast positioning subunit, configured to process the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device;

a precise positioning subunit, configured to process the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;

a correction positioning subunit, configured to correct the fast positioning data using the precise positioning data to obtain corrected positioning data;

a first positioning subunit, configured to determine the fast positioning data, the precise positioning data or the corrected positioning data as the first positioning data.

Optionally, the second positioning submodule includes:

an inertial measurement unit, configured to acquire inertial measurement data between a second intermediate moment and an initial moment, the second intermediate moment being earlier than the initial moment;

a second positioning unit, configured to integrate the inertial measurement data to obtain the second positioning data of the virtual reality device.

Optionally, the relative posture acquisition module includes:

an inertial data acquisition submodule, configured to acquire inertial measurement data of the virtual reality device between an initial moment and a target moment;

a correction data acquisition submodule, configured to correct the inertial measurement data to obtain corrected measurement data;

a relative posture acquisition submodule, configured to determine second movement data of the virtual reality device based on a preset head prediction model and the corrected measurement data, the second movement data being used as the relative posture data.

Optionally, the inertial data acquisition submodule includes:

a raw data acquisition unit, configured to acquire raw inertial measurement data after the initial moment from a specified position of the virtual reality device;

an inertial data acquisition unit, configured to, when the raw inertial measurement data does not include data at the target moment, extend the measurement data to the target moment based on the raw inertial measurement data to obtain the inertial measurement data between the initial moment and the target moment.

Optionally, the correction data acquisition submodule includes:

a rotation data acquisition unit, configured to acquire rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data, the rotation data being used as relative posture data;

a movement data acquisition unit, configured to acquire first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data, the first intermediate moment being between the initial moment and the target moment;

a correction data acquisition unit, configured to correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain the corrected measurement data.

Optionally, the target posture acquisition module includes:

a state data acquisition submodule, configured to acquire state data at the target moment according to a preset uniform motion model and the reference posture data;

an observation data acquisition submodule, configured to superimpose the reference posture data and the relative posture data to obtain observation data of the virtual reality device;

a weight value acquisition submodule, configured to acquire respective weight values of the state data and the observation data according to the accumulated error of the virtual reality device;
目标姿态获取子模块，用于根据所述状态数据、所述观测数据以及各自的权重值计算加权值，得到所述虚拟现实设备在所述目标时刻的目标姿态。The target posture acquisition submodule is used to calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment.
根据本公开实施例的第三方面,提供一种虚拟现实设备,包括:According to a third aspect of an embodiment of the present disclosure, a virtual reality device is provided, including:
存储器与处理器;Memory and processor;
所述存储器用于存储所述处理器可执行的计算机程序;The memory is used to store a computer program executable by the processor;
所述处理器用于执行所述存储器中的计算机程序,以实现如上述的方法。The processor is used to execute the computer program in the memory to implement the above method.
根据本公开实施例的第四方面,提供一种非暂态计算机可读存储介质,当所述存储介质中的可执行的计算机程序由处理器执行时,能够实现如上述的方法。According to a fourth aspect of an embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided, and when an executable computer program in the storage medium is executed by a processor, the method as described above can be implemented.
本公开的实施例提供的技术方案可以包括以下有益效果:The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects:
本公开实施例提供的方案中可以获取所述虚拟现实设备的参考姿态数据和相对姿态数据;根据所述参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在目标时刻的目标姿态,达到用户动作与画面同步的效果,提升用户使用体验。In the solution provided by the embodiment of the present disclosure, reference posture data and relative posture data of the virtual reality device can be obtained; the target posture of the virtual reality device at the target moment is obtained according to the reference posture data and the relative posture data, so as to achieve the effect of synchronizing the user action with the picture and improve the user experience.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
图1是根据一示例性实施例示出的一种姿态获取方法的流程图。Fig. 1 is a flow chart showing a method for acquiring a posture according to an exemplary embodiment.
图2是根据一示例性实施例示出的一种获取参考姿态数据的流程图。Fig. 2 is a flow chart showing a method of acquiring reference posture data according to an exemplary embodiment.
图3是根据一示例性实施例示出的一种获取第一定位数据的流程图。Fig. 3 is a flow chart showing a method of acquiring first positioning data according to an exemplary embodiment.
图4是根据一示例性实施例示出的另一种获取第一定位数据的流程图。Fig. 4 is a flowchart showing another method of acquiring first positioning data according to an exemplary embodiment.
图5是根据一示例性实施例示出的一种获取相对姿态数据的流程图。Fig. 5 is a flow chart showing a method of acquiring relative posture data according to an exemplary embodiment.
图6是根据一示例性实施例示出的一种预设头部预测模型的示意图。Fig. 6 is a schematic diagram showing a preset head prediction model according to an exemplary embodiment.
图7是根据一示例性实施例示出的一种获取目标姿态的流程图。Fig. 7 is a flow chart showing a method of acquiring a target posture according to an exemplary embodiment.
图8是根据一示例性实施例示出的一种第一定位数据、参考姿态数据和目标姿态的示意图。Fig. 8 is a schematic diagram showing first positioning data, reference posture data and target posture according to an exemplary embodiment.
图9是根据一示例性实施例示出的一种姿态获取方法的框图。Fig. 9 is a block diagram showing a method for acquiring a posture according to an exemplary embodiment.
图10是根据一示例性实施例示出的一种姿态获取装置的框图。Fig. 10 is a block diagram showing a posture acquisition device according to an exemplary embodiment.
图11是根据一示例性实施例示出的一种虚拟现实设备的框图。Fig. 11 is a block diagram of a virtual reality device according to an exemplary embodiment.
具体实施方式Detailed ways
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性所描述的实施例并不代表与本公开相一致的所有实施例。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置例子。需要说明的是,在不冲突的情况下,下述的实施例及实施方式中的特征可以相互组合。Exemplary embodiments will be described in detail herein, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The embodiments described exemplarily below do not represent all embodiments consistent with the present disclosure. Instead, they are merely examples of devices consistent with some aspects of the present disclosure as detailed in the attached claims. It should be noted that the features in the following embodiments and implementations may be combined with each other without conflict.
为了使大脑体会到“身临其境”效果,虚拟现实设备即VR设备需要减少头部移动和在视网膜上产生正确变化之间的时间,从而使得佩戴者身体的视觉感知和运动感知及 时匹配。如果视觉感知和运动感知未及时匹配,例如佩戴者转身后而画面没有及时转过来,导致佩戴者产生不适感,影响使用体验。In order to make the brain feel "immersive", virtual reality devices, or VR devices, need to reduce the time between head movement and the correct changes on the retina, so that the wearer's visual perception and motion perception are matched in time. If visual perception and motion perception are not matched in time, for example, the wearer turns around but the picture does not turn in time, the wearer will feel uncomfortable, affecting the user experience.
为解决上述技术问题,本公开实施例提供了一种姿态获取方法和装置、虚拟现实设备、可读存储介质,可以适用于虚拟现实设备;该虚拟现实设备可以包括摄像头和惯性检测单元。该摄像头可以采集虚拟现实设备的指定方向的图像,后续也称之为图像数据;该惯性检测单元可以按照设定周期采集该虚拟现实设备的惯性检测数据。其中该惯性检测数据可以包括六个维度的角速度和加速度。上述六个维度是指空间直角坐标系下XYZ轴方向以及Roll方向、Yaw方向和Pitch方向。In order to solve the above technical problems, the embodiments of the present disclosure provide a posture acquisition method and device, a virtual reality device, and a readable storage medium, which can be applied to the virtual reality device; the virtual reality device may include a camera and an inertial detection unit. The camera can collect images of the virtual reality device in a specified direction, which will be referred to as image data later; the inertial detection unit can collect inertial detection data of the virtual reality device according to a set period. The inertial detection data may include angular velocity and acceleration in six dimensions. The above six dimensions refer to the XYZ axis direction and the Roll direction, Yaw direction and Pitch direction in the spatial rectangular coordinate system.
图1是根据一示例性实施例示出的一种姿态获取方法,参见图1,一种姿态获取方法,包括步骤11~步骤12。FIG. 1 shows a posture acquisition method according to an exemplary embodiment. Referring to FIG. 1 , the posture acquisition method includes steps 11 and 12 .
在步骤11中,获取所述虚拟现实设备的参考姿态数据和相对姿态数据。In step 11, reference posture data and relative posture data of the virtual reality device are obtained.
本实施例中,虚拟现实设备中处理器可以获取所述虚拟现实设备在初始时刻的参考姿态数据以及获取所述虚拟现实设备在所述初始时刻和目标时刻之间的惯性测量数据。例如,虚拟现实设备中处理器可以按照实时或者按照设定周期获取各个时刻的参考姿态数据。为方便描述,后续各实施例中将上述各个时刻称之为初始时刻,该初始时刻作为各个实施例中处理数据的起始时刻。在一示例中,处理器获取初始时刻的参考姿态数据,参见图2,包括步骤21~步骤23。In this embodiment, the processor in the virtual reality device can obtain the reference posture data of the virtual reality device at the initial moment and obtain the inertial measurement data of the virtual reality device between the initial moment and the target moment. For example, the processor in the virtual reality device can obtain the reference posture data at each moment in real time or according to a set period. For the convenience of description, the above-mentioned moments are referred to as the initial moment in the subsequent embodiments, and the initial moment is used as the starting moment for processing data in each embodiment. In one example, the processor obtains the reference posture data at the initial moment, see Figure 2, including steps 21 to 23.
在步骤21中,处理器可以根据图像数据对所述虚拟现实设备进行定位,得到第一定位数据。In step 21, the processor may position the virtual reality device according to the image data to obtain first positioning data.
参见图3,在步骤31中,处理器可以获取第二中间时刻到初始时刻之间的图像数据。Referring to FIG. 3 , in step 31 , the processor may acquire image data between the second intermediate moment and the initial moment.
本步骤中,处理器可以获取第二中间时刻到初始时刻之间的图像数据。例如虚拟现实设备中摄像头可以按照设定周期采集图像数据并存储到指定位置,处理器可以从指定位置读取上述图像数据。又如,处理器可以与上述摄像头进行通信,实时获取摄像头输出的图像数据。In this step, the processor may obtain image data between the second intermediate moment and the initial moment. For example, a camera in a virtual reality device may collect image data according to a set period and store it in a specified location, and the processor may read the image data from the specified location. For another example, the processor may communicate with the camera to obtain image data output by the camera in real time.
在步骤32中,处理器可以根据所述图像数据对所述虚拟现实设备进行定位,得到第一定位数据。In step 32, the processor may position the virtual reality device according to the image data to obtain first positioning data.
本步骤中,虚拟现实设备内存储预设的半直接法SLAM算法,参见图4,获取第一定位数据包括步骤41~步骤44。In this step, a preset semi-direct SLAM algorithm is stored in the virtual reality device. Referring to FIG. 4 , obtaining the first positioning data includes steps 41 to 44 .
在步骤41中,处理器可以基于预设的半直接法SLAM算法处理所述图像数据,得到所述虚拟现实设备对应的快速定位数据。即处理器可以根据相邻两帧图像数据定位出虚拟现实设备的位置,得到快速定位数据。In step 41, the processor can process the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device. That is, the processor can locate the position of the virtual reality device based on two adjacent frames of image data to obtain fast positioning data.
在步骤42中,处理器可以基于预设的滑动窗口算法处理所述图像数据,得到所述虚拟现实设备对应的精准定位数据。可理解的是,上述半直接法SLAM算法获取定位数据的速度快于滑动窗口算法获取定位数据的速度,但是滑动窗口算法获取定位数据的精确度高于上述半直接法SLAM算法的精确度。并且可以为后续获取目标姿态提供准确的初始值。In step 42, the processor can process the image data based on a preset sliding window algorithm to obtain accurate positioning data corresponding to the virtual reality device. It is understandable that the speed at which the semi-direct SLAM algorithm acquires positioning data is faster than the speed at which the sliding window algorithm acquires positioning data, but the accuracy of the sliding window algorithm in acquiring positioning data is higher than the accuracy of the semi-direct SLAM algorithm. And it can provide an accurate initial value for the subsequent acquisition of the target posture.
在步骤43中,处理器可以利用所述精准定位数据校正所述快速定位数据,得到校正定位数据。本步骤中,处理器获取精准定位数据的速度慢于上述半直接法SLAM算法的速度。因此,当精准定位数据获得之后,处理器可以利用该精准定位数据来校正上述快速定位数据,既可以利用快速定位数据来快速获取后续的目标姿态,也可以利用精准定位数据来获取准确的后续的目标姿态,或者也可以利用精准定位数据来校正快速定位数据,使校正定位数据的精确度高于快速定位数据。In step 43, the processor can use the precise positioning data to correct the rapid positioning data to obtain corrected positioning data. In this step, the speed at which the processor obtains the precise positioning data is slower than the speed of the semi-direct SLAM algorithm. Therefore, after the precise positioning data is obtained, the processor can use the precise positioning data to correct the rapid positioning data, and can use the rapid positioning data to quickly obtain the subsequent target posture, or use the precise positioning data to obtain the accurate subsequent target posture, or can use the precise positioning data to correct the rapid positioning data, so that the accuracy of the corrected positioning data is higher than that of the rapid positioning data.
在步骤44中，处理器可以确定所述快速定位数据、所述精准定位数据或者所述校正定位数据作为所述第一定位数据。本实施例中，处理器在需要获取第一定位数据时，可以检测是否有精准定位数据。当检测到指定位置有精准定位数据时可以使用该精准定位数据来作为第一定位数据；当未检测到精准定位数据时可以获取快速定位数据作为第一定位数据。在使用第一定位数据获取参考姿态数据的过程中，当获取到精准定位数据后则校正第一定位数据，后续则使用准确程度较高的校正定位数据继续获取第一定位数据，提升第一定位数据的准确度。In step 44, the processor may determine the fast positioning data, the precise positioning data or the corrected positioning data as the first positioning data. In this embodiment, when the first positioning data needs to be obtained, the processor may detect whether there is precise positioning data. When it is detected that there is precise positioning data at the specified position, the precise positioning data may be used as the first positioning data; when no precise positioning data is detected, fast positioning data may be obtained as the first positioning data. In the process of using the first positioning data to obtain reference posture data, the first positioning data is corrected after the precise positioning data is obtained, and subsequently the corrected positioning data with a higher degree of accuracy is used to continue to obtain the first positioning data, thereby improving the accuracy of the first positioning data.
可理解的是,上述第一定位数据包括虚拟现实设备在六个维度上的位置。It is understandable that the above-mentioned first positioning data includes the position of the virtual reality device in six dimensions.
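As an illustration of how the selection in step 44 could be organized, the following Python sketch prefers the precise positioning data when it is available, falls back to the fast positioning data otherwise, and uses a precise result to correct the drift of the fast track. The 6-dimensional pose vectors, the function names and the offset-based correction rule are assumptions made for this example, not the implementation of the present disclosure.

import numpy as np

def correct_fast_positioning(latest_fast_pose, precise_pose, fast_pose_same_frame):
    # Remove the drift of the fast (semi-direct SLAM) track by shifting the
    # latest fast pose with the offset between the precise (sliding-window)
    # pose and the fast pose computed for the same earlier frame.
    # Adding 6-D pose vectors element-wise is a simplification; the rotational
    # part would normally be composed on SO(3).
    offset = np.asarray(precise_pose) - np.asarray(fast_pose_same_frame)
    return np.asarray(latest_fast_pose) + offset

def select_first_positioning(latest_fast_pose, precise_pose=None, fast_pose_same_frame=None):
    # Step 44: the fast data, the precise data or the corrected data is taken
    # as the first positioning data, depending on what is currently available.
    if precise_pose is None:
        return np.asarray(latest_fast_pose)     # only the fast track has produced a result
    if fast_pose_same_frame is None:
        return np.asarray(precise_pose)         # precise result is current, use it directly
    return correct_fast_positioning(latest_fast_pose, precise_pose, fast_pose_same_frame)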
在步骤22中,处理器可以根据惯性测量数据对所述虚拟现实设备进行定位,得到第二定位数据。In step 22, the processor may position the virtual reality device according to the inertial measurement data to obtain second positioning data.
本步骤中，处理器可以获取第二中间时刻到初始时刻之间的惯性测量数据。例如虚拟现实设备中惯性检测单元可以按照预设周期采集惯性检测数据并存储到指定位置，处理器可以从指定位置读取上述惯性检测数据。又如，处理器可以与上述惯性检测单元进行通信，实时获取惯性检测单元输出的惯性检测数据。In this step, the processor may obtain the inertial measurement data between the second intermediate moment and the initial moment. For example, the inertial detection unit in the virtual reality device may collect the inertial detection data according to a preset period and store it in a specified location, and the processor may read the inertial detection data from the specified location. For another example, the processor may communicate with the inertial detection unit to obtain the inertial detection data output by the inertial detection unit in real time.
本步骤中，虚拟现实设备内存储预设的预积分算法。处理器可以调用上述预积分算法，并将上述惯性测量数据作为上述预积分算法的输入数据。上述预积分算法可以对惯性测量数据进行积分处理，获得虚拟现实设备的定位数据，后续称之为第二定位数据，以区别于上述第一定位数据。可理解的是，上述第二定位数据包括虚拟现实设备在六个维度上的位置。In this step, a preset pre-integration algorithm is stored in the virtual reality device. The processor may call the pre-integration algorithm and use the inertial measurement data as input data of the pre-integration algorithm. The pre-integration algorithm may integrate the inertial measurement data to obtain positioning data of the virtual reality device, which is hereinafter referred to as second positioning data to distinguish it from the first positioning data. It is understandable that the second positioning data includes the position of the virtual reality device in six dimensions.
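As a minimal sketch of the integration idea only (the actual pre-integration algorithm additionally handles biases, gravity removal and frame rotation), the following Python function accumulates a pose change from gyroscope and accelerometer samples; the (gyro, accel) sample format and the simple Euler integration are assumptions.

import numpy as np

def integrate_imu(samples, dt):
    # samples: iterable of (gyro_xyz, accel_xyz) tuples measured at a fixed period dt (s).
    # Returns (rotation, position): rotation is the first-order integral of the
    # angular rate (small-angle approximation) and position is the second-order
    # integral of the acceleration, both expressed in the starting frame.
    rotation = np.zeros(3)
    velocity = np.zeros(3)
    position = np.zeros(3)
    for gyro, accel in samples:
        rotation += np.asarray(gyro, dtype=float) * dt
        velocity += np.asarray(accel, dtype=float) * dt
        position += velocity * dt
    return rotation, position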
在步骤23中,处理器可以融合所述第一定位数据和所述第二定位数据,得到所述参考姿态数据。In step 23, the processor may fuse the first positioning data and the second positioning data to obtain the reference posture data.
本步骤中,处理器可以融合第一定位数据和第二定位数据得到虚拟现实设备的姿态数据,后续称之为参考姿态数据。In this step, the processor can fuse the first positioning data and the second positioning data to obtain the posture data of the virtual reality device, which is hereinafter referred to as reference posture data.
考虑到步骤21中获取第一定位数据的过程中是采用相邻两帧图像来实现定位的,因此相邻两个第一定位数据会存在抖动或者说相邻两个第一定位数据并不是平滑的。如果以该第一定位数据为基础渲染图像,则会造成图像出现抖动,影响到观看体验。Considering that the process of obtaining the first positioning data in step 21 uses two adjacent frames of images to achieve positioning, there will be jitter between the two adjacent first positioning data or the two adjacent first positioning data are not smooth. If the image is rendered based on the first positioning data, the image will be jittery, affecting the viewing experience.
考虑到步骤22中获取第二定位数据的过程中采用积分方式来定位虚拟现实设备,并且惯性检测数据中包括加速度和角速度,那么采用加速度计算虚拟现实设备的移动(即位移)时是对加速度的平方进行积分,那么惯性检测数据越多,积分结果的准确性会降低。Considering that the virtual reality device is positioned by integration in the process of acquiring the second positioning data in step 22, and the inertial detection data includes acceleration and angular velocity, the square of the acceleration is integrated when the movement (i.e., displacement) of the virtual reality device is calculated by using acceleration. The more inertial detection data there is, the lower the accuracy of the integration result will be.
基于上述分析，本步骤中处理器可以融合第一定位数据和第二定位数据，例如融合方式可以采用ESKF算法实现，从而得到融合后的定位数据，后续称之为参考姿态数据。例如，ESKF（Error-state Kalman Filter）算法的输入数据为第一定位数据和第二定位数据。首先，ESKF算法利用中值积分对获得的第二定位数据进行处理，求解系统名义状态量；然后，利用第二定位数据解算误差状态量的协方差矩阵，结合SLAM算法输出的第一定位数据，利用KF估计系统的误差状态量；最后利用求解的误差状态量修正名义状态量，得到滤波平滑后的定位数据，即得到参考姿态数据。Based on the above analysis, in this step, the processor can fuse the first positioning data and the second positioning data. For example, the fusion method can be implemented using the ESKF algorithm to obtain the fused positioning data, which is subsequently referred to as the reference posture data. For example, the input data of the ESKF (Error-state Kalman Filter) algorithm is the first positioning data and the second positioning data. First, the ESKF algorithm uses the median integral to process the obtained second positioning data to solve the nominal state quantity of the system; then, the covariance matrix of the error state quantity is solved using the second positioning data, combined with the first positioning data output by the SLAM algorithm, and the error state quantity of the system is estimated using KF. Finally, the nominal state quantity is corrected using the solved error state quantity to obtain the filtered and smoothed positioning data, that is, the reference posture data.
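The heavily simplified, one-dimensional Python sketch below only illustrates the predict/correct structure described above: a nominal state is propagated from the IMU-derived second positioning data, the error state is estimated from the SLAM-derived first positioning data, and the estimated error is then injected back into the nominal state. The scalar state, the noise values and the class interface are assumptions; a real ESKF operates on the full pose and bias state.

import numpy as np

class SimpleESKF:
    def __init__(self, process_noise=1e-3, measurement_noise=1e-2):
        self.P = np.eye(1)                        # error-state covariance (1-D example)
        self.Q = process_noise * np.eye(1)        # uncertainty added by IMU integration
        self.R = measurement_noise * np.eye(1)    # uncertainty of the SLAM pose
        self.nominal = np.zeros(1)                # nominal state (position along one axis)

    def predict(self, imu_delta):
        # Propagate the nominal state with the integrated IMU increment.
        self.nominal = self.nominal + imu_delta
        self.P = self.P + self.Q

    def correct(self, slam_position):
        # Estimate the error state from the SLAM measurement and fold it back.
        H = np.eye(1)
        K = self.P @ H.T @ np.linalg.inv(H @ self.P @ H.T + self.R)
        error = K @ (np.atleast_1d(slam_position) - self.nominal)
        self.nominal = self.nominal + error       # correct the nominal state
        self.P = (np.eye(1) - K @ H) @ self.P
        return self.nominal

A typical call sequence would be predict() for each integrated IMU increment followed by correct() whenever a SLAM result arrives, mirroring the smoothing role described for the reference posture data.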
参见图5,在步骤51中,处理器可以获取所述虚拟现实设备在所述初始时刻和目标时刻之间的惯性测量数据。其中,上述目标时刻是虚拟现实设备渲染图像时所需要显示图像对应的时刻,可理解的为渲染图像的上屏时刻。考虑到目标时刻一般远大于初始时刻,例如目标时刻与初始时刻的差值为20~60ms。并且,惯性检测单元通常获取不到目标时刻的惯性检测数据,假设惯性检测单元仅检测到第一中间时刻,该第一中间时刻位于初始时刻与目标时刻之间。Referring to FIG. 5 , in step 51, the processor may obtain inertial measurement data of the virtual reality device between the initial moment and the target moment. The target moment is the moment corresponding to the image that needs to be displayed when the virtual reality device renders the image, which can be understood as the moment when the rendered image is on the screen. Considering that the target moment is generally much greater than the initial moment, for example, the difference between the target moment and the initial moment is 20 to 60 ms. Moreover, the inertial detection unit usually cannot obtain the inertial detection data of the target moment. It is assumed that the inertial detection unit only detects the first intermediate moment, which is between the initial moment and the target moment.
此时，处理器可以从虚拟现实设备的指定位置获取初始时刻之后的原始惯性测量数据。处理器可以检测上述原始惯性测量数据是否包括目标时刻的数据，当检测到上述原始惯性测量数据未包括目标时刻的数据时，处理器可以基于上述原始惯性测量数据扩展测量数据至目标时刻，扩展方式包括但不限于插值填充、机器学习等。例如，初始时刻为10ms，目标时刻为40ms，第一中间时刻为35ms，那么处理器可以基于第10ms~第35ms之间的原始惯性测量数据预测出第35ms~第40ms之间的惯性检测数据。这样，处理器可以获取到初始时刻和目标时刻之间的惯性测量数据，保证目标姿态预测过程顺利实现。At this time, the processor can obtain the original inertial measurement data after the initial moment from the specified position of the virtual reality device. The processor can detect whether the above-mentioned original inertial measurement data includes data at the target moment. When it is detected that the above-mentioned original inertial measurement data does not include data at the target moment, the processor can expand the measurement data to the target moment based on the above-mentioned original inertial measurement data, and the expansion method includes but is not limited to interpolation filling, machine learning, etc. For example, the initial moment is 10ms, the target moment is 40ms, and the first intermediate moment is 35ms, then the processor can predict the inertial detection data between 35ms and 40ms based on the original inertial measurement data between 10ms and 35ms. In this way, the processor can obtain the inertial measurement data between the initial moment and the target moment to ensure the smooth implementation of the target posture prediction process.
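As one hedged example of the "interpolation filling" option mentioned above, the sketch below linearly extrapolates the last two raw IMU samples until the target moment is covered; the sample layout, the millisecond timestamps and the linear model are assumptions for illustration.

import numpy as np

def extend_imu_to_target(timestamps_ms, samples, target_ms, period_ms):
    # timestamps_ms: sample times in ms (at least two, strictly increasing);
    # samples: matching list of 6-D IMU vectors (angular velocity + acceleration).
    ts = list(timestamps_ms)
    data = [np.asarray(s, dtype=float) for s in samples]
    slope = (data[-1] - data[-2]) / float(ts[-1] - ts[-2])   # per-ms rate of change
    while ts[-1] < target_ms:
        next_t = min(ts[-1] + period_ms, target_ms)
        data.append(data[-1] + slope * (next_t - ts[-1]))    # linear extrapolation
        ts.append(next_t)
    return ts, data

With the example above (real samples up to 35 ms, target at 40 ms), the function appends extrapolated samples until the 40 ms mark is reached.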
本步骤中,惯性测量数据中包括角速度,转动数据与上述角速度为一阶积分关系,因此,处理器可以根据初始时刻和目标时刻之间的惯性测量数据积分出虚拟现实设备在初始时刻和目标时刻之间的转动数据。考虑到虚拟现实设备通常佩戴在用户的头部,因此上述转动数据可以理解为头部的转动角度,例如头部在第10ms~第40ms之间转动了90度。In this step, the inertial measurement data includes angular velocity, and the rotation data is in a first-order integral relationship with the angular velocity. Therefore, the processor can integrate the rotation data of the virtual reality device between the initial moment and the target moment based on the inertial measurement data between the initial moment and the target moment. Considering that the virtual reality device is usually worn on the user's head, the above rotation data can be understood as the rotation angle of the head, for example, the head rotates 90 degrees between 10ms and 40ms.
本步骤中惯性测量数据中包括加速度,移动数据与加速度为二阶积分关系,因此,随着积分惯性检测数据的增多,移动数据的准确度也随之降低。因此,本步骤中仅使用初始时刻与目标时刻之间前半部分惯性检测数据进行积分,以保证移动数据的准确度。其中,前半部分可以按照一定比例(如30%~50%)来确定。为方便理解,本步骤中直接对初始时刻与第一中间时刻之间的惯性检测数据进行积分来获取虚拟现实设备在初始时刻到第一中间时刻之间的第一移动数据。In this step, the inertial measurement data includes acceleration, and the movement data and acceleration are in a second-order integral relationship. Therefore, as the integrated inertial detection data increases, the accuracy of the movement data also decreases. Therefore, in this step, only the first half of the inertial detection data between the initial moment and the target moment is used for integration to ensure the accuracy of the movement data. Among them, the first half can be determined according to a certain proportion (such as 30% to 50%). For ease of understanding, in this step, the inertial detection data between the initial moment and the first intermediate moment is directly integrated to obtain the first movement data of the virtual reality device between the initial moment and the first intermediate moment.
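A small Python sketch of the two paragraphs above, assuming the same (gyro, accel) sample format as earlier: rotation integrates all samples between the initial and target moments, while displacement double-integrates only the first portion of the samples (here 40%, within the 30%–50% range mentioned) to limit the growth of the second-order integration error.

import numpy as np

def rotation_and_partial_displacement(samples, dt, ratio=0.4):
    # samples: list of (gyro_xyz, accel_xyz) between the initial and target moments.
    rotation = np.zeros(3)
    for gyro, _ in samples:
        rotation += np.asarray(gyro, dtype=float) * dt        # first-order integral
    cutoff = max(1, int(len(samples) * ratio))                 # first portion only
    velocity = np.zeros(3)
    displacement = np.zeros(3)
    for _, accel in samples[:cutoff]:
        velocity += np.asarray(accel, dtype=float) * dt
        displacement += velocity * dt                          # second-order integral
    return rotation, displacement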
在步骤52中,处理器可以根据所述第一移动数据校正所述第一中间时刻与所述目标时刻之间的惯性测量数据,得到校正测量数据。In step 52, the processor may correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain corrected measurement data.
本实施例中,处理器可以根据第一移动数据校正第一中间时刻与目标时刻之间的惯性测量数据,即将惯性测量数据从初始时刻的姿态更新为第一中间时刻的姿态,得到校正测量数据。可理解的是,校正测量数据可以具有第一移动数据的准确度。In this embodiment, the processor can correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data, that is, update the inertial measurement data from the posture at the initial moment to the posture at the first intermediate moment to obtain the corrected measurement data. It is understandable that the corrected measurement data can have the accuracy of the first movement data.
在步骤53中,处理器可以基于预设头部预测模型和所述校正测量数据确定所述虚拟现实设备的第二移动数据;所述第二移动数据作为相对姿态数据。In step 53, the processor may determine second movement data of the virtual reality device based on a preset head prediction model and the corrected measurement data; the second movement data is used as relative posture data.
本实施例中，虚拟现实设备中存储预设头部预测模型。该预设头部预测模型是将用户划分为躯干部分和头部部分。参见图6，其中AB部分表示躯干部分，BC部分表示颈部以上的头部部分，这样用户的姿态变化可以拆分为两部分：躯干的移动和头部的转动，即头部的移动是跟随躯干部分的移动而变化的，而头部部分仅决定转动部分（包括头部自身转动和躯干部分转动引起的头部部分转动），其中L表示躯干的移动，q表示头部的转动。因此，本实施例中虚拟现实设备可以调用预设头部预测模型，并将上述校正测量数据作为该预设头部预测模型的输入数据。该预设头部预测模型可以处理上述校正测量数据并获取虚拟现实设备的移动数据，后续称之为第二移动数据以示区别。可理解的是，第二移动数据是在校正测量数据的基础上获得的，因此其可以表示头部部分在目标时刻相对于初始时刻（在位移上）的移动变化量。因此，本实施例中可以将第二移动数据和转动数据作为头部部分在目标时刻相对于初始时刻的相对姿态数据，包括头部部分的位移和转动角度。In this embodiment, a preset head prediction model is stored in the virtual reality device. The preset head prediction model divides the user into a torso part and a head part. Referring to FIG. 6, the AB part represents the torso, and the BC part represents the head part above the neck, so that the posture change of the user can be divided into two parts: the movement of the torso and the rotation of the head, that is, the movement of the head changes with the movement of the torso, and the head part only determines the rotation part (including the rotation of the head itself and the rotation of the head part caused by the rotation of the torso), wherein L represents the movement of the torso, and q represents the rotation of the head. Therefore, in this embodiment, the virtual reality device can call the preset head prediction model, and use the above-mentioned corrected measurement data as the input data of the preset head prediction model. The preset head prediction model can process the above-mentioned corrected measurement data and obtain the movement data of the virtual reality device, which is subsequently referred to as the second movement data for distinction. It can be understood that the second movement data is obtained on the basis of the corrected measurement data, so it can represent the movement change of the head part at the target moment relative to the initial moment (in displacement). Therefore, in this embodiment, the second movement data and the rotation data can be used as the relative posture data of the head part at the target moment relative to the initial moment, including the displacement and rotation angle of the head part.
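One possible reading of the torso/head decomposition of Fig. 6, sketched in Python: the device displacement (the second movement data) is modelled as the torso translation L plus the displacement of point C caused by rotating the neck-to-head segment BC by q about the neck joint B. The rotation-vector representation, the Rodrigues helper and the neck_vector parameter are assumptions made for illustration, not the preset head prediction model itself.

import numpy as np

def rotvec_to_matrix(rotvec):
    # Rodrigues' formula: rotation vector (axis * angle, rad) -> 3x3 rotation matrix.
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rotvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def head_displacement(torso_translation, head_rotation_rotvec, neck_vector):
    # torso_translation: L, movement of the torso (point B).
    # head_rotation_rotvec: q, rotation of the head segment BC.
    # neck_vector: vector from the neck joint B to the device at C in the initial posture.
    R = rotvec_to_matrix(head_rotation_rotvec)
    return np.asarray(torso_translation) + (R - np.eye(3)) @ np.asarray(neck_vector)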
在步骤12中,根据所述参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在目标时刻的目标姿态。In step 12, a target posture of the virtual reality device at a target moment is acquired according to the reference posture data and the relative posture data.
本实施例中,处理器可以根据参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在所述目标时刻的目标姿态,参见图7,包括步骤71~步骤74。In this embodiment, the processor can obtain the target posture of the virtual reality device at the target moment according to the reference posture data and the relative posture data, referring to FIG. 7 , which includes steps 71 to 74 .
在步骤71中,处理器可以根据预设匀速运动模型和所述参考姿态数据获取所述目标时刻的状态数据。In step 71, the processor may acquire the state data at the target moment according to a preset uniform motion model and the reference posture data.
本步骤中，虚拟现实设备内存储预设匀速运动模型，该匀速运动模型是指按照匀速运动来预测虚拟现实设备在不同时刻的姿态。因此，处理器可以将参考姿态数据作为上述预设匀速运动模型的输入数据。上述预设匀速运动模型可以处理上述参考姿态数据，以获得目标时刻的状态数据。In this step, a preset uniform motion model is stored in the virtual reality device, and the uniform motion model is used to predict the posture of the virtual reality device at different times according to uniform motion. Therefore, the processor can use the reference posture data as input data of the preset uniform motion model. The preset uniform motion model can process the reference posture data to obtain the state data at the target time.
在步骤72中，处理器可以叠加所述参考姿态数据和所述相对姿态数据，得到所述虚拟现实设备的观测数据。可理解的是，参考姿态数据和相对姿态数据叠加是指在各个维度上分别对两者的数据进行叠加计算。In step 72, the processor may superimpose the reference posture data and the relative posture data to obtain observation data of the virtual reality device. It is understandable that superimposing the reference posture data and the relative posture data means combining the two sets of data dimension by dimension.
在步骤73中，处理器可以根据所述虚拟现实设备的累积误差获取所述状态数据和所述观测数据的各自的权重值。在运动过程中虚拟现实设备的实际姿态与预测姿态会存在一定的偏差，运动一段时间后上述偏差会累积，即在一个周期的误差的基础上继续叠加新的误差，后续称之为累积误差。可理解的是，上述累积误差包括状态数据引起的误差和观测数据引起的误差，分别称之为状态误差和观测误差。因此，处理器可以获取状态数据和观测数据的各自的权重值。例如累积误差为C，状态误差为A，观测误差为B，那么状态误差的权重值a=A/C，观测误差的权重值b=B/C。In step 73, the processor can obtain the respective weight values of the state data and the observation data according to the cumulative error of the virtual reality device. During the movement process, there will be a certain deviation between the actual posture and the predicted posture of the virtual reality device. After a period of movement, the above deviation will accumulate, that is, a new error will continue to be superimposed on the basis of a cycle of errors, which will be referred to as the cumulative error later. It can be understood that the above cumulative error includes the error caused by the state data and the error caused by the observation data, which are respectively referred to as the state error and the observation error. Therefore, the processor can obtain the respective weight values of the state data and the observation data. For example, if the cumulative error is C, the state error is A, and the observation error is B, then the weight value of the state error is a=A/C, and the weight value of the observation error is b=B/C.
在步骤74中，处理器可以根据所述状态数据、所述观测数据以及各自的权重值计算加权值，得到所述虚拟现实设备在所述目标时刻的目标姿态。本步骤中，处理器可以计算出目标姿态，即C1=a*A+b*B（此处A、B分别代指状态数据和观测数据）。可理解的是，本步骤中通过设置预设匀速模型对相对姿态数据进行滤波，可以稳定预测时长和预测精度。In step 74, the processor can calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment. In this step, the processor can calculate the target posture, that is, C1 = a*A + b*B (where A and B here stand for the state data and the observation data, respectively). It can be understood that in this step, by setting a preset uniform speed model to filter the relative posture data, the prediction duration and prediction accuracy can be stabilized.
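For illustration, the sketch below reproduces the weighting rule exactly as stated in steps 73 and 74: each weight is that source's share of the accumulated error, and the target posture is the weighted sum of the state data and the observation data. Whether proportional or inverse-error weights work better in practice is left open here, and the vector form of the data is an assumption.

import numpy as np

def fuse_state_and_observation(state_data, observation_data, state_error, observation_error):
    # state_data: posture predicted by the preset uniform motion model;
    # observation_data: reference posture superimposed with the relative posture;
    # the two error terms together form the accumulated error C.
    accumulated = state_error + observation_error
    a = state_error / accumulated           # weight attributed to the state data
    b = observation_error / accumulated     # weight attributed to the observation data
    return a * np.asarray(state_data) + b * np.asarray(observation_data)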
本实施例中，第一定位数据、参考姿态数据、目标姿态的效果如图8所示。参见图8，蓝色线B表示第一定位数据，黄色线Y表示参考姿态数据，红色线R表示目标姿态，可见本实施例提供的方案可以实现较好的匹配度并且曲线比较平滑。In this embodiment, the effects of the first positioning data, the reference posture data and the target posture are shown in Figure 8. Referring to Figure 8, the blue line B represents the first positioning data, the yellow line Y represents the reference posture data, and the red line R represents the target posture. It can be seen that the solution provided in this embodiment can achieve a good matching degree and the curves are relatively smooth.
在一实施例中,处理器在计算出虚拟现实设备的目标姿态之后,可以将该目标姿态发送给渲染管线使用。In one embodiment, after calculating the target pose of the virtual reality device, the processor may send the target pose to the rendering pipeline for use.
至此,本公开实施例提供的方案中可以获取所述虚拟现实设备的参考姿态数据和相对姿态数据;根据所述参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在目标时刻的目标姿态,达到用户动作与画面同步的效果,提升用户使用体验。At this point, the solution provided in the embodiment of the present disclosure can obtain the reference posture data and relative posture data of the virtual reality device; the target posture of the virtual reality device at the target moment is obtained according to the reference posture data and the relative posture data, so as to achieve the effect of synchronizing the user action with the picture and improve the user experience.
下面结合附图描述本公开实施例提供的一种姿态获取方法的方案,参见图9,包括SLAM算法模块、惯性测量模块、ESKF滤波模块、位姿预测模块和预测滤波模块。其中,The following describes a posture acquisition method provided by an embodiment of the present disclosure in conjunction with the accompanying drawings, referring to FIG9 , which includes a SLAM algorithm module, an inertial measurement module, an ESKF filter module, a posture prediction module, and a prediction filter module.
SLAM算法模块可以对摄像头采集的图像进行处理，得到六维度的定位数据，即上述的第一定位数据。SLAM算法模块可以将半直接法SLAM算法与滑动窗口算法相结合，并且将两者各自置入一个并行线程处理，从而既保证半直接法SLAM算法的效率，又保证计算精度。The SLAM algorithm module can process the images collected by the camera to obtain six-dimensional positioning data, that is, the first positioning data mentioned above. The SLAM algorithm module can combine the semi-direct SLAM algorithm with the sliding window algorithm and place each of them in a parallel thread for processing, thereby ensuring both the efficiency of the semi-direct SLAM algorithm and the calculation accuracy.
惯性测量模块可以通过预积分算法获取虚拟现实设备的定位数据,即上述的第二定位数据。该惯性测量模块可以获取惯性测量数据,并采用中值积分方法或者RK4(即四阶Runge–Kutta方法)进行预积分,得到第一移动数据或者转动数据。The inertial measurement module can obtain the positioning data of the virtual reality device through a pre-integration algorithm, that is, the second positioning data mentioned above. The inertial measurement module can obtain the inertial measurement data and perform pre-integration using a median integration method or RK4 (i.e., a fourth-order Runge–Kutta method) to obtain first movement data or rotation data.
ESKF滤波模块可以获取第一定位数据和第二定位数据进行ESKF滤波融合,得到参考姿态数据。该参考姿态数据较为平滑,可以消除定位抖动,避免后续显示过程中图像的抖动。The ESKF filtering module can obtain the first positioning data and the second positioning data for ESKF filtering fusion to obtain reference posture data. The reference posture data is relatively smooth, which can eliminate positioning jitter and avoid image jitter in the subsequent display process.
位姿预测模块可以根据惯性检测数据和上述参考姿态数据获取虚拟现实设备在初始时刻与目标时刻之间的相对姿态数据。The posture prediction module can obtain the relative posture data of the virtual reality device between the initial moment and the target moment according to the inertial detection data and the above-mentioned reference posture data.
预测滤波模块可以根据预设匀速模型和上述相对姿态数据、参考姿态数据获取虚拟现实设备在目标时刻的目标姿态。The prediction filtering module can obtain the target posture of the virtual reality device at the target time according to the preset uniform speed model and the above relative posture data and reference posture data.
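To summarize how the five modules of Fig. 9 hand data to one another, the following Python sketch wires placeholder objects together; every class and method name here is a stand-in invented for the example rather than a real API.

def acquire_target_pose(images, imu_samples, slam, imu_module, eskf, pose_predictor, prediction_filter):
    first_positioning = slam.process(images)                               # SLAM algorithm module
    second_positioning = imu_module.preintegrate(imu_samples)              # inertial measurement module
    reference_pose = eskf.fuse(first_positioning, second_positioning)      # ESKF filter module
    relative_pose = pose_predictor.predict(imu_samples, reference_pose)    # pose prediction module
    target_pose = prediction_filter.filter(reference_pose, relative_pose)  # prediction filter module
    return target_pose                                                     # handed to the rendering pipeline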
这样,本实施例可以对第一定位数据和第二定位数据进行滤波处理,从而使参考姿态数据对应的曲线更平滑,消除位姿抖动。并且,本实施例中通过预测目标姿态,可以降低虚拟现实设备的延迟效果,降低图像显示延迟带来的眩晕感,提升使用体验。In this way, the present embodiment can filter the first positioning data and the second positioning data, so as to make the curve corresponding to the reference posture data smoother and eliminate posture jitter. In addition, by predicting the target posture, the present embodiment can reduce the delay effect of the virtual reality device, reduce the dizziness caused by the image display delay, and improve the user experience.
在本公开实施例提供的一种姿态获取方法的基础上,本公开实施例还提供了一种姿态获取装置,适用于虚拟现实设备,参见图10,所述装置包括:Based on a posture acquisition method provided in an embodiment of the present disclosure, an embodiment of the present disclosure further provides a posture acquisition device, which is applicable to a virtual reality device. Referring to FIG. 10 , the device includes:
参考姿态获取模块101,用于获取所述虚拟现实设备的参考姿态数据和相对姿态数据;A reference posture acquisition module 101 is used to acquire reference posture data and relative posture data of the virtual reality device;
相对姿态获取模块102,用于获取所述虚拟现实设备的相对姿态数据;A relative posture acquisition module 102 is used to acquire relative posture data of the virtual reality device;
目标姿态获取模块103,用于根据所述参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在目标时刻的目标姿态。The target posture acquisition module 103 is used to acquire the target posture of the virtual reality device at a target moment according to the reference posture data and the relative posture data.
可选地,所述参考姿态获取模块包括:Optionally, the reference posture acquisition module includes:
第一定位子模块,用于根据图像数据对所述虚拟现实设备进行定位,得到第一定位数据;A first positioning submodule, used to position the virtual reality device according to the image data to obtain first positioning data;
第二定位子模块,用于根据惯性测量数据对所述虚拟现实设备进行定位,得到第二定位数据;A second positioning submodule, used to position the virtual reality device according to the inertial measurement data to obtain second positioning data;
参考姿态获取子模块,用于融合所述第一定位数据和所述第二定位数据,得到所述参考姿态数据。The reference posture acquisition submodule is used to fuse the first positioning data and the second positioning data to obtain the reference posture data.
可选地,所述第一定位子模块包括:Optionally, the first positioning submodule includes:
图像数据获取单元,用于获取第二中间时刻到初始时刻之间的图像数据;An image data acquisition unit, used to acquire image data between the second intermediate moment and the initial moment;
第一定位获取单元,用于根据所述图像数据对所述虚拟现实设备进行定位,得到第一定位数据。The first positioning acquisition unit is used to position the virtual reality device according to the image data to obtain first positioning data.
可选地,所述第一定位获取单元包括:Optionally, the first positioning acquisition unit includes:
快速定位子单元,用于基于预设的半直接法SLAM算法处理所述图像数据,得到所述虚拟现实设备对应的快速定位数据;A fast positioning subunit, used for processing the image data based on a preset semi-direct SLAM algorithm to obtain fast positioning data corresponding to the virtual reality device;
精准定位子单元,用于基于预设的滑动窗口算法处理所述图像数据,得到所述虚拟现实设备对应的精准定位数据;A precise positioning subunit, used for processing the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;
校正定位子单元，用于利用所述精准定位数据校正所述快速定位数据，得到校正定位数据；A correction positioning subunit, used to correct the rapid positioning data using the precise positioning data to obtain corrected positioning data;
第一定位子单元,用于确定所述快速定位数据、所述精准定位数据或者所述校正定位数据作为所述第一定位数据。The first positioning subunit is used to determine the rapid positioning data, the precise positioning data or the corrected positioning data as the first positioning data.
可选地,所述第二定位子模块包括:Optionally, the second positioning submodule includes:
惯性测量单元,用于获取第二中间时刻到初始时刻之间的惯性测量数据;所述第二中间时刻早于所述初始时刻;An inertial measurement unit, used to obtain inertial measurement data between a second intermediate moment and an initial moment; the second intermediate moment is earlier than the initial moment;
第二定位单元,用于对所述惯性测量数据进行积分,得到所述虚拟现实设备的第二定位数据。The second positioning unit is used to integrate the inertial measurement data to obtain second positioning data of the virtual reality device.
可选地,所述相对姿态获取模块包括:Optionally, the relative posture acquisition module includes:
惯性数据获取子模块,用于获取所述虚拟现实设备在初始时刻和目标时刻之间的惯性测量数据;An inertial data acquisition submodule, used to acquire inertial measurement data of the virtual reality device between an initial moment and a target moment;
校正数据获取子模块,用于对所述惯性测量数据进行校正,得到校正测量数据;A correction data acquisition submodule, used to correct the inertial measurement data to obtain corrected measurement data;
相对姿态获取子模块,用于基于预设头部预测模型和所述校正测量数据确定所述虚拟现实设备的第二移动数据;所述第二移动数据作为相对姿态数据。The relative posture acquisition submodule is used to determine the second movement data of the virtual reality device based on the preset head prediction model and the corrected measurement data; the second movement data is used as the relative posture data.
可选地,所述惯性数据获取子模块包括:Optionally, the inertial data acquisition submodule includes:
原始数据获取单元,用于从所述虚拟现实设备的指定位置获取所述初始时刻之后的原始惯性测量数据;A raw data acquisition unit, used to acquire raw inertial measurement data after the initial moment from a specified position of the virtual reality device;
惯性数据获取单元,用于当所述原始惯性测量数据未包括所述目标时刻的数据时,基于所述原始惯性测量数据扩展测量数据至所述目标时刻,得到所述初始时刻和目标时刻之间的惯性测量数据。The inertial data acquisition unit is used to, when the original inertial measurement data does not include the data at the target time, expand the measurement data to the target time based on the original inertial measurement data to obtain the inertial measurement data between the initial time and the target time.
可选地,所述校正数据获取子模块包括:Optionally, the correction data acquisition submodule includes:
转动数据获取单元,用于根据所述惯性测量数据获取所述虚拟现实设备在所述初始时刻和所述目标时刻之间的转动数据;所述转动数据作为相对姿态数据;A rotation data acquisition unit, used for acquiring rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data; the rotation data is used as relative posture data;
移动数据获取单元,用于根据所述惯性测量数据获取所述虚拟现实设备在所述初始时刻到第一中间时刻之间的第一移动数据;所述第一中间时刻位于所述初始时刻和所述目标时刻之间;A movement data acquisition unit, configured to acquire first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data; the first intermediate moment is between the initial moment and the target moment;
校正数据获取单元，用于根据所述第一移动数据校正所述第一中间时刻与所述目标时刻之间的惯性测量数据，得到校正测量数据。A correction data acquisition unit is used to correct the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain corrected measurement data.
可选地,所述目标姿态获取模块包括:Optionally, the target posture acquisition module includes:
状态数据获取子模块,用于根据预设匀速运动模型和所述参考姿态数据获取所述目标时刻的状态数据;A state data acquisition submodule, used to acquire the state data at the target moment according to a preset uniform motion model and the reference posture data;
观测数据获取子模块,用于叠加所述参考姿态数据和所述相对姿态数据,得到所述虚拟现实设备的观测数据;An observation data acquisition submodule, used for superimposing the reference posture data and the relative posture data to obtain observation data of the virtual reality device;
权重值获取子模块,用于根据所述虚拟现实设备的累积误差获取所述状态数据和所述观测数据的各自的权重值;A weight value acquisition submodule, used for acquiring respective weight values of the state data and the observation data according to the accumulated error of the virtual reality device;
目标姿态获取子模块，用于根据所述状态数据、所述观测数据以及各自的权重值计算加权值，得到所述虚拟现实设备在所述目标时刻的目标姿态。The target posture acquisition submodule is used to calculate the weighted value according to the state data, the observation data and the respective weight values to obtain the target posture of the virtual reality device at the target moment.
需要说明的是,本实施例中示出的装置实施例与上述方法实施例的内容相匹配,可以参考上述方法实施例的内容,在此不再赘述。It should be noted that the device embodiment shown in this embodiment matches the content of the above method embodiment, and the content of the above method embodiment can be referred to, which will not be repeated here.
图11是根据一示例性实施例示出的一种虚拟现实设备的框图。例如,虚拟现实设备1100可以是智能手机,计算机,数字广播终端,平板设备,医疗设备,健身设备,个人数字助理等。Fig. 11 is a block diagram of a virtual reality device according to an exemplary embodiment. For example, the virtual reality device 1100 may be a smart phone, a computer, a digital broadcast terminal, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
参照图11,虚拟现实设备1100可以包括以下一个或多个组件:处理组件1102,存储器1104,电源组件1106,多媒体组件1108,音频组件1110,输入/输出(I/O)的接口1112,传感器组件1114,通信组件1116,图像采集组件1118。11 , the virtual reality device 1100 may include one or more of the following components: a processing component 1102 , a memory 1104 , a power component 1106 , a multimedia component 1108 , an audio component 1110 , an input/output (I/O) interface 1112 , a sensor component 1114 , a communication component 1116 , and an image acquisition component 1118 .
处理组件1102通常控制虚拟现实设备1100的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件1102可以包括一个或多个处理器1120来执行计算机程序。此外,处理组件1102可以包括一个或多个模块,便于处理组件1102和其他组件之间的交互。例如,处理组件1102可以包括多媒体模块,以方便多媒体组件1108和处理组件1102之间的交互。The processing component 1102 generally controls the overall operation of the virtual reality device 1100, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing component 1102 may include one or more processors 1120 to execute computer programs. In addition, the processing component 1102 may include one or more modules to facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
存储器1104被配置为存储各种类型的数据以支持在虚拟现实设备1100的操作。这些数据的示例包括用于在虚拟现实设备1100上操作的任何应用程序或方法的计算机程序，联系人数据，电话簿数据，消息，图片，视频等。存储器1104可以由任何类型的易失性或非易失性存储设备或者它们的组合实现，如静态随机存取存储器（SRAM），电可擦除可编程只读存储器（EEPROM），可擦除可编程只读存储器（EPROM），可编程只读存储器（PROM），只读存储器（ROM），磁存储器，快闪存储器，磁盘或光盘。The memory 1104 is configured to store various types of data to support operations on the virtual reality device 1100. Examples of such data include computer programs for any application or method operating on the virtual reality device 1100, contact data, phone book data, messages, pictures, videos, etc. The memory 1104 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
电源组件1106为虚拟现实设备1100的各种组件提供电力。电源组件1106可以包括电源管理系统，一个或多个电源，及其他与为虚拟现实设备1100生成、管理和分配电力相关联的组件。电源组件1106可以包括电源芯片，控制器可以与电源芯片通信，从而控制电源芯片导通或者断开开关器件，使电池向主板电路供电或者不供电。The power supply component 1106 provides power to various components of the virtual reality device 1100. The power supply component 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the virtual reality device 1100. The power supply component 1106 may include a power supply chip, and the controller may communicate with the power supply chip to control the power supply chip to turn on or off a switch device, so that the battery supplies power to the mainboard circuit or not.
多媒体组件1108包括在虚拟现实设备1100和目标对象之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示屏(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自目标对象的输入信息。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与触摸或滑动操作相关的持续时间和压力。The multimedia component 1108 includes a screen that provides an output interface between the virtual reality device 1100 and the target object. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input information from the target object. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
音频组件1110被配置为输出和/或输入音频文件信息。例如,音频组件1110包括一个麦克风(MIC),当虚拟现实设备1100处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频文件信息。所接收的音频文件信息可以被进一步存储在存储器1104或经由通信组件1116发送。在一些实施例中,音频组件1110还包括一个扬声器,用于输出音频文件信息。The audio component 1110 is configured to output and/or input audio file information. For example, the audio component 1110 includes a microphone (MIC), and when the virtual reality device 1100 is in an operation mode, such as a call mode, a recording mode, and a speech recognition mode, the microphone is configured to receive external audio file information. The received audio file information can be further stored in the memory 1104 or sent via the communication component 1116. In some embodiments, the audio component 1110 also includes a speaker for outputting audio file information.
I/O接口1112为处理组件1102和***接口模块之间提供接口,上述***接口模块可以是键盘,点击轮,按钮等。The I/O interface 1112 provides an interface between the processing component 1102 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc.
传感器组件1114包括一个或多个传感器,用于为虚拟现实设备1100提供各个方面的状态评估。例如,传感器组件1114可以检测到虚拟现实设备1100的打开/关闭状态,组件的相对定位,例如组件为虚拟现实设备1100的显示屏和小键盘,传感器组件1114还可以检测虚拟现实设备1100或一个组件的位置改变,目标对象与虚拟现实设备1100接触的存在或不存在,虚拟现实设备1100方位或加速/减速和虚拟现实设备1100的温度变化。本示例中,传感器组件1114可以包括磁力传感器、陀螺仪和磁场传感器,其中磁场传感器包括以下至少一种:霍尔传感器、薄膜磁致电阻传感器、磁性液体加速度传感器。The sensor component 1114 includes one or more sensors for providing various aspects of status assessment for the virtual reality device 1100. For example, the sensor component 1114 can detect the open/closed state of the virtual reality device 1100, the relative positioning of components, such as the display screen and keypad of the virtual reality device 1100, and the sensor component 1114 can also detect the position change of the virtual reality device 1100 or a component, the presence or absence of contact between the target object and the virtual reality device 1100, the orientation or acceleration/deceleration of the virtual reality device 1100, and the temperature change of the virtual reality device 1100. In this example, the sensor component 1114 may include a magnetic sensor, a gyroscope, and a magnetic field sensor, wherein the magnetic field sensor includes at least one of the following: a Hall sensor, a thin film magnetoresistive sensor, and a magnetic liquid acceleration sensor.
通信组件1116被配置为便于虚拟现实设备1100和其他设备之间有线或无线方式的通信。虚拟现实设备1100可以接入基于通信标准的无线网络，如WiFi，2G、3G、4G、5G，或它们的组合。在一个示例性实施例中，通信组件1116经由广播信道接收来自外部广播管理系统的广播信息或广播相关信息。在一个示例性实施例中，通信组件1116还包括近场通信（NFC）模块，以促进短程通信。例如，NFC模块可基于射频识别（RFID）技术，红外数据协会（IrDA）技术，超宽带（UWB）技术，蓝牙（BT）技术和其他技术来实现。The communication component 1116 is configured to facilitate wired or wireless communication between the virtual reality device 1100 and other devices. The virtual reality device 1100 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component 1116 receives broadcast information or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1116 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
在示例性实施例中，虚拟现实设备1100可以被一个或多个应用专用集成电路（ASIC）、数字信号处理器（DSP）、数字信号处理设备（DSPD）、可编程逻辑器件（PLD）、现场可编程门阵列（FPGA）、控制器、微控制器、微处理器或其他电子元件实现。In an exemplary embodiment, the virtual reality device 1100 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components.
在示例性实施例中,还提供了一种虚拟现实设备,包括:In an exemplary embodiment, a virtual reality device is also provided, comprising:
存储器与处理器;Memory and processor;
所述存储器用于存储所述处理器可执行的计算机程序;The memory is used to store a computer program executable by the processor;
所述处理器用于执行所述存储器中的计算机程序,以实现如上述的方法。The processor is used to execute the computer program in the memory to implement the above method.
在示例性实施例中,还提供了一种非暂态计算机可读存储介质,例如包括指令的存储器1104,上述可执行的计算机程序可由处理器执行。其中,可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory 1104 including instructions, and the above executable computer program can be executed by a processor. The readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
本领域技术人员在考虑说明书及实践这里公开的公开后,将容易想到本公开的其它实施方案。本公开旨在涵盖任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。Those skilled in the art will readily appreciate other embodiments of the present disclosure after considering the specification and practicing the disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow the general principles of the present disclosure and include common knowledge or customary techniques in the art that are not disclosed in the present disclosure. The description and examples are to be considered exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。It should be understood that the present disclosure is not limited to the exact structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

  1. 一种姿态获取方法,其特征在于,应用于虚拟现实设备,所述方法包括:A posture acquisition method, characterized in that it is applied to a virtual reality device, and the method comprises:
    获取所述虚拟现实设备的参考姿态数据和相对姿态数据;Acquiring reference posture data and relative posture data of the virtual reality device;
    根据所述参考姿态数据和所述相对姿态数据获取所述虚拟现实设备在目标时刻的目标姿态。The target posture of the virtual reality device at the target moment is acquired according to the reference posture data and the relative posture data.
  2. 根据权利要求1所述的方法,其特征在于,获取所述虚拟现实设备的参考姿态数据,包括:The method according to claim 1, characterized in that obtaining the reference posture data of the virtual reality device comprises:
    根据图像数据对所述虚拟现实设备进行定位,得到第一定位数据;Positioning the virtual reality device according to the image data to obtain first positioning data;
    根据惯性测量数据对所述虚拟现实设备进行定位,得到第二定位数据;Positioning the virtual reality device according to the inertial measurement data to obtain second positioning data;
    融合所述第一定位数据和所述第二定位数据,得到所述参考姿态数据。The first positioning data and the second positioning data are fused to obtain the reference posture data.
  3. 根据权利要求2所述的方法,其特征在于,根据图像数据对所述虚拟现实设备进行定位,得到第一定位数据,包括:The method according to claim 2, characterized in that positioning the virtual reality device according to the image data to obtain the first positioning data comprises:
    获取第二中间时刻到初始时刻之间的图像数据;Acquire image data between the second intermediate moment and the initial moment;
    根据所述图像数据对所述虚拟现实设备进行定位,得到第一定位数据。The virtual reality device is positioned according to the image data to obtain first positioning data.
  4. 根据权利要求3所述的方法,其特征在于,根据所述图像数据对所述虚拟现实设备进行定位,得到第一定位数据,包括:The method according to claim 3, characterized in that positioning the virtual reality device according to the image data to obtain first positioning data comprises:
    基于预设的半直接法SLAM算法处理所述图像数据,得到所述虚拟现实设备对应的快速定位数据;Processing the image data based on a preset semi-direct SLAM algorithm to obtain rapid positioning data corresponding to the virtual reality device;
    基于预设的滑动窗口算法处理所述图像数据,得到所述虚拟现实设备对应的精准定位数据;Processing the image data based on a preset sliding window algorithm to obtain precise positioning data corresponding to the virtual reality device;
    利用所述精准定位数据校正所述快速定位数据,得到校正定位数据;Correcting the rapid positioning data using the precise positioning data to obtain corrected positioning data;
    确定所述快速定位数据、所述精准定位数据或者所述校正定位数据作为所述第一定位数据。The fast positioning data, the precise positioning data or the corrected positioning data is determined as the first positioning data.
  5. 根据权利要求2所述的方法,其特征在于,根据惯性测量数据对所述虚拟现实设备进行定位,得到第二定位数据,包括:The method according to claim 2, characterized in that positioning the virtual reality device according to the inertial measurement data to obtain the second positioning data comprises:
    获取第二中间时刻到初始时刻之间的惯性测量数据;所述第二中间时刻早于所述初始时刻;Acquire inertial measurement data between a second intermediate moment and an initial moment; the second intermediate moment is earlier than the initial moment;
    对所述惯性测量数据进行积分,得到所述虚拟现实设备的第二定位数据。The inertial measurement data is integrated to obtain second positioning data of the virtual reality device.
  6. The method according to claim 1, wherein acquiring the relative posture data of the virtual reality device comprises:
    acquiring inertial measurement data of the virtual reality device between an initial moment and a target moment;
    correcting the inertial measurement data to obtain corrected measurement data; and
    determining second movement data of the virtual reality device based on a preset head prediction model and the corrected measurement data, the second movement data serving as the relative posture data.
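A "preset head prediction model" can be as simple as assuming constant angular velocity over the remaining prediction horizon. The sketch below averages recent corrected gyroscope rates and extrapolates the rotation to the target moment; the averaging window, horizon, and names are assumptions rather than values from the patent.

```python
import numpy as np

def predict_head_rotation(gyro_samples, horizon):
    """Constant-angular-velocity stand-in for a head prediction model: average
    the recent corrected gyroscope rates and extrapolate the rotation over the
    remaining prediction horizon."""
    omega = np.mean(gyro_samples, axis=0)     # rad/s, averaged over the window
    return omega * horizon                    # predicted rotation vector (rad)

gyro = [np.array([0.0, 0.5, 0.0])] * 10       # ~0.5 rad/s yaw rate over recent samples
delta_rot = predict_head_rotation(gyro, horizon=0.016)   # predict one 16 ms frame ahead
```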
  7. The method according to claim 6, wherein acquiring the inertial measurement data of the virtual reality device between the initial moment and the target moment comprises:
    acquiring raw inertial measurement data after the initial moment from a specified position of the virtual reality device; and
    when the raw inertial measurement data does not include data at the target moment, extending the measurement data to the target moment based on the raw inertial measurement data to obtain the inertial measurement data between the initial moment and the target moment.
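One plausible reading of the extension step in claim 7 is to pad the IMU stream with extrapolated samples until the target timestamp is covered. The sketch below uses hold-last-value extrapolation, which is an assumption; a linear or model-based extrapolation would fit the claim equally well.

```python
import numpy as np

def extend_imu_to(samples, timestamps, target_time, period):
    """If the raw IMU stream stops before the target moment, append
    extrapolated samples (here: hold the last measured value) at the nominal
    sample period until the target timestamp is covered."""
    samples, timestamps = list(samples), list(timestamps)
    while timestamps[-1] < target_time:
        timestamps.append(timestamps[-1] + period)
        samples.append(samples[-1])          # hold-last-value extrapolation
    return samples, timestamps

acc = [np.array([0.0, 0.0, 9.8])] * 3
ts = [0.000, 0.002, 0.004]
acc_ext, ts_ext = extend_imu_to(acc, ts, target_time=0.010, period=0.002)
```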
  8. The method according to claim 6, wherein correcting the inertial measurement data to obtain the corrected measurement data comprises:
    acquiring rotation data of the virtual reality device between the initial moment and the target moment according to the inertial measurement data, the rotation data serving as the relative posture data;
    acquiring first movement data of the virtual reality device between the initial moment and a first intermediate moment according to the inertial measurement data, the first intermediate moment lying between the initial moment and the target moment; and
    correcting the inertial measurement data between the first intermediate moment and the target moment according to the first movement data to obtain the corrected measurement data.
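Claim 8 does not prescribe a specific correction rule; one simple reading is to use the early (initial-to-first-intermediate) segment to estimate a constant sensor bias and remove it from the later samples. The sketch below implements that bias-removal interpretation with hypothetical names and values.

```python
import numpy as np

def correct_with_first_segment(first_segment, later_segment, expected_mean):
    """Estimate a constant bias from the samples between the initial and first
    intermediate moments, then subtract it from the samples between the first
    intermediate moment and the target moment."""
    bias = np.mean(first_segment, axis=0) - expected_mean
    return [s - bias for s in later_segment]

first = [np.array([0.05, 0.02, 9.83])] * 20        # nearly static: expect (0, 0, g)
later = [np.array([0.30, 0.10, 9.90])] * 20
corrected = correct_with_first_segment(first, later,
                                        expected_mean=np.array([0.0, 0.0, 9.81]))
```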
  9. The method according to claim 1, wherein acquiring the target posture of the virtual reality device at the target moment according to the reference posture data and the relative posture data comprises:
    acquiring state data at the target moment according to a preset uniform motion model and the reference posture data;
    superimposing the reference posture data and the relative posture data to obtain observation data of the virtual reality device;
    acquiring respective weight values of the state data and the observation data according to an accumulated error of the virtual reality device; and
    calculating a weighted value according to the state data, the observation data, and the respective weight values to obtain the target posture of the virtual reality device at the target moment.
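Claim 9 amounts to blending a constant-velocity prediction with an observation, with weights driven by how much error has accumulated since the last reference pose. The weighting rule in the sketch below (a linear ramp capped at 1) is an illustrative assumption, not the patent's formula, and all names are hypothetical.

```python
import numpy as np

def predict_uniform_motion(last_pos, velocity, dt):
    """Constant-velocity ('preset uniform motion') prediction of the state."""
    return last_pos + velocity * dt

def fuse_state_and_observation(predicted, observed, accumulated_error,
                               error_scale=0.05):
    """Weighted combination: the more error has accumulated, the less the
    prediction is trusted relative to the observation."""
    w_obs = min(1.0, accumulated_error / error_scale)   # weight grows with drift
    w_pred = 1.0 - w_obs
    return w_pred * predicted + w_obs * observed

pred = predict_uniform_motion(np.array([0.10, 1.60, 0.05]),
                              np.array([0.2, 0.0, 0.0]), dt=0.016)
obs = np.array([0.104, 1.601, 0.049])
target = fuse_state_and_observation(pred, obs, accumulated_error=0.02)
```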
  10. A posture acquisition apparatus, applied to a virtual reality device, the apparatus comprising:
    a reference posture acquisition module, configured to acquire reference posture data of the virtual reality device;
    a relative posture acquisition module, configured to acquire relative posture data of the virtual reality device; and
    a target posture acquisition module, configured to acquire a target posture of the virtual reality device at a target moment according to the reference posture data and the relative posture data.
  11. A virtual reality device, comprising:
    a memory and a processor;
    wherein the memory is configured to store a computer program executable by the processor; and
    the processor is configured to execute the computer program in the memory to implement the method according to any one of claims 1 to 9.
  12. A non-transitory computer-readable storage medium, wherein, when an executable computer program stored in the storage medium is executed by a processor, the method according to any one of claims 1 to 9 is implemented.
PCT/CN2022/133552 2022-11-22 2022-11-22 Posture acquisition method, apparatus, virtual reality device, and readable storage medium WO2024108394A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/133552 WO2024108394A1 (en) 2022-11-22 2022-11-22 Posture acquisition method, apparatus, virtual reality device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/133552 WO2024108394A1 (en) 2022-11-22 2022-11-22 Posture acquisition method, apparatus, virtual reality device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2024108394A1 (en)

Family

ID=91194959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133552 WO2024108394A1 (en) 2022-11-22 2022-11-22 Posture acquisition method, apparatus, virtual reality device, and readable storage medium

Country Status (1)

Country Link
WO (1) WO2024108394A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095102A (en) * 2016-06-16 2016-11-09 深圳市金立通信设备有限公司 The method of a kind of virtual reality display interface process and terminal
WO2018090692A1 (en) * 2016-11-15 2018-05-24 北京当红齐天国际文化发展集团有限公司 Spatial positioning based virtual reality dizziness prevention system and method
CN110163909A (en) * 2018-02-12 2019-08-23 北京三星通信技术研究有限公司 For obtaining the method, apparatus and storage medium of equipment pose
US20200082548A1 (en) * 2018-09-06 2020-03-12 Disney Enterprises, Inc. Dead reckoning positional prediction for augmented reality and virtual reality applications
CN113074726A (en) * 2021-03-16 2021-07-06 深圳市慧鲤科技有限公司 Pose determination method and device, electronic equipment and storage medium
CN113436310A (en) * 2020-03-23 2021-09-24 南京科沃斯机器人技术有限公司 Scene establishing method, system and device and self-moving robot
CN113534948A (en) * 2020-04-16 2021-10-22 三星电子株式会社 Augmented reality AR device and method of predicting gestures therein
CN115342806A (en) * 2022-07-14 2022-11-15 歌尔股份有限公司 Positioning method and device of head-mounted display equipment, head-mounted display equipment and medium

Similar Documents

Publication Publication Date Title
EP3540571A1 (en) Method and device for editing virtual scene, and non-transitory computer-readable storage medium
US11460916B2 (en) Interface interaction apparatus and method
US8942510B2 (en) Apparatus and method for switching a display mode
US10521009B2 (en) Method and apparatus for facilitating interaction with virtual reality equipment
CN110546601B (en) Information processing device, information processing method, and program
US9823779B2 (en) Method and device for controlling a head-mounted display by a terminal device
EP3291548A1 (en) Method and apparatus for testing a virtual reality head display device
KR102116826B1 (en) Photo synthesis methods, devices, programs and media
CN107657590A (en) Image processing method and device
CN112202962B (en) Screen brightness adjusting method and device and storage medium
WO2020080107A1 (en) Information processing device, information processing method, and program
US11245763B2 (en) Data processing method, computer device and storage medium
WO2023184816A1 (en) Cloud desktop display method and apparatus, device and storage medium
US10444831B2 (en) User-input apparatus, method and program for user-input
WO2017005070A1 (en) Display control method and device
WO2023040288A1 (en) Display device and device control method
CN109831817B (en) Terminal control method, device, terminal and storage medium
WO2024108394A1 (en) Posture acquisition method, apparatus, virtual reality device, and readable storage medium
CN107340868B (en) Data processing method and device and VR equipment
JP2016192137A (en) Information processing device, information processing method and program
JPWO2020044949A1 (en) Information processing equipment, information processing methods, and programs
CN113873157B (en) Shooting method, shooting device, electronic equipment and readable storage medium
WO2014117675A1 (en) Information processing method and electronic device
CN112188133A (en) Video acquisition method and device, electronic equipment and storage medium
WO2023240447A1 (en) Head movement detection method, apparatus, device, and storage medium