CN112764546A - Virtual character displacement control method and device and terminal equipment - Google Patents

Virtual character displacement control method and device and terminal equipment Download PDF

Info

Publication number
CN112764546A
CN112764546A (application CN202110127026.2A)
Authority
CN
China
Prior art keywords
image data
depth
depth information
virtual character
step line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110127026.2A
Other languages
Chinese (zh)
Other versions
CN112764546B (en)
Inventor
陈元
刘伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ziyuan Technology Co ltd
Original Assignee
Chongqing Ziyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ziyuan Technology Co ltd filed Critical Chongqing Ziyuan Technology Co ltd
Priority to CN202110127026.2A priority Critical patent/CN112764546B/en
Publication of CN112764546A publication Critical patent/CN112764546A/en
Application granted granted Critical
Publication of CN112764546B publication Critical patent/CN112764546B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of virtual reality data processing and provides a virtual character displacement control method, apparatus, and terminal device. The method comprises the following steps: acquiring walking image data of a real person through an image acquisition device, the walking image data comprising first-step line image data based on the left foot and second-step line image data based on the right foot; calculating depth information of the walking image data, the depth information comprising first depth information based on the left foot and second depth information based on the right foot; when the depth information of the current frame is obtained, calculating a first depth difference based on the left foot and a second depth difference based on the right foot from the first and second depth information of the current frame and of the adjacent frame; and controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference. The virtual character displacement control method provided by the invention is not affected by sensor precision.

Description

Virtual character displacement control method and device and terminal equipment
Technical Field
The invention relates to the technical field of virtual reality data processing, and in particular to a virtual character displacement control method and apparatus, and a terminal device.
Background
Virtual reality technology is a computer simulation system for creating and experiencing a virtual world. A computer generates a simulated environment, a system simulation of multi-source information fusion, interactive three-dimensional dynamic views, and entity behaviors, that immerses the user in that environment.
Generally speaking, the space presented to the user through virtual reality technology does not allow the user to walk freely. Instead, the virtual character's movement is driven by capturing the real person's actions on a motion platform while the real person performs no actual physical movement, so the user's immersion is low and dizziness arises easily.
However, there are currently virtual reality devices that let users walk at will, such as universal motion platforms, so that the real person and the virtual character share the same motion trajectory. At present, the virtual character's motion is controlled by capturing the real person's motion on the motion platform: leg motion is detected mainly by a sensor bound to the leg, and the character's motion is controlled accordingly. However, the sensor's accuracy greatly affects the control effect, and the sensor data can be used only after a large amount of calculation and verification.
Disclosure of Invention
The invention mainly aims to provide a virtual character displacement control method, apparatus, and terminal device, in order to solve the problems in the prior art that leg motion is detected by a sensor bound to the leg, so the control of the character's motion depends on the sensor's accuracy, and the sensor data can be used only after verification.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a method for controlling a virtual character displacement, including:
acquiring walking image data of a real person by an image acquisition device, wherein the walking image data comprises first step line image data based on a left foot and second step line image data based on a right foot, and the first step line image data and the second step line image data are sorted in units of frames;
calculating depth information of the walking image data, the depth information including first depth information based on a left foot and second depth information based on a right foot;
when the depth information of the current frame is obtained, calculating and obtaining a first depth difference based on a left foot and a second depth difference based on a right foot according to the first depth information and the second depth information of the current frame and the first depth information and the second depth information of adjacent frames;
and controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
With reference to the first aspect of the present invention, in a first embodiment of the present invention, the image capturing device includes a first infrared camera based on a left foot and a second infrared camera based on a right foot;
before the step of acquiring the walking image data of the real person through the image acquisition device, the step comprises the following steps:
and calibrating the positions of the first infrared camera, the second infrared camera and the projector.
With reference to the first aspect of the present invention, in a second embodiment of the present invention, before calculating the depth information of the walking image data, the method includes:
and calculating a phase main value of the first-step line image data and a phase main value of the second-step line image data through sinusoidal grating projection, acquiring an absolute phase of the first-step line image data according to the phase main value of the first-step line image data, and acquiring an absolute phase of the second-step line image data according to the phase main value of the second-step line image data.
With reference to the second embodiment of the first aspect of the present invention, in a third embodiment of the present invention, the calculating depth information of the walking image data includes:
performing stereo matching according to the absolute phase of the first-step line image data and the absolute phase of the second-step line image data, and obtaining a depth value based on a stereo matching result;
performing target detection based on the stereo matching result to obtain a left foot coordinate and a right foot coordinate;
obtaining a depth map based on the left foot and a depth map based on the right foot according to the coordinates of the left foot and the right foot and the depth values;
and obtaining first depth information based on the left foot and second depth information based on the right foot according to the depth map based on the left foot and the depth map based on the right foot.
With reference to the third embodiment of the first aspect of the present invention, in a fourth embodiment of the present invention, stereo matching is performed according to the absolute phase of the image data of the first step line and the absolute phase of the image data of the second step line, and a formula is as follows:
Z = D·f / d

wherein

d = x_left − x_right

represents the disparity value, Z represents the depth value, D represents the baseline distance, and f represents the focal length of the first infrared camera and the second infrared camera.
With reference to the third embodiment of the first aspect of the present invention, in a fifth embodiment of the present invention, a method for performing target detection based on a stereo matching result to obtain left foot coordinates and right foot coordinates includes:
positioning the stereo matching result through a YOLO model to obtain a rectangular frame coordinate value comprising the left foot part and a rectangular frame coordinate value comprising the right foot part;
the coordinate value of the rectangular frame of the left foot part is the coordinate of the left foot part, and the coordinate value of the rectangular frame of the right foot part is the coordinate of the right foot part.
With reference to the first to fifth embodiments of the first aspect of the present invention, in a sixth embodiment of the present invention, controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference includes:
if the first depth difference is larger than the second depth difference, taking the first depth difference as the displacement of the virtual character;
and if the second depth difference is larger than the first depth difference, taking the second depth difference as the displacement of the virtual character.
A second aspect of the embodiments of the present invention provides a virtual character displacement control apparatus, including:
a walking image data acquisition module for acquiring walking image data of a real person by an image acquisition device, the walking image data including first-step line image data based on a left foot and second-step line image data based on a right foot, the first-step line image data and the second-step line image data being sorted in units of frames;
a depth information calculation module for calculating depth information of the walking image data, the depth information including first depth information based on a left foot and second depth information based on a right foot;
the depth difference calculating module is used for calculating and obtaining a first depth difference based on a left foot and a second depth difference based on a right foot according to the first depth information and the second depth information of the current frame and the first depth information and the second depth information of adjacent frames when the depth information of the current frame is obtained;
and the displacement control module is used for controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
A third aspect of embodiments of the present invention provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method provided in the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as provided in the first aspect above.
The embodiment of the invention provides a virtual character displacement control method in which an image acquisition device acquires the first-step line image data and the second-step line image data of a real person while walking. Depth information, i.e. the distance from each of the real person's feet to the image acquisition device, can be calculated from these data, and the depth differences are then calculated to control the displacement of the virtual character to be projected by the projector, so that the real person and the virtual character share the same motion trajectory. The virtual character displacement control method provided by the invention is not affected by sensor accuracy and does not require verification of sensor data, and therefore achieves a better control effect.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a virtual character displacement control method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a virtual character displacement control device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Suffixes such as "module", "part", or "unit" used to denote elements are used herein only for convenience of description and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
As shown in fig. 1, an embodiment of the present invention provides a virtual character displacement control method, including, but not limited to, the following steps:
s101, acquiring walking image data of the real person through an image acquisition device.
Wherein the walking image data includes first-step line image data based on a left foot and second-step line image data based on a right foot, and the first-step line image data and the second-step line image data are sorted in units of frames.
In an embodiment of the present invention, the image capturing device includes a first infrared camera based on the left foot and a second infrared camera based on the right foot. Before the first infrared camera, the second infrared camera, and the projector are used, the internal and external parameters of all three are calibrated. Therefore, before step S101, the method includes:
and calibrating the positions of the first infrared camera, the second infrared camera and the projector.
In a specific application, the camera intrinsic parameters determine the relation between a point's coordinates in the camera coordinate system and its pixel coordinates on the imaging plane, while the camera extrinsic parameters are the rotation and translation of the camera relative to the world coordinate system. Once the relative positions and poses of the two cameras and the projector are determined, a point's coordinates in one camera can be converted into the coordinate system of the other camera or into the world coordinate system.
The specific calibration method can adopt the Zhang Zhengyou calibration method, which achieves the required calibration precision. Its principle is as follows:
Zc·[u, v, 1]^T = A·[R T]·[Xw, Yw, Zw, 1]^T

wherein A is the camera intrinsic (internal reference) matrix, R and T are the rotation matrix and translation vector of the camera relative to the world coordinate system, Zc is the depth of the target point in the camera coordinate system, (u, v) are the pixel coordinates of the point's image, and the subscript w denotes coordinates in the world coordinate system.
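For illustration only, the following is a minimal Python sketch of such a calibration using OpenCV's implementation of Zhang's method; the checkerboard geometry, square size, and image paths are assumptions for illustration, not values given by the patent.

```python
# Minimal sketch: intrinsic calibration of one infrared camera with OpenCV's
# implementation of Zhang Zhengyou's method. Board geometry and image paths
# are assumed; the same routine would be run for each camera.
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner grid of the checkerboard (assumption)
SQUARE_MM = 25.0      # side length of one checkerboard square in mm (assumption)

# World coordinates of the board corners, lying in the Z = 0 plane.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def calibrate(image_paths):
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Returns the intrinsic matrix A, distortion, and per-view extrinsics R, T.
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return A, dist

A_left, dist_left = calibrate(["left_00.png", "left_01.png"])      # placeholder paths
A_right, dist_right = calibrate(["right_00.png", "right_01.png"])  # placeholder paths
```

The extrinsics between the two cameras could then be recovered with cv2.stereoCalibrate from the same corner correspondences; how the projector is brought into the common frame is not detailed by the patent.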
S102, calculating the depth information of the walking image data.
Wherein the depth information comprises first depth information based on a left foot and second depth information based on a right foot.
S103, when the depth information of the current frame is obtained, calculating to obtain a first depth difference based on the left foot and a second depth difference based on the right foot according to the first depth information and the second depth information of the current frame and the first depth information and the second depth information of adjacent frames.
In step S103, the adjacent frame is preferably the frame immediately preceding the current frame, so that the virtual character displacement control method of the embodiment of the present invention operates in real time.
In the above steps S102 and S103, the disparity corresponding to each pixel is calculated first, and the depth and the depth difference are then obtained by triangulation.
In practical applications, the key to calculating the depth difference is pixel matching. In conventional pixel matching, the features of every pixel in each camera must be computed and compared one by one, which is computationally expensive and inefficient.
In the embodiment of the invention, sinusoidal grating projection is used: the projector projects programmable sinusoidal grating fringe patterns in a time-sharing manner and stereo matching is performed, so that pixel matching is realized through stereo matching. This avoids problems such as repeated textures, overexposed images, low brightness, and low texture discrimination, and improves efficiency.
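As a concrete illustration of the time-shared projection, the sketch below generates four such fringe patterns; the projector resolution and fringe pitch are assumed values, not parameters stated in the patent.

```python
# Sketch: generating the four sinusoidal fringe patterns (phase shifts
# 0, pi/2, pi, 3*pi/2) that the projector displays in time sequence.
import numpy as np

W, H = 1280, 800      # projector resolution in pixels (assumption)
PITCH = 32            # fringe period in pixels (assumption)

x = np.arange(W)
patterns = []
for shift in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
    # 8-bit sinusoid along x; each row is identical, giving vertical fringes.
    row = 127.5 + 127.5 * np.cos(2 * np.pi * x / PITCH + shift)
    patterns.append(np.tile(row.astype(np.uint8), (H, 1)))
```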
Therefore, before calculating the depth information of the walking image data in step S102, matching basis data of stereo matching needs to be acquired, which includes:
and calculating a phase main value of the first-step line image data and a phase main value of the second-step line image data through sinusoidal grating projection, acquiring an absolute phase of the first-step line image data according to the phase main value of the first-step line image data, and acquiring an absolute phase of the second-step line image data according to the phase main value of the second-step line image data.
In one embodiment, the phase principal value is calculated as follows:
it should be noted that, in this embodiment, the calculation of the phase main value of one piece of walking image data is taken as an example for description, and the calculation of the phase main value of another piece of walking image data and the subsequent calculation of the absolute main value are the same, and are not described again.
First, in this embodiment, the sinusoidal grating projection uses the projector to project 4 programmable sinusoidal grating fringe patterns in a time-sharing manner, with initial grating phases of 0, π/2, π, and 3π/2, i.e. the phase difference between successive grating patterns is π/2. The sinusoidal grating fringe patterns are projected onto the area to be detected; preferably, the area to be detected is given by the left foot coordinates or the right foot coordinates in the target detection result described below. The sinusoidal grating in the fringe patterns is modulated in height by the object to be detected, i.e. the left foot or the right foot of the real person in the first-step line image data or the second-step line image data, and the resulting phase change is expressed by gray values:
I1(x, y) = Ib(x, y) + Im(x, y)·cos(φ(x, y))

I2(x, y) = Ib(x, y) + Im(x, y)·cos(φ(x, y) + π/2)

I3(x, y) = Ib(x, y) + Im(x, y)·cos(φ(x, y) + π)

I4(x, y) = Ib(x, y) + Im(x, y)·cos(φ(x, y) + 3π/2)

wherein Ib(x, y) is the background light intensity, Im(x, y) is the modulation intensity, φ(x, y) is the phase principal value, and I1, I2, I3, I4 are the four intensity images, each shifted by a quarter period perpendicular to the grating stripes.

When the sinusoidal stripes are captured, the phase principal value φ(x, y) must be solved from the above formulas:

φ(x, y) = arctan((I4 − I2) / (I1 − I3))
the phase principal value can be solved according to the formula to provide data for subsequent absolute phase acquisition.
In the phase principal value calculation, the four-step phase-shift algorithm obtains the phase through an arctangent function, so the obtained value is a wrapped phase rather than the true phase. The absolute phase calculation, i.e. the phase unwrapping process, is therefore as follows:
firstly, an image coded by a binary gray code is adopted, namely, only stripes with black and white colors are included to level grating stripes in the image subjected to sinusoidal grating projection processing, and the process is as follows:
projection log2nThe method comprises the following steps of taking a picture, wherein n is the number of fringe levels, and each point in the raster original image is assigned with Fi (x, y) according to the formula:
Figure BDA0002923824800000082
where t is 0,1,2, and 3 are each 4 phase shifts, as can be seen from the above equation,
Figure BDA0002923824800000083
and when the value is larger than the preset value, the value of i is obtained. Because the pitches and positions of the sinusoidal grating and the gray code are the same, in the binary gray code coded image, the center of a gray code stripe coincides with the center of a sinusoidal stripe, after the first-step line image data or the second-step line image data are collected, the central line of the stripe is extracted, the coded value corresponding to each pixel in the center of the stripe is obtained by comparing the pixel value of the gray code according to the position of the center of the stripe, and then the stripe level K can be obtained by solving, and the formula of the phase true value psi is as follows:
Figure BDA0002923824800000084
wherein the content of the first and second substances,
Figure BDA0002923824800000085
the phase principal value is obtained in the previous step.
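A sketch of the unwrapping step, assuming the fringe level K has already been decoded from the Gray-code images:

```python
# Sketch: phase unwrapping per psi = phi + 2*pi*K above. fringe_level_k is
# the integer stripe level per pixel, decoded from the Gray-code images.
import numpy as np

def unwrap_phase(phi_wrapped, fringe_level_k):
    return phi_wrapped + 2.0 * np.pi * fringe_level_k
```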
Then, in step S102, the depth information of the walking image data is calculated, and the detailed steps are as follows:
performing stereo matching according to the absolute phase of the first-step line image data and the absolute phase of the second-step line image data, and obtaining a depth value based on a stereo matching result;
performing target detection based on the stereo matching result to obtain a left foot coordinate and a right foot coordinate;
obtaining a depth map based on the left foot and a depth map based on the right foot according to the coordinates of the left foot and the right foot and the depth values;
and obtaining first depth information based on the left foot and second depth information based on the right foot according to the depth map based on the left foot and the depth map based on the right foot.
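The patent does not specify how each foot's depth map is reduced to the single depth value used in the later steps; as one possibility, the sketch below takes the median depth inside the detected rectangle (an assumption, not the patent's stated method).

```python
# Sketch: reducing a foot's depth map to one depth value by taking the median
# over the detected rectangle. The median is an assumed, outlier-robust
# choice; the patent does not name a specific reduction.
import numpy as np

def foot_depth(depth_map, box):
    """box: [x1, y1, x2, y2] rectangle from the target detection step."""
    x1, y1, x2, y2 = (int(v) for v in box)
    region = depth_map[y1:y2, x1:x2]
    region = region[np.isfinite(region)]
    return float(np.median(region)) if region.size else None
```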
The calculation formula of stereo matching is exemplarily as follows:
Z = D·f / d

wherein

d = x_left − x_right

represents the disparity value, Z represents the depth value, D represents the baseline distance, and f represents the focal length of the first infrared camera and the second infrared camera.
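A sketch of the triangulation itself, turning a disparity map into depth with Z = D·f/d; the baseline and focal length would come from the calibration described above.

```python
# Sketch: depth from disparity by triangulation, Z = D * f / d.
import numpy as np

def disparity_to_depth(disparity, baseline_mm, focal_px):
    depth = np.full(disparity.shape, np.inf, dtype=np.float64)
    valid = disparity > 0        # zero disparity would mean infinite depth
    depth[valid] = baseline_mm * focal_px / disparity[valid]
    return depth
```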
The implementation of target detection based on the stereo matching result is exemplarily as follows:
positioning the stereo matching result through a YOLO model to obtain a rectangular frame coordinate value comprising the left foot part and a rectangular frame coordinate value comprising the right foot part;
the coordinate value of the rectangular frame of the left foot part is the coordinate of the left foot part, and the coordinate value of the rectangular frame of the right foot part is the coordinate of the right foot part.
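The patent names only "a YOLO model" for this step; the sketch below uses the ultralytics package, a hypothetical weights file, and the class names "left_foot"/"right_foot" purely as illustrative assumptions.

```python
# Sketch: locating the two foot rectangles with a YOLO detector. The
# ultralytics API, the weights file, and the class names are assumptions;
# the patent specifies only that a YOLO model performs the positioning.
from ultralytics import YOLO

model = YOLO("feet_detector.pt")   # hypothetical trained weights

def detect_feet(image):
    result = model(image)[0]
    boxes = {}
    for box in result.boxes:
        label = result.names[int(box.cls)]
        boxes[label] = box.xyxy[0].tolist()   # [x1, y1, x2, y2]
    return boxes.get("left_foot"), boxes.get("right_foot")
```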
At this point, step S103 can be completed, in which the first depth difference based on the left foot and the second depth difference based on the right foot are calculated from the first depth information and the second depth information of the current frame and of the adjacent frame.
S104, controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
Based on the steps S101 to S103, the step S104 includes:
if the first depth difference is larger than the second depth difference, taking the first depth difference as the displacement of the virtual character;
and if the second depth difference is larger than the first depth difference, taking the second depth difference as the displacement of the virtual character.
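Putting S103 and S104 together, a minimal sketch of the per-frame displacement decision, taking the larger of the two depth differences, with the adjacent frame taken as the previous frame:

```python
# Sketch: per-frame displacement of the virtual character, per steps
# S103-S104. Inputs are (left, right) foot depths of the previous and
# current frames; the larger depth difference becomes the displacement.
def character_displacement(depth_prev, depth_curr):
    # abs() is an assumption; the patent compares the differences directly.
    first_diff = abs(depth_curr[0] - depth_prev[0])    # left foot
    second_diff = abs(depth_curr[1] - depth_prev[1])   # right foot
    return max(first_diff, second_diff)
```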
As shown in fig. 2, an embodiment of the present invention provides a virtual character displacement control device 20, including:
a walking image data acquisition module 21 configured to acquire walking image data of a real person by an image acquisition device, the walking image data including first-step line image data based on a left foot and second-step line image data based on a right foot, the first-step line image data and the second-step line image data being sorted in units of frames;
a depth information calculation module 22 for calculating depth information of the walking image data, the depth information including first depth information based on a left foot and second depth information based on a right foot;
the depth difference calculating module 23 is configured to, when the depth information of the current frame is obtained, calculate and obtain a first depth difference based on the left foot and a second depth difference based on the right foot according to the first depth information and the second depth information 24 of the current frame and the first depth information and the second depth information of adjacent frames;
and the displacement control module is used for controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
The embodiment of the present invention further provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps of the virtual character displacement control method in the foregoing embodiments are implemented.
An embodiment of the present invention further provides a storage medium, where the storage medium is a computer-readable storage medium, and a computer program is stored on the storage medium, where the computer program, when executed by a processor, implements the steps of the virtual character displacement control method in the foregoing embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the foregoing embodiments illustrate the present invention in detail, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A virtual character displacement control method is characterized by comprising the following steps:
acquiring walking image data of a real person by an image acquisition device, wherein the walking image data comprises first step line image data based on a left foot and second step line image data based on a right foot, and the first step line image data and the second step line image data are sorted in units of frames;
calculating depth information of the walking image data, the depth information including first depth information based on a left foot and second depth information based on a right foot;
when the depth information of the current frame is obtained, calculating and obtaining a first depth difference based on a left foot and a second depth difference based on a right foot according to the first depth information and the second depth information of the current frame and the first depth information and the second depth information of adjacent frames;
and controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
2. The virtual character displacement control method as claimed in claim 1, wherein said image capturing device includes a first infrared camera based on a left foot and a second infrared camera based on a right foot;
before the step of acquiring the walking image data of the real person through the image acquisition device, the step comprises the following steps:
and calibrating the positions of the first infrared camera, the second infrared camera and the projector.
3. The virtual character displacement control method according to claim 2, wherein before calculating the depth information of the walking image data, the method comprises:
and calculating a phase main value of the first-step line image data and a phase main value of the second-step line image data through sinusoidal grating projection, acquiring an absolute phase of the first-step line image data according to the phase main value of the first-step line image data, and acquiring an absolute phase of the second-step line image data according to the phase main value of the second-step line image data.
4. The virtual character displacement control method according to claim 3, wherein calculating depth information of the walking image data includes:
performing stereo matching according to the absolute phase of the first-step line image data and the absolute phase of the second-step line image data, and obtaining a depth value based on a stereo matching result;
performing target detection based on the stereo matching result to obtain a left foot coordinate and a right foot coordinate;
obtaining a depth map based on the left foot and a depth map based on the right foot according to the coordinates of the left foot and the right foot and the depth values;
and obtaining first depth information based on the left foot and second depth information based on the right foot according to the depth map based on the left foot and the depth map based on the right foot.
5. The virtual character displacement control method of claim 4, wherein stereo matching is performed based on the absolute phase of the image data of the first step line and the absolute phase of the image data of the second step line, and the formula is:
Z = D·f / d

wherein

d = x_left − x_right

represents the disparity value, Z represents the depth value, D represents the baseline distance, and f represents the focal length of the first infrared camera and the second infrared camera.
6. The virtual character displacement control method as claimed in claim 4, wherein the obtaining of the left foot coordinates and the right foot coordinates by performing the object detection based on the stereo matching result comprises:
positioning the stereo matching result through a YOLO model to obtain a rectangular frame coordinate value comprising the left foot part and a rectangular frame coordinate value comprising the right foot part;
the coordinate value of the rectangular frame of the left foot part is the coordinate of the left foot part, and the coordinate value of the rectangular frame of the right foot part is the coordinate of the right foot part.
7. The virtual character displacement control method according to any one of claims 1 to 6, wherein controlling the displacement amount of the virtual character to be projected by the projector based on the first depth difference and the second depth difference comprises:
if the first depth difference is larger than the second depth difference, taking the first depth difference as the displacement of the virtual character;
and if the second depth difference is larger than the first depth difference, taking the second depth difference as the displacement of the virtual character.
8. A virtual character displacement control device, characterized by comprising:
a walking image data acquisition module for acquiring walking image data of a real person by an image acquisition device, the walking image data including first-step line image data based on a left foot and second-step line image data based on a right foot, the first-step line image data and the second-step line image data being sorted in units of frames;
a depth information calculation module for calculating depth information of the walking image data, the depth information including first depth information based on a left foot and second depth information based on a right foot;
the depth difference calculating module is used for calculating and obtaining a first depth difference based on a left foot and a second depth difference based on a right foot according to the first depth information and the second depth information of the current frame and the first depth information and the second depth information of adjacent frames when the depth information of the current frame is obtained;
and the displacement control module is used for controlling the displacement of the virtual character to be projected by the projector according to the first depth difference and the second depth difference.
9. A terminal device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the virtual character displacement control method according to any one of claims 1 to 7 when executing the computer program.
10. A storage medium which is a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the virtual character displacement control method according to any one of claims 1 to 7.
CN202110127026.2A 2021-01-29 2021-01-29 Virtual character displacement control method and device and terminal equipment Active CN112764546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110127026.2A CN112764546B (en) 2021-01-29 2021-01-29 Virtual character displacement control method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110127026.2A CN112764546B (en) 2021-01-29 2021-01-29 Virtual character displacement control method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN112764546A true CN112764546A (en) 2021-05-07
CN112764546B CN112764546B (en) 2022-08-09

Family

ID=75703755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110127026.2A Active CN112764546B (en) 2021-01-29 2021-01-29 Virtual character displacement control method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN112764546B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130123669A1 (en) * 2010-07-27 2013-05-16 Omron Healthcare Co Ltd Gait change determination device
WO2012014714A1 (en) * 2010-07-27 2012-02-02 オムロンヘルスケア株式会社 Gait change determination device
US20150341616A1 (en) * 2014-05-22 2015-11-26 Disney Enterprises, Inc. Parallax based monoscopic rendering
WO2016203285A1 (en) * 2015-06-18 2016-12-22 Intellect Motion Llc. Systems and methods for virtually displaying real movements of objects in a 3d-space
EP3321888A1 (en) * 2015-07-08 2018-05-16 Korea University Research and Business Foundation Projected image generation method and device, and method for mapping image pixels and depth values
WO2017170440A1 (en) * 2016-03-28 2017-10-05 Necソリューションイノベータ株式会社 Measurement device, measurement method, and computer readable recording medium
CN108778123A (en) * 2016-03-31 2018-11-09 日本电气方案创新株式会社 Gait analysis device, gait analysis method and computer readable recording medium storing program for performing
CN107066081A (en) * 2016-12-23 2017-08-18 歌尔科技有限公司 The interaction control method and device and virtual reality device of a kind of virtual reality system
CN107357426A (en) * 2017-07-03 2017-11-17 南京江南博睿高新技术研究院有限公司 A kind of motion sensing control method for virtual reality device
CN107707835A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN109598796A (en) * 2017-09-30 2019-04-09 深圳超多维科技有限公司 Real scene is subjected to the method and apparatus that 3D merges display with dummy object
US20190340773A1 (en) * 2018-08-30 2019-11-07 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for a synchronous motion of a human body model
CN110942479A (en) * 2018-09-25 2020-03-31 Oppo广东移动通信有限公司 Virtual object control method, storage medium, and electronic device
CN109545323A (en) * 2018-10-31 2019-03-29 贵州医科大学附属医院 A kind of ankle rehabilitation system with VR simulation walking
CN110766738A (en) * 2019-05-08 2020-02-07 叠境数字科技(上海)有限公司 Virtual shoe fitting method based on multi-view depth sensor
WO2020259506A1 (en) * 2019-06-27 2020-12-30 华为技术有限公司 Method and device for determining distortion parameters of camera
CN111721236A (en) * 2020-05-24 2020-09-29 深圳奥比中光科技有限公司 Three-dimensional measurement system and method and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xue Yaxin: "Evaluation method for lower-limb rehabilitation training based on gait analysis", China Master's Theses Full-text Database, Medicine and Health Sciences series *

Also Published As

Publication number Publication date
CN112764546B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
KR101791590B1 (en) Object pose recognition apparatus and method using the same
CN110383343B (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
KR101259835B1 (en) Apparatus and method for generating depth information
CN107481304B (en) Method and device for constructing virtual image in game scene
Landau et al. Simulating kinect infrared and depth images
CN109751973B (en) Three-dimensional measuring device, three-dimensional measuring method, and storage medium
CN101697233B (en) Structured light-based three-dimensional object surface reconstruction method
KR100407436B1 (en) Method and apparatus for capturing stereoscopic images using image sensors
KR101816170B1 (en) Apparatus and method for obtaining 3D depth information
US20200057831A1 (en) Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation
CN107452034B (en) Image processing method and device
US20090296984A1 (en) System and Method for Three-Dimensional Object Reconstruction from Two-Dimensional Images
JP6883608B2 (en) Depth data processing system that can optimize depth data by aligning images with respect to depth maps
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
CA2650557A1 (en) System and method for three-dimensional object reconstruction from two-dimensional images
CN107610171B (en) Image processing method and device
JP2014119442A (en) Three-dimentional measurement device and control method thereof
CN107517346B (en) Photographing method and device based on structured light and mobile device
CN107194985A (en) A kind of three-dimensional visualization method and device towards large scene
CN110738730A (en) Point cloud matching method and device, computer equipment and storage medium
CN114792345B (en) Calibration method based on monocular structured light system
CN111275776A (en) Projection augmented reality method and device and electronic equipment
CN107493452B (en) Video picture processing method and device and terminal
CN112764546B (en) Virtual character displacement control method and device and terminal equipment
CN111105489A (en) Data synthesis method and apparatus, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant