CN117115400A - Method, device, computer equipment and storage medium for displaying whole body human body actions in real time - Google Patents

Method, device, computer equipment and storage medium for displaying whole body human body actions in real time

Info

Publication number
CN117115400A
CN117115400A
Authority
CN
China
Prior art keywords
user
real
characteristic
human body
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311194281.4A
Other languages
Chinese (zh)
Inventor
张志男 (Zhang Zhinan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Red Arrow Technology Co., Ltd.
Original Assignee
Shenzhen Red Arrow Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Red Arrow Technology Co., Ltd.
Priority to CN202311194281.4A
Publication of CN117115400A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose a method, an apparatus, a computer device, and a storage medium for displaying whole-body human motions in real time. The method includes the following steps: acquiring a real-time picture, and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture; if such a characteristic user exists, generating a human body posture model according to the characteristic user; controlling the human body posture model to synchronize the motions of the characteristic user in the real-time picture, and generating a virtual picture; and synchronizing the virtual picture to a VR device. The application can display whole-body human motions in a VR device in real time, thereby improving convenience.

Description

Method, device, computer equipment and storage medium for displaying whole body human body actions in real time
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a method, an apparatus, a computer device, and a storage medium for displaying a whole body human motion in real time.
Background
Many virtual reality devices (VR devices for short) require a whole-body tracking system so that the user can see a virtual body in the VR device, which further enhances immersion. Most current tracking systems simulate the virtual form of the user's upper body by tracking the head-mounted display and hand motion controllers of the VR device. With the development of VR technology, however, users increasingly expect to see a whole-body virtual form in the VR device so that they can adjust their posture in time. Current tracking systems cannot simulate the virtual form of the lower body, so the whole-body virtual form cannot be displayed in the VR device, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a computer device, and a storage medium for displaying whole-body human motions in real time, which aim to solve the problem that existing VR devices cannot display a whole-body virtual form.
In a first aspect, an embodiment of the present application provides a method for displaying whole-body human motions in real time, the method including:
acquiring a real-time picture, and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture;
if the characteristic users meeting the characteristic standard exist in the real-time picture, generating a human body posture model according to the characteristic users;
controlling the human body posture model to synchronize actions of the characteristic users in the real-time picture, and generating a virtual picture;
synchronizing the virtual picture to a VR device.
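The four claimed steps can be sketched as a minimal processing loop. This is purely illustrative: every name here (Frame, find_feature_user, the "preset-model" device string, and the dictionary shapes) is a hypothetical stand-in, since the claims do not prescribe any particular implementation.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    """A real-time picture captured by the camera."""
    users: list  # each detected person as a dict of attributes


def find_feature_user(frame):
    """Step 1: confirm whether a characteristic user (here: someone
    wearing the preset VR device) exists in the real-time picture."""
    for user in frame.users:
        if user.get("vr_device") == "preset-model":
            return user
    return None


def build_pose_model(user):
    """Step 2: generate a static human body posture model whose size
    and shape match the characteristic user."""
    return {"skeleton": dict(user["keypoints"]), "height": user["height"]}


def synchronize(model, user):
    """Step 3: drive the posture model with the user's current motion
    and produce a virtual picture."""
    model["skeleton"].update(user["keypoints"])
    return {"virtual_picture": model["skeleton"]}


def send_to_vr(picture):
    """Step 4: synchronize the virtual picture to the VR device
    (a return value stands in for the actual network send)."""
    return picture


frame = Frame(users=[
    {"vr_device": "other", "keypoints": {"head": (0, 0)}, "height": 170},
    {"vr_device": "preset-model", "keypoints": {"head": (1, 2)}, "height": 180},
])
user = find_feature_user(frame)
picture = send_to_vr(synchronize(build_pose_model(user), user)) if user else None
```

In this toy run only the second person matches the preset device, so only that person drives the posture model.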
In a second aspect, an embodiment of the present application further provides a device for displaying whole-body human motions in real time, where the device includes:
the first acquisition unit is used for acquiring a real-time picture and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture;
the first generation unit is used for generating a human body posture model according to the characteristic users if the characteristic users meeting the characteristic standard exist in the real-time picture;
the first synchronization unit is used for controlling the human body posture model to synchronize the actions of the characteristic users in the real-time picture and generating a virtual picture;
and the first sending unit is used for synchronizing the virtual picture to the VR equipment.
In a third aspect, an embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the method when executing the computer program.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the above method.
The embodiments of the application provide a method, an apparatus, a computer device, and a storage medium for displaying whole-body human motions in real time. The method includes: acquiring a real-time picture, and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture; if such a characteristic user exists, generating a human body posture model according to the characteristic user; controlling the human body posture model to synchronize the motions of the characteristic user in the real-time picture, and generating a virtual picture; and synchronizing the virtual picture to a VR device. In the embodiments of the application, a real-time picture can be obtained; when a characteristic user meeting the characteristic standard exists in the real-time picture, a human body posture model matching the characteristic user can be generated; the model is controlled to simulate the motions of the characteristic user in the real-time picture so as to generate a virtual picture; and the virtual picture is sent to the VR device, so that the user can see the whole-body virtual form in the VR device, improving the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for displaying motion of a whole body in real time according to an embodiment of the present application;
FIG. 2 is a schematic block diagram of an apparatus for displaying whole-body human motion in real time provided by an embodiment of the present application;
fig. 3 is a schematic block diagram of a computer device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for displaying a motion of a whole body in real time according to an embodiment of the application. The method for displaying the whole body human body actions in real time is applied to computer equipment, wherein the computer equipment is provided with a camera, and the camera is used for shooting a user and sending a shot real-time picture to the computer equipment. As shown in fig. 1, the method includes steps S110 to S140.
S110, acquiring a real-time picture, and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture.
In the embodiment of the application, the real-time picture can be captured by any device with an image-capturing function, such as a video camera or a camera; a camera is taken as an example here. A relatively independent space is generally provided for using a VR device, and one or more cameras may be installed in that space; preferably, a plurality of cameras are installed so that all motions of the user can be captured. The camera may have a tracking function so that the lens position is adjusted as the user moves. When the user starts using the VR device, the camera starts and sends a real-time picture to a connected control device, for example a computer. The camera may also be provided with a wireless communication module, for example a WIFI module, so that the captured real-time picture can be transmitted through it to a communication device such as a mobile phone. After receiving the real-time picture, the control device extracts all information in the real-time picture and judges whether a characteristic user meeting the characteristic standard exists in it. The real-time picture shot by the camera records all motions of all persons present, including upper-body and lower-body motions.
Typically, the relatively independent space used with the VR device also contains people other than the user wearing the VR device, such as staff, or companions of the user who are not wearing VR devices. Therefore, the persons in the real-time picture must be screened to confirm the characteristic user. The characteristic standard may be whether the person is wearing a VR device, or whether the person is wearing a corresponding identifier; for example, each characteristic user may wear a bracelet carrying the corresponding identifier, and whoever wears the bracelet is a characteristic user. Preferably, characteristic users are screened by whether they are wearing a VR device.
In some embodiments, for example, the step S110 may include the following steps:
confirming whether a user carries VR equipment or not in the real-time picture;
if the VR equipment carried by the user exists in the real-time picture, confirming whether the VR equipment is consistent with preset VR equipment or not;
and if the VR equipment is consistent with the preset VR equipment, the user is identified as the characteristic user.
In the embodiment of the application, the real-time picture contains a large amount of miscellaneous information and often many people, so characteristic users can be screened by whether they wear a VR device. When a user wearing a VR device appears in the real-time picture, it can further be judged whether the model of the worn VR device matches the preset VR device; if it does, the user can be identified as a characteristic user. There may be one or more characteristic users. When there are several, they can be further classified, for example into a first, second and third characteristic user, and one of them can be set as the main user to facilitate connection with the other characteristic users.
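The screening just described can be sketched as a simple filter over the people detected in the real-time picture. The PRESET_MODEL string and the user dictionaries are hypothetical placeholders, not part of the patent:

```python
PRESET_MODEL = "HJ-VR-1"  # hypothetical preset VR device model


def is_feature_user(user):
    """A person counts as a characteristic user only when they carry
    a VR device whose model matches the preset one."""
    return user.get("vr_device") == PRESET_MODEL


# People in the real-time picture: staff without a headset, a bystander
# with a different headset, and one user wearing the preset model.
users = [
    {"name": "staff"},
    {"name": "bystander", "vr_device": "other-model"},
    {"name": "player", "vr_device": "HJ-VR-1"},
]
feature_users = [u for u in users if is_feature_user(u)]
```

With several matches, the first entry of `feature_users` could be designated the main user, as described above.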
And S120, if the characteristic users meeting the characteristic standard exist in the real-time picture, generating a human body posture model according to the characteristic users.
In the embodiment of the application, when a characteristic user meeting the characteristic standard exists in the real-time picture, a human body posture model can be generated from that characteristic user. The human body posture model may be a human skeleton diagram generated by an existing pose-estimation algorithm, with the skeleton matching the features of the corresponding characteristic user. If there are several characteristic users, a human body posture model can be generated for each. The generated human body posture model is a static model whose size and shape are consistent with those of the corresponding characteristic user.
In some embodiments, if a characteristic user meeting the characteristic standard exists in the real-time picture, the method may further include the following steps:
confirming whether the characteristic user is in a preset area or not;
and if the characteristic user is in the preset area, entering the step of generating a human body posture model according to the characteristic user.
In the embodiment of the application, after the characteristic user is confirmed, it can be judged whether the user is in a preset area. Since most VR devices work best within a specific area, it can be further determined whether the characteristic user is inside that preset area. When the characteristic user is in the preset area, this indicates that use has started, and a human body posture model of the characteristic user can be generated; when the characteristic user is not in the preset area, this indicates that the VR device is not yet in use, and generation of the model can be deferred to reduce resource consumption.
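Assuming (purely for illustration) a rectangular preset area on the floor plane, the containment check above reduces to a bounds test on the user's tracked position:

```python
def in_preset_area(position, area):
    """Return True when the characteristic user's (x, y) floor position
    lies inside the rectangular preset area (x0, y0, x1, y1).
    The rectangle is a simplifying assumption; real play spaces may
    have arbitrary boundaries."""
    x, y = position
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1


AREA = (0.0, 0.0, 3.0, 3.0)  # hypothetical 3 m x 3 m play space
```

Only when `in_preset_area(...)` is true would the posture model be generated; otherwise generation is skipped to save resources, as described above.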
S130, controlling the human body posture model to synchronize actions of the characteristic users in the real-time picture, and generating a virtual picture.
In the embodiment of the application, the human body posture model is a static model whose size and shape are consistent with those of the characteristic user. After the model is generated, it can be controlled to simulate the motions of the corresponding characteristic user. For example, when the characteristic user raises the right hand, the corresponding posture model raises its right hand synchronously; when the characteristic user extends the left foot, the model extends its left foot synchronously; and when the characteristic user jumps, the model jumps synchronously. The series of motions performed by the posture model is then rendered into a virtual picture, that is, a picture of the model imitating the characteristic user. When there are several characteristic users, each generates a corresponding human body posture model, and each model simulates the motions of its own characteristic user.
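The per-frame synchronization above can be sketched as driving a joint dictionary with each observed frame and snapshotting the result as a virtual picture. All names and the joint representation are illustrative assumptions:

```python
def render_virtual_frames(model, observed_frames):
    """Drive the (otherwise static) posture model with each observed
    frame of the characteristic user's motion and collect the rendered
    virtual pictures, one snapshot per frame."""
    virtual_pictures = []
    for joints in observed_frames:
        model.update(joints)                   # synchronize the model's pose
        virtual_pictures.append(dict(model))   # snapshot = one virtual picture
    return virtual_pictures


# The user raises the right hand over three successive frames;
# joints the camera did not re-observe keep their last position.
model = {"right_hand": (0.0, 0.0), "left_foot": (0.0, 0.0)}
frames = [{"right_hand": (0.0, 0.3)},
          {"right_hand": (0.0, 0.6)},
          {"right_hand": (0.0, 0.9)}]
pictures = render_virtual_frames(model, frames)
```

Each snapshot in `pictures` corresponds to one virtual picture that step S140 would forward to the VR device.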
And S140, synchronizing the virtual picture to the VR device.
In the embodiment of the application, the generated virtual picture is sent to the VR device, so that the virtual posture of the corresponding characteristic user is presented on the VR device, allowing the user to adjust their motions in time. When there are several characteristic users, the virtual pictures of all characteristic users can be composited into one virtual picture that is sent to everyone synchronously, or each user's virtual picture can be sent individually to that user. It is further noted that the virtual picture may be presented in any virtual display area provided by the VR device; for example, it may be shown as a window in the lower-left corner of the display area.
In some embodiments, for example, the method for displaying a whole-body human motion in real time according to the embodiment of the present application further includes the following steps:
if a plurality of feature users meeting the feature standard exist in the real-time picture, confirming whether an authorization request is received or not;
if the authorization request is received, setting one of the characteristic users as a main user, and setting other characteristic users as non-main users;
generating a first human body posture model according to the main user, and generating a second human body posture model according to the non-main user;
controlling the first human body posture model to synchronize the actions of the main user in the real-time picture, and controlling the second human body posture model to synchronize the actions of the non-main user in the real-time picture, and generating a synthetic virtual picture;
and respectively sending the synthesized virtual pictures to VR equipment corresponding to the master user and VR equipment corresponding to the non-master user.
In the embodiment of the application, when several characteristic users meeting the characteristic standard exist in the real-time picture, it can be judged whether an authorization request has been received. In most scenarios where several people use VR devices at the same time, the number of users is two or three; take two as an example. When two characteristic users meeting the characteristic standard exist in the real-time picture, one can be set as the main user and the other as a non-main user, and a connection between them can then be established, switching the VR scene from a single-user scene to a multi-user scene in which both parties share the virtual picture. A first human body posture model is then generated from the main user and a second from the non-main user; the first model is controlled to simulate the motions of the main user and the second model those of the non-main user, and a virtual picture is generated. The virtual picture may consist of a first picture corresponding to the first posture model and a second picture corresponding to the second posture model, with the first picture sent to the VR device of the first model's user and the second to that of the second model's user. Alternatively, the first and second pictures may be combined into a third picture that is sent to both the VR device corresponding to the first posture model and the VR device corresponding to the second posture model.
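The compositing-and-dispatch variant above can be sketched in a few lines; the dictionary-based "pictures" and device identifiers are hypothetical stand-ins for rendered frames and network endpoints:

```python
def composite(first_picture, second_picture):
    """Combine the main user's and the non-main user's virtual
    pictures into one third picture shared by both parties."""
    return {"main": first_picture, "non_main": second_picture}


def dispatch(picture, vr_devices):
    """Send the same composite picture to every listed VR device
    (returning a dict stands in for the actual sends)."""
    return {device: picture for device in vr_devices}


third = composite({"pose": "wave"}, {"pose": "jump"})
outbox = dispatch(third, ["vr-main", "vr-non-main"])
```

Both headsets receive the identical third picture, which is what lets the two users share one multi-user scene.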
In some embodiments, for example, the method for displaying whole-body human motions in real time according to the embodiment of the present application further includes the following step: if it is detected in the real-time picture that the characteristic user has removed the corresponding VR device, stopping generation of the human body posture model.
In the embodiment of the application, when the characteristic user takes the VR device off their head, generation of the human body posture model can be stopped to avoid wasting resources. In addition, when the real-time picture shot by the camera is sent to the control device, the control device can apply mosaic processing to the faces of the users in the picture to prevent privacy leakage.
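The mosaic processing mentioned above can be illustrated as block-averaging a region of a grayscale frame. This is a toy stand-in using a list of rows; a real system would operate on camera images with an image library:

```python
def mosaic(frame, region, block=2):
    """Pixelate a rectangular region (top, left, bottom, right) of a
    grayscale frame (a list of rows of integers) by replacing each
    block x block cell with its average value, obscuring detail such
    as a face while keeping the overall picture intact."""
    top, left, bottom, right = region
    for by in range(top, bottom, block):
        for bx in range(left, right, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, bottom))
                     for x in range(bx, min(bx + block, right))]
            avg = sum(frame[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                frame[y][x] = avg
    return frame


# A tiny 4x4 frame; pixelate only the top-left 2x2 "head" region.
frame = [[0, 10, 3, 3],
         [20, 30, 3, 3],
         [3, 3, 3, 3],
         [3, 3, 3, 3]]
mosaic(frame, (0, 0, 2, 2), block=2)
```

After the call, the four head-region cells all hold their average (15) while the rest of the frame is untouched.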
Fig. 2 is a schematic block diagram of an apparatus 100 for displaying a whole-body human motion in real time according to an embodiment of the present application. As shown in fig. 2, the present application also provides a device 100 for displaying the motion of the whole body in real time, corresponding to the above method for displaying the motion of the whole body in real time. The apparatus 100 for displaying the motion of the whole body in real time includes means for performing the above-described method for displaying the motion of the whole body in real time. Specifically, referring to fig. 2, the apparatus 100 for displaying motion of a whole body in real time includes a first obtaining unit 110, a first generating unit 120, a first synchronizing unit 130, and a first transmitting unit 140.
The first obtaining unit 110 is configured to obtain a real-time frame and determine whether a feature user meeting a feature standard exists in the real-time frame; the first generating unit 120 is configured to generate a human body posture model according to the feature user if the feature user meeting the feature standard exists in the real-time frame; the first synchronization unit 130 is configured to control the human body posture model to synchronize the actions of the feature user in the real-time frame, and generate a virtual frame; the first sending unit 140 is configured to synchronize the virtual picture to a VR device.
In some embodiments, for example, the first obtaining unit 110 includes a first confirmation unit, a second confirmation unit, and a first identification unit.
The first confirming unit is used for confirming whether a user carries the VR equipment or not in the real-time picture; the second confirmation unit is used for confirming whether the VR equipment is consistent with a preset VR equipment or not if the VR equipment carried by the user exists in the real-time picture; and the first identification unit is used for identifying the user as the characteristic user if the VR equipment is consistent with the preset VR equipment.
The application also provides a device for displaying whole-body human motions in real time, in which a third confirmation unit and a first entering unit are added on the basis of the above embodiment.
The third confirming unit is used for confirming whether the characteristic user is in a preset area or not; and the first entering unit is used for entering the step of generating a human body posture model according to the characteristic user if the characteristic user is in the preset area.
The device for displaying the whole body human body motion in real time is added with a fourth confirmation unit, a first setting unit, a second generating unit, a second synchronizing unit and a second transmitting unit on the basis of the embodiment.
The fourth confirmation unit is used for confirming whether an authorization request is received or not if a plurality of feature users meeting the feature standard exist in the real-time picture; the first setting unit is used for setting one of the characteristic users as a main user and setting other characteristic users as non-main users if the authorization request is received; the second generation unit is used for generating a first human body gesture model according to the main user and generating a second human body gesture model according to the non-main user; the second synchronization unit is used for controlling the first human body posture model to synchronize the actions of the main user in the real-time picture, controlling the second human body posture model to synchronize the actions of the non-main user in the real-time picture, and generating a synthetic virtual picture; and the second sending unit is used for respectively sending the synthesized virtual pictures to the VR equipment corresponding to the master user and the VR equipment corresponding to the non-master user.
The application also provides a device for displaying the whole-body human body motion in real time, which is characterized in that a first detection unit is added on the basis of the embodiment.
The first detection unit is used for stopping generation of the human body posture model if it is detected in the real-time picture that the characteristic user has removed the corresponding VR device.
It should be noted that, as those skilled in the art can clearly understand, the specific implementation process of the device and each unit for displaying the motion of the whole body in real time can refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the detailed description is omitted herein.
The above-described means for displaying the motion of the whole body in real time may be implemented in the form of a computer program which can be run on a computer device as shown in fig. 3.
Referring to fig. 3, fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application. As shown in fig. 3, the computer device 500 includes a processor 502, a memory, and an interface 505, which are connected by a system bus 501; the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform a method of displaying whole body human actions in real time.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a method of displaying whole body human actions in real time.
The interface 505 is used to communicate with other devices. It will be appreciated by those skilled in the art that the architecture shown in fig. 3 is merely a block diagram of part of the architecture relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or arrange the components differently.
It should be appreciated that in embodiments of the present application, the processor 502 may be a Central Processing Unit (CPU); the processor 502 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any other conventional processor.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program may be stored in a storage medium that is a computer readable storage medium. The computer program is executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program which, when executed by a processor, implements any embodiment of the method for displaying whole-body human motions in real time described above.
The storage medium may be a U-disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk, or other various computer-readable storage media that may store program codes.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two; the composition and steps of the examples have been described above generally in terms of function to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device to perform all or part of the steps of the methods according to the embodiments of the present application.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. A method of displaying a whole-body human action in real time, the method comprising:
acquiring a real-time picture, and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture;
if a characteristic user meeting the characteristic standard exists in the real-time picture, generating a human body posture model according to the characteristic user;
controlling the human body posture model to synchronize the actions of the characteristic user in the real-time picture, and generating a virtual picture;
synchronizing the virtual picture to a VR device.
2. The method of claim 1, wherein the step of confirming whether a characteristic user meeting the characteristic standard exists in the real-time picture comprises:
confirming whether a user carrying a VR device exists in the real-time picture;
and if a user carrying a VR device exists in the real-time picture, identifying the user as the characteristic user.
3. The method of claim 2, wherein after the step of determining that a user carrying a VR device exists in the real-time picture, the method further comprises:
confirming whether the VR device is consistent with a preset VR device;
and if the VR device is consistent with the preset VR device, entering the step of identifying the user as the characteristic user.
4. The method according to claim 1, wherein after the step of determining that a characteristic user meeting the characteristic standard exists in the real-time picture, the method further comprises:
confirming whether the characteristic user is in a preset area or not;
and if the characteristic user is in the preset area, entering the step of generating a human body posture model according to the characteristic user.
5. The method of claim 1, wherein the method further comprises:
if a plurality of characteristic users meeting the characteristic standard exist in the real-time picture, confirming whether an authorization request is received;
and if the authorization request is received, setting one of the characteristic users as a main user, and setting other characteristic users as non-main users.
6. The method of claim 5, wherein the method further comprises:
generating a first human body posture model according to the main user, and generating a second human body posture model according to the non-main user;
controlling the first human body posture model to synchronize the actions of the main user in the real-time picture, controlling the second human body posture model to synchronize the actions of the non-main user in the real-time picture, and generating a synthesized virtual picture;
and respectively sending the synthesized virtual picture to the VR device corresponding to the main user and the VR device corresponding to the non-main user.
7. The method of claim 1, wherein the method further comprises:
and if it is detected in the real-time picture that the VR device corresponding to the characteristic user has left the characteristic user, stopping generation of the human body posture model.
8. A device for displaying whole-body human actions in real time, the device comprising:
the first acquisition unit is used for acquiring a real-time picture and confirming whether a characteristic user meeting a characteristic standard exists in the real-time picture;
the first generation unit is used for generating a human body posture model according to the characteristic user if a characteristic user meeting the characteristic standard exists in the real-time picture;
the first synchronization unit is used for controlling the human body posture model to synchronize the actions of the characteristic user in the real-time picture and generating a virtual picture;
and the first sending unit is used for synchronizing the virtual picture to the VR equipment.
9. A computer device comprising a memory and a processor coupled to the memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to perform the steps of the method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
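For illustration only, the flow of claims 1-3 (detect a characteristic user in the real-time picture, optionally check the VR device against a preset model, then build and synchronize a posture model) can be sketched as follows. This is a minimal sketch, not the patented implementation: the types `DetectedPerson` and `PoseModel` and all field names are hypothetical stand-ins for whatever detection and modeling pipeline an implementer actually uses.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record for one person detected in a camera frame.
@dataclass
class DetectedPerson:
    user_id: str
    wearing_vr_device: bool            # the "characteristic standard" of claim 2
    device_model: str = ""
    joints: dict = field(default_factory=dict)  # joint name -> (x, y)

# Hypothetical posture model mirroring the user's joints (claim 1, step 2).
@dataclass
class PoseModel:
    user_id: str
    joints: dict

def find_characteristic_user(frame: List[DetectedPerson],
                             preset_device: Optional[str] = None) -> Optional[DetectedPerson]:
    """Claims 2-3: the characteristic user is a detected person carrying a VR
    device; if a preset device model is given, it must match (claim 3)."""
    for person in frame:
        if not person.wearing_vr_device:
            continue
        if preset_device is not None and person.device_model != preset_device:
            continue
        return person
    return None

def synchronize(person: DetectedPerson) -> PoseModel:
    """Claim 1, step 3: make the posture model copy the user's current joints."""
    return PoseModel(user_id=person.user_id, joints=dict(person.joints))

# Usage: one frame containing a bystander and one headset-wearing user.
frame = [
    DetectedPerson("bystander", wearing_vr_device=False),
    DetectedPerson("player", wearing_vr_device=True, device_model="headset-x",
                   joints={"head": (0.5, 0.1), "left_hand": (0.3, 0.5)}),
]
user = find_characteristic_user(frame, preset_device="headset-x")
model = synchronize(user)
print(model.user_id)  # prints "player"
```

In a real system the frame would come from a camera feed, the joints from a pose estimator, and the resulting virtual picture would be rendered and streamed to the VR device (claim 1, step 4); those stages are omitted here.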
CN202311194281.4A 2023-09-15 2023-09-15 Method, device, computer equipment and storage medium for displaying whole body human body actions in real time Pending CN117115400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311194281.4A CN117115400A (en) 2023-09-15 2023-09-15 Method, device, computer equipment and storage medium for displaying whole body human body actions in real time


Publications (1)

Publication Number Publication Date
CN117115400A true CN117115400A (en) 2023-11-24

Family

ID=88810931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311194281.4A Pending CN117115400A (en) 2023-09-15 2023-09-15 Method, device, computer equipment and storage medium for displaying whole body human body actions in real time

Country Status (1)

Country Link
CN (1) CN117115400A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
US20180342109A1 (en) * 2017-05-25 2018-11-29 Thomson Licensing Determining full-body pose for a virtual reality environment
US10242501B1 (en) * 2016-05-03 2019-03-26 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
CN109951628A (en) * 2017-12-21 2019-06-28 广东欧珀移动通信有限公司 Model building method, photographic method, device, storage medium and terminal
CN110349527A (en) * 2019-07-12 2019-10-18 京东方科技集团股份有限公司 Virtual reality display methods, apparatus and system, storage medium
CN112130660A (en) * 2020-08-14 2020-12-25 青岛小鸟看看科技有限公司 Interaction method and system based on virtual reality all-in-one machine
CN113608613A (en) * 2021-07-30 2021-11-05 建信金融科技有限责任公司 Virtual reality interaction method and device, electronic equipment and computer readable medium
CN114241168A (en) * 2021-12-01 2022-03-25 歌尔光学科技有限公司 Display method, display device, and computer-readable storage medium
CN116524081A (en) * 2023-01-17 2023-08-01 小沃科技有限公司 Virtual reality picture adjustment method, device, equipment and medium
WO2023160356A1 (en) * 2022-02-25 2023-08-31 凝动医疗技术服务(上海)有限公司 Method and system for enhancing user experience of virtual reality system
CN117425870A (en) * 2021-06-02 2024-01-19 元平台技术有限公司 Dynamic mixed reality content in virtual reality

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242501B1 (en) * 2016-05-03 2019-03-26 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
US20180342109A1 (en) * 2017-05-25 2018-11-29 Thomson Licensing Determining full-body pose for a virtual reality environment
CN109951628A (en) * 2017-12-21 2019-06-28 广东欧珀移动通信有限公司 Model building method, photographic method, device, storage medium and terminal
CN110349527A (en) * 2019-07-12 2019-10-18 京东方科技集团股份有限公司 Virtual reality display methods, apparatus and system, storage medium
CN112130660A (en) * 2020-08-14 2020-12-25 青岛小鸟看看科技有限公司 Interaction method and system based on virtual reality all-in-one machine
CN117425870A (en) * 2021-06-02 2024-01-19 元平台技术有限公司 Dynamic mixed reality content in virtual reality
CN113608613A (en) * 2021-07-30 2021-11-05 建信金融科技有限责任公司 Virtual reality interaction method and device, electronic equipment and computer readable medium
CN114241168A (en) * 2021-12-01 2022-03-25 歌尔光学科技有限公司 Display method, display device, and computer-readable storage medium
WO2023160356A1 (en) * 2022-02-25 2023-08-31 凝动医疗技术服务(上海)有限公司 Method and system for enhancing user experience of virtual reality system
CN116700471A (en) * 2022-02-25 2023-09-05 凝动医疗技术服务(上海)有限公司 Method and system for enhancing user experience of virtual reality system
CN116524081A (en) * 2023-01-17 2023-08-01 小沃科技有限公司 Virtual reality picture adjustment method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYEJIN KIM: "TeleGate: Immersive Multi-User Collaboration for Mixed Reality 360° Video", 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 6 May 2021 (2021-05-06) *
余兴尧: "Research on Key Technologies of Human-Adaptive Display Optimization in Virtual Reality", China Masters' Theses Full-text Database, Information Science and Technology series, 15 July 2021 (2021-07-15) *

Similar Documents

Publication Publication Date Title
US11315336B2 (en) Method and device for editing virtual scene, and non-transitory computer-readable storage medium
EP3096208B1 (en) Image processing for head mounted display devices
RU2668408C2 (en) Devices, systems and methods of virtualising mirror
JP6627861B2 (en) Image processing system, image processing method, and program
JP6918455B2 (en) Image processing equipment, image processing methods and programs
US7999843B2 (en) Image processor, image processing method, recording medium, computer program, and semiconductor device
KR20190124766A (en) Mixed Reality Viewer System and Methods
CN114365197A (en) Placing virtual content in an environment with multiple physical participants
CN106797460A (en) The reconstruction of 3 D video
CN102959616A (en) Interactive reality augmentation for natural interaction
CN108304063A (en) Information processing unit, information processing method and computer-readable medium
CN109582122B (en) Augmented reality information providing method and device and electronic equipment
US20150172634A1 (en) Dynamic POV Composite 3D Video System
JP2008027086A (en) Facial expression inducing device, facial expression inducing method, and facial expression inducing system
US20200316462A1 (en) Head Mounted Display and Method
CN111670431B (en) Information processing device, information processing method, and program
CN112019826A (en) Projection method, system, device, electronic equipment and storage medium
JP2016213674A (en) Display control system, display control unit, display control method, and program
JP2023017920A (en) Image processing device
JP2014182597A (en) Virtual reality presentation system, virtual reality presentation device, and virtual reality presentation method
US20150019657A1 (en) Information processing apparatus, information processing method, and program
WO2024131479A1 (en) Virtual environment display method and apparatus, wearable electronic device and storage medium
CN111818382B (en) Screen recording method and device and electronic equipment
JPWO2009119288A1 (en) Communication system and communication program
CN113596323A (en) Intelligent group photo method, device, mobile terminal and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination