CN110431513A - Media content sending method, device and storage medium - Google Patents


Info

Publication number
CN110431513A
Authority
CN
China
Prior art keywords
virtual
virtual carrier
client
real
carrier
Prior art date
Legal status
Granted
Application number
CN201880003415.0A
Other languages
Chinese (zh)
Other versions
CN110431513B
Inventor
陈志浩
张振毅
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN110431513A
Application granted
Publication of CN110431513B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application discloses a media content sending method. The method includes: displaying a virtual space that contains at least one virtual carrier and a virtual control body corresponding to a first client; when the virtual control body grabs one of the virtual carriers and the state of that virtual carrier is a first state, acquiring input media content and sending the media content together with the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notifying the application server to set the state of the virtual carrier to a second state; and controlling, by the virtual control body, the virtual carrier into a release state. Also disclosed are corresponding media content sending methods, devices and storage media applied to an application server.

Description

Media content sending method, device and storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a media content sending method and apparatus in a virtual reality environment, and a storage medium.
Background
Virtual Reality (VR) technology uses a computer or other intelligent computing equipment, combined with photoelectric sensing technology, to generate a vivid virtual environment that integrates vision, hearing and touch within a specific range. The virtual space generated by virtual reality technology provides visual, auditory and tactile experiences for a user, creating an immersive feeling of being present in the virtual space. Because a virtual reality system can overcome the limitations of physical conditions and create diversified scenes to meet diversified application requirements, virtual reality technology is widely applied in many fields. For example, it can be applied in the field of games, such as shooting games and tennis games combined with VR, where the immersive scene provided by the technology increases the interest of the games. Virtual reality technology is therefore receiving increasing attention.
Technical content
The embodiment of the application provides a media content sending method, which is applied to a first client side and comprises the following steps:
displaying a virtual space, wherein the virtual space comprises at least one virtual carrier and a virtual control body corresponding to the first client;
when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquiring input media content, sending the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notifying the application server to set the state of the virtual carrier to a second state;
and the virtual control body controls the virtual carrier to be in a release state.
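The three client-side steps above amount to a small state machine on the carrier. The following is a minimal, hypothetical sketch of that flow; the class and method names (`VirtualCarrier`, `FirstClient`, `store`, `set_state`) are illustrative, not from the patent:

```python
# Hypothetical sketch of the client-side flow (display, grab-and-record, release).
STATE_INITIAL = 1   # "first state": no media content associated yet
STATE_RECORDED = 2  # "second state": media content associated

class VirtualCarrier:
    def __init__(self, carrier_id):
        self.carrier_id = carrier_id
        self.state = STATE_INITIAL

class FirstClient:
    def __init__(self, content_server, app_server):
        self.content_server = content_server  # data content server stub
        self.app_server = app_server          # application server stub

    def on_grab(self, carrier, media_content):
        # Only a carrier in the initial (first) state accepts new media content.
        if carrier.state != STATE_INITIAL:
            return False
        # Associate the content with the carrier's identifier on the data content server.
        self.content_server.store(carrier.carrier_id, media_content)
        # Notify the application server to move the carrier into the second state.
        self.app_server.set_state(carrier.carrier_id, STATE_RECORDED)
        carrier.state = STATE_RECORDED
        return True
```

A grab on an already-recorded carrier is rejected here, matching the method's requirement that content is acquired only in the first state.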
The embodiment of the present application further provides a media content sending method, applied to an application server, including:
sending information of at least one virtual carrier and information of a virtual control body associated with a first client to the first client, wherein the information of the at least one virtual carrier comprises: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body; enabling the first client to display the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body, when the virtual control body grabs one virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, acquiring input media content, and sending the media content and the identifier of the virtual carrier to a data content server so that the data content server associates the media content with the identifier of the virtual carrier;
receiving a notification message sent by the first client, and setting the state of the virtual carrier to be a second state according to the notification message;
and sending the information of the virtual carrier to a second client so as to enable the second client to acquire the media content associated with the identification of the virtual carrier from the data content server.
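On the server side, the steps above reduce to holding per-carrier records, flipping the state on the first client's notification, and forwarding the updated carrier information to second clients. A hypothetical sketch (all names are illustrative, not from the patent):

```python
# Illustrative application-server sketch: per-carrier state and position,
# updated on notification and pushed to subscribed second clients.
class ApplicationServer:
    def __init__(self):
        # carrier_id -> {"state": int, "pos": (x, y, z)}
        self.carriers = {}
        self.subscribers = []  # second clients to notify

    def register_carrier(self, carrier_id, pos):
        self.carriers[carrier_id] = {"state": 1, "pos": pos}

    def initial_info(self):
        # Sent to the first client on login: identifier, state, real-time position.
        return {cid: dict(info) for cid, info in self.carriers.items()}

    def on_notification(self, carrier_id):
        # Notification from the first client: content recorded, set the second state.
        self.carriers[carrier_id]["state"] = 2
        self.broadcast(carrier_id)

    def broadcast(self, carrier_id):
        info = self.carriers[carrier_id]
        for client in self.subscribers:
            client.update_carrier(carrier_id, info["state"], info["pos"])
```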
The application example also provides a media content sending device, which comprises:
the display module is used for displaying a virtual space, and the virtual space comprises at least one virtual carrier and a virtual control body corresponding to the first client;
a media content sending module, configured to: when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, obtain input media content, send the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to a second state;
And the release module is used for controlling the virtual carrier to be in a release state by the virtual control body.
The application example also provides a media content sending device, which comprises:
a first information sending module, configured to send, to a first client, information of at least one virtual carrier and information of a virtual control entity associated with the first client, where the information of the at least one virtual carrier includes: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body; enabling the first client to display the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body, when the virtual control body grabs one virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, acquiring input media content, and sending the media content and the identifier of the virtual carrier to a data content server so that the data content server associates the media content with the identifier of the virtual carrier;
the message receiving module is used for receiving a notification message sent by the first client and setting the state of the virtual carrier to be a second state according to the notification message;
and the second information sending module is used for sending the information of the virtual carrier to a second client so as to enable the second client to obtain the media content associated with the identifier of the virtual carrier from the data content server.
The present examples also provide a computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method as described above for the first client.
The present examples also provide a computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method as described above for an application server.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a system architecture diagram to which an example of the present application relates;
fig. 2 is a flowchart of a media content transmitting method applied to a first client according to an example of the present application;
FIG. 3A is a schematic diagram of the selection of a virtual carrier in a virtual space;
FIG. 3B is a schematic diagram of grabbing a virtual carrier in a virtual space;
FIG. 3C is a schematic illustration of associating media content to a virtual carrier in a virtual space;
FIG. 3D is a schematic illustration of releasing a virtual carrier in a virtual space;
fig. 4A is a schematic structural diagram of a first VR device headset 400;
fig. 4B is a schematic diagram of a first VR device controller 410;
fig. 5 is a flowchart illustrating a method for acquiring media content associated with a virtual carrier after the virtual carrier is selected according to an embodiment of the present application;
fig. 6 is a detailed flowchart of acquiring media content associated with a virtual carrier according to an example of the present application;
FIG. 7 is a flow chart illustrating interaction with a virtual carrier according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating interactive information carried by a virtual carrier according to an embodiment of the present application;
FIG. 9 is a schematic flow chart illustrating a monitoring gesture "pan-pan" operation according to an embodiment of the present application;
fig. 10 is a flowchart illustrating a media content sending method applied to an application server according to an example of the present application;
FIG. 11 is a message interaction diagram of a virtual voice ball delivering voice in accordance with an example of the present application;
FIG. 12 is a block diagram of an exemplary media content delivery device according to the present application;
FIG. 13 is a schematic diagram of another example of a media content delivery apparatus according to the present application;
and FIG. 14 is a block diagram of a computing device in an example of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without any inventive work belong to the scope of protection of the present invention.
The media content sending method can be applied to a VR system. Fig. 1 shows a VR system 100, comprising: a first client 101, a second client 102, a first VR device 103, a second VR device 104, an application server 105 and a data content server 106. In VR system 100, a plurality of second clients 102 and corresponding second VR devices may be included. The first client 101 and the second client 102 are connected to the application server 105 and the data content server 106 via the internet.
The first client 101 and the second client 102 are VR clients (i.e., VR apps). The first VR device 103 and the second VR device 104 may include user-operable controllers and wearable equipment (e.g., various VR head-mounted devices, VR body-sensing devices, etc.). The first client 101 can exchange information with the first VR device 103 to provide an immersive VR image for the user and complete corresponding operating functions, and the second client 102 can exchange information with the second VR device 104 in the same way. In fig. 1, the first VR device is a component separate from the first client 101 and the second VR device is a component separate from the second client 102; in some examples, the first VR device is integrated with the first client 101 and the second VR device is integrated with the second client 102. The VR client can show corresponding VR image data to the user according to the position information and motion information of the user in the virtual space provided by the wearable equipment, bringing an immersive experience to the user. The VR client may also perform corresponding operations in response to instructions from the user operating the controller, such as grabbing a virtual object in the virtual space. The VR client may generate VR panoramic image data according to the position data and motion data of virtual objects in the virtual space, such as panoramic pictures, panoramic videos and VR games. The application server 105 is a VR application server (VR server for short); real-time position data, motion data and state data of the virtual objects in the virtual space are stored in the VR server, which can perform corresponding processing in response to requests from a VR client.
For example, in response to a login request of a VR client, real-time position data, motion data, and state data of a virtual object in a virtual space are sent to the VR client. The data content server 106 is configured to receive media content uploaded by the VR client, the media content being associated with a virtual carrier in a virtual space. The data content server 106 may also send media content to the VR client in response to requests by the VR client.
Here, the terminal device where the VR client is located refers to a terminal device with data computing and processing functions, including but not limited to a smartphone (with a communication module installed), a palmtop computer, a tablet computer, and the like. The terminal device may also be integrated with the VR device as a single piece of equipment. These communication terminals all run an operating system, including but not limited to the Android operating system, the Symbian operating system, the Windows Mobile operating system, and Apple's iPhone OS, among others. The VR Head-Mounted Device (HMD) described above includes a screen that can display a real-time picture. The data content server may be a CDN (Content Delivery Network) server.
The present application provides a media content sending method 200, applied to a VR client, as shown in fig. 2, the method includes the following steps:
s201: and displaying a virtual space, wherein the virtual space comprises at least one virtual carrier and a virtual control body corresponding to the first client.
The virtual space comprises one or more virtual carriers and one or more virtual characters. The virtual carriers are used for transmitting information in the virtual space: for example, the client corresponding to one virtual character associates information with a virtual carrier, and when that virtual character releases the virtual carrier and another virtual character captures it, the client corresponding to the other virtual character can acquire the information associated with the virtual carrier. Each virtual character corresponds to one client, and the VR equipment associated with that client is operated to control the corresponding virtual character to complete corresponding operations. The virtual control body is associated with the first client; for example, the virtual control body is the pair of virtual hands of the virtual character associated with the first client. The clients (including the first client) are developed on a VR-enabled three-dimensional rendering engine (e.g., UE4). The first client contains the 3D models of the virtual carriers and the virtual characters, and can acquire the information of the virtual carriers and of the virtual characters from the application server and place the 3D models in the virtual space accordingly. All the mesh data to be rendered onto the screen are then generated through coordinate transformation, and the mesh data are rendered to generate a virtual reality image. The generated virtual reality image is sent to the display screen of the head-mounted device of the first VR device associated with the first client for display, so that the corresponding virtual space is shown. When the first client performs this rendering, only the two hands of the first virtual character associated with the first client may be rendered, while the body of the first virtual character is ignored.
The virtual control body is the two hands of the first virtual character. For example, in the VR game scenario 300, a virtual space as shown in fig. 3A is presented, in which four virtual players 303 are included. For the current player, only the player's virtual hand 301, i.e. the virtual control body described above, is shown. The virtual control body corresponds to a controller (also referred to as a handle) of the first VR device, and can be made to perform a desired operation by operating the controller. Also included in the virtual space is a virtual voice ball 302, i.e. the virtual carrier described above. A virtual player in the virtual space can capture the virtual voice ball 302, associate voice with it, and then release it, for example by throwing it out; the virtual player receiving the virtual voice ball 302 then acquires the voice associated with it. The transfer of speech is thus achieved through the virtual voice ball 302. The number of virtual voice balls in the virtual space may be one or more, and is not particularly limited here.
S202: when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquiring input media content, sending the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notifying the application server to set the state of the virtual carrier to a second state.
The virtual carrier has different states: the first state is the initial state, i.e. the state in which no media content is associated with the virtual carrier, and the second state is the state in which media content has been associated with it. When the state of the virtual carrier differs, its color in the virtual space differs. For example, in the virtual space of the VR game shown in fig. 3A, the virtual voice ball in the first state (initial state) is, for example, gold. In fig. 3B, after the virtual voice ball has been captured and associated with media content, the virtual voice ball in the second state (the state after the media content is associated) is, for example, red. When the captured virtual carrier is in the first state, that is, the initial state, it is associated with media content, where the media content may be media content stored locally and selected by the first client, or media content captured through a peripheral device associated with the first client, for example voice recorded by the recording component 404. The user may record a passage of speech or sing a song, and the recording component may be a microphone. The first client includes a real-time voice transmission part cooperating with the recording component; for example, a real-time voice transmission third-party library captures the voice from the recording component (e.g. a microphone) in real time in each frame, converts it into PCM data format, and stores it in a cache of the terminal where the first client is located. While the virtual carrier is being associated with the media content, a recording special effect can be added above the virtual carrier in the virtual space. As shown in fig. 3C, a special effect 306 is added above the voice ball.
After the voice data recording is finished, the PCM data in the cache is uploaded to the data content server 106 and stored as a PCM file, and the stored PCM file is associated with the voice ball identifier. The data content server 106 may be a CDN server, which ensures that clients can fetch the required voice file in time at any location. After the voice ball is recorded, its state changes: the first client notifies the application server 105, and the application server 105 sets the state of the voice ball to the second state, i.e. the recorded state. The first client communicates with the application server 105 and the data content server 106 via a network synchronization unit, such as the network synchronization module in the Unreal Engine.
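The per-frame capture into a local cache and the one-shot upload keyed by the voice-ball identifier can be sketched as follows. The uploader callback and the frame source are stand-ins, not a real CDN API:

```python
# Illustrative sketch: PCM chunks accumulate in a local cache each frame;
# finish() uploads the whole buffer associated with the ball's identifier.
class VoiceRecorder:
    def __init__(self, uploader):
        self.cache = bytearray()  # local PCM cache on the terminal
        self.uploader = uploader  # callable(ball_id, pcm_bytes), e.g. a CDN upload stub

    def on_frame(self, pcm_chunk: bytes):
        # Called once per rendered frame with the microphone's PCM data.
        self.cache.extend(pcm_chunk)

    def finish(self, ball_id: str):
        # Upload the accumulated PCM data, associated with the ball identifier,
        # then clear the local cache.
        self.uploader(ball_id, bytes(self.cache))
        self.cache.clear()
```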
S203: and the virtual control body controls the virtual carrier to be in a release state.
After the media content corresponding to the virtual carrier is acquired and uploaded to the data content server 106, the user can release the virtual carrier by operating the controller; for example, the user can make a throwing motion while holding the controller, so that the virtual control body in the virtual space throws the virtual carrier. Various ways of releasing the virtual carrier can be implemented, such as kicking it out, throwing it out, or batting it out. When the virtual character corresponding to the first client throws the virtual carrier out in the virtual space, the position data of the virtual carrier changes; the first client sends the real-time position data of the virtual carrier to the application server, and the application server pushes it to the second client, so that the second client updates the position of the virtual carrier in the virtual space accordingly. Meanwhile, when the state of the virtual carrier changes, its color also changes, and the application server pushes the state of the virtual carrier to the second client, so that the second client updates the color of the virtual carrier in the virtual space according to its state. For example, in the VR game scenario described above, when the virtual voice ball is thrown, other players can catch it and obtain the voice associated with it.
Specifically, the application server 105 pushes the state and identifier of the voice ball to the second client in real time. When the player corresponding to the second client selects the voice ball and the virtual voice ball is in the second state, the second client obtains the media content associated with the voice ball from the CDN server according to the voice ball's identifier. The transmission and exchange of media content through the voice ball is thereby realized.
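The second client's lookup described above is a simple state-gated fetch by identifier. A hedged sketch, in which `fetch` stands in for the request to the content server:

```python
# Sketch of the second client's retrieval: content is fetched from the data
# content server by identifier only when the ball is in the second (recorded) state.
STATE_RECORDED = 2

def get_ball_content(ball_id, ball_state, fetch):
    """Return the media content for ball_id, or None if nothing is recorded yet."""
    if ball_state != STATE_RECORDED:
        return None  # first-state ball: no media content associated
    return fetch(ball_id)
```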
With this media content sending method, when the virtual carrier is captured, media content is input and uploaded to the data content server so that the virtual carrier is associated with the media content, and the virtual carrier is then released into the virtual space. The virtual carrier serves as a carrier of the media content, and the associated media content is transmitted among multiple clients as the virtual carrier is passed around the virtual space. The interaction with the media content thus has 3D stereoscopic immersion, and because the media content is associated with a virtual carrier in the virtual space, the realism of VR social interaction is improved.
In some examples, executing the display of the virtual space in step S201 includes the following steps:
s301: receiving information of the at least one virtual carrier and information of the virtual control body, which are sent by an application server, wherein the information of the at least one virtual carrier comprises: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body.
S302: and displaying the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
The first client acquires the information of each virtual carrier and each virtual character in the virtual space from the application server, and then displays the virtual space according to the acquired information. In some examples, when a user logs in to the first client 101, the first client 101 sends a login request to the application server 105, and the application server 105 sends the information of each virtual carrier and each virtual character in the virtual space to the first client 101. The virtual control body is associated with the first virtual character; for example, the virtual control body is the two hands of the first virtual character. The information of a virtual carrier includes its identifier, its state, and its real-time location data in the virtual space. The virtual carrier can be a virtual voice ball in the virtual space; the virtual voice ball has the morphological characteristics and elastic physical properties of a ball and can serve the entertainment purposes of an ordinary ball. The information of a virtual character includes the initial position of the virtual character in the virtual space.
When displaying the virtual space, the first client displays each virtual carrier at the corresponding position of the virtual space according to its position information, and displays each virtual character according to its initial position information. Subsequently, the first client receives the real-time position data of each virtual carrier and each virtual character sent by the application server, and updates them in the virtual space accordingly. When the first client displays the virtual space, for the first virtual character associated with the first client, only the two hands (the virtual control body) of the first virtual character may be displayed. When the virtual control body is displayed, its position data can be determined from the position data of the virtual character. The initial position data of the first virtual character may be the position data of its center; for example, in a VR game, the center point of the virtual character is the center point of the virtual player, and the position of the virtual control body is determined from the position data of the character's center.
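Placing carriers from the server-sent information and applying subsequent real-time updates can be sketched as a small scene table. All names here are illustrative:

```python
# Sketch of the client-side scene: carriers are placed from the initial
# server info (identifier, state, position) and patched by later updates.
class Scene:
    def __init__(self, carrier_info):
        # carrier_info: {carrier_id: {"state": s, "pos": (x, y, z)}}
        self.carriers = {cid: dict(info) for cid, info in carrier_info.items()}

    def apply_update(self, carrier_id, state=None, pos=None):
        entry = self.carriers[carrier_id]
        if state is not None:
            entry["state"] = state  # a state change also changes the displayed color
        if pos is not None:
            entry["pos"] = pos      # move the carrier to its new real-time position
```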
In some examples, executing, in step S202, the acquisition of input media content when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is the first state includes the following steps:
s401: acquiring real-time position data of a controller associated with the first client;
s402: receiving real-time position data of the virtual carrier sent by an application server;
s403: when the virtual carrier is determined to be selected according to the real-time position data of the virtual carrier and the real-time position data of the controller, the position data and/or the motion data of the virtual control body and the virtual carrier in the virtual space are updated in response to a first interactive message sent by the controller, and the virtual control body grabs the virtual carrier; and when the state of the virtual carrier is a first state, acquiring the input media content.
A first VR device associated with the first client includes a controller, and the first client obtains the controller's location data. The first VR device further comprises a locator for acquiring the position data of the controller. The locator can use infrared optical positioning: it comprises a plurality of infrared-emitting cameras covering an indoor positioning space, infrared reflection points are placed on the controller, and the position of the controller in the space is determined by capturing the images of the reflection points reflected back to the cameras. The locator sends the acquired real-time position data of the controller to the first client. Alternatively, the locator can use an image tracker: a camera (or group of cameras) photographs the controller, and the position of the controller is then determined by image processing techniques. The real-time position data of the controller can also be obtained through ultrasonic three-dimensional spatial positioning: an ultrasonic tracker arranged on the controller emits high-frequency ultrasonic pulses, three receivers arranged on the ceiling of the real space receive the signals, the distances are calculated from the delays, the real-time position of the controller is determined, and the receivers send the real-time position data of the controller to the first client. Meanwhile, the position data of the character corresponding to the first client in the real space can be obtained, and the virtual control body associated with the first virtual character in the virtual space is updated according to the change of this position data relative to the initial position data. The initial position data of the first virtual character may be sent by the application server 105 when the user logs in to the first client.
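The ultrasonic scheme above can be illustrated concretely: each receiver's delay gives a distance (delay multiplied by the speed of sound), and three known receiver positions let the controller position be trilaterated. The receiver coordinates and function names below are assumptions for the sketch; standard sphere-intersection trilateration yields two candidates, and with ceiling-mounted receivers the candidate below the ceiling is kept:

```python
# Illustrative ultrasonic time-of-flight positioning with three ceiling receivers.
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(a, s): return tuple(x * s for x in a)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def trilaterate(p1, p2, p3, delays):
    """Return the two candidate controller positions from three receiver delays."""
    r1, r2, r3 = (d * SPEED_OF_SOUND for d in delays)  # delay -> distance
    # Local orthonormal frame anchored at receiver p1.
    ex = _scale(_sub(p2, p1), 1.0 / _norm(_sub(p2, p1)))
    i = _dot(ex, _sub(p3, p1))
    ey_raw = _sub(_sub(p3, p1), _scale(ex, i))
    ey = _scale(ey_raw, 1.0 / _norm(ey_raw))
    ez = _cross(ex, ey)
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))
    # Sphere-intersection solution in the local frame.
    x = (r1*r1 - r2*r2 + d*d) / (2*d)
    y = (r1*r1 - r3*r3 + i*i + j*j) / (2*j) - (i/j)*x
    z = math.sqrt(max(r1*r1 - x*x - y*y, 0.0))
    base = _add(p1, _add(_scale(ex, x), _scale(ey, y)))
    return _add(base, _scale(ez, z)), _add(base, _scale(ez, -z))
```

Of the two returned candidates, the one with the smaller height is below the ceiling plane of the receivers and is taken as the controller position.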
Real-time position data of the virtual carrier is obtained from the application server 105. From the acquired position data of the character corresponding to the first client in the real space, the relative position data of the controller with respect to the character can be determined; the relative position data of the virtual control body with respect to the first virtual character in the virtual space is then determined from it, and from that the position data of the virtual control body in the virtual space is obtained. The controller corresponding to the first client is used to control the virtual control body corresponding to the first client: when the controller moves, the virtual control body moves correspondingly, so that a user can manipulate the virtual control body by operating the controller.
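The offset mapping described above can be sketched as follows. This is a minimal illustration under the stated assumptions, not the patented implementation; the function name and the idea of representing positions as (x, y, z) tuples are hypothetical.

```python
def virtual_control_body_position(character_real, controller_real, character_virtual):
    """Place the virtual control body in virtual space.

    character_real / controller_real: (x, y, z) positions tracked in real space.
    character_virtual: the first virtual character's (x, y, z) position in virtual space.
    """
    # Relative position of the controller with respect to the character in real space
    offset = tuple(c - p for c, p in zip(controller_real, character_real))
    # Apply the same offset to the virtual character to locate the virtual control body
    return tuple(v + o for v, o in zip(character_virtual, offset))
```

When the controller moves in real space, recomputing this mapping each frame moves the virtual control body by the same offset, which is the behavior the paragraph above describes.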
And when the spatial relationship between the virtual control body and the virtual carrier meets a preset condition, the virtual carrier is selected. The position data and/or motion data of the virtual control body and the virtual carrier in the virtual space are updated in response to the first interactive message sent by the controller, so that the virtual control body grabs the virtual carrier. The first VR device may include a head-mounted device 400, as shown in fig. 4A, which mainly includes a communication component 401, a playing component 402, a display screen 403 and a recording component 404. The playing component 402, the display screen 403 and the recording component 404 are connected to the communication component 401, wherein the display screen 403 is a virtual reality display screen. The first VR device further includes a controller 410. As shown in fig. 4B, the controller 410 includes an interactive key 411 and a function key 412. After the virtual carrier is selected, the user clicks the interactive key 411 on the controller 410; the controller responds to the click on the interactive key 411 and sends a first interactive message to the first client, and the first client updates the position data and/or motion data of the virtual control body and the virtual carrier in the virtual space according to the first interactive message, so that the virtual control body grabs the virtual carrier. Specifically, skeletal animation data of the virtual control body related to grabbing is stored in the first client; after the first client receives the first interactive message, the skeletal animation data is invoked, so that the virtual control body performs the grabbing action corresponding to the skeletal animation data in the virtual space.
After receiving the first interactive message, the first client attaches the virtual carrier to a preset position point on the virtual control body; once attached to the position point, the virtual carrier moves and rotates along with the position point. In order for the virtual carrier (e.g., a virtual voice ball) to appear to fly into the virtual control body (e.g., a virtual hand), a flying special effect is played on the path between the virtual carrier and the virtual control body. For example, in the VR game scenario shown in fig. 3A, when the user clicks the interactive key 411, the virtual hand 301 performs a ball-holding action in the virtual space according to the skeletal animation data of the virtual hand holding a ball. Meanwhile, a special effect of the virtual voice ball flying into the palm of the virtual hand is added, so that the virtual voice ball 302 flies into the virtual hand 301 in the virtual reality scene, as shown in fig. 3B.
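The attach-to-a-position-point behavior above can be sketched as a small class: before attachment the carrier is free, and after attachment its transform is derived from the hand's transform each frame. A hedged sketch; the class and its field names are hypothetical, and rotation is simplified to an opaque value carried along with the position.

```python
class GrabAttachment:
    """Once attached, the carrier copies the socket point's motion each frame."""

    def __init__(self, socket_offset=(0, 0, 0)):
        # Fixed offset of the preset position point relative to the virtual hand
        self.socket_offset = socket_offset
        self.attached = False

    def attach(self):
        """Called when the first interactive message is received."""
        self.attached = True

    def carrier_transform(self, hand_position, hand_rotation):
        """Return the carrier's (position, rotation), or None while unattached."""
        if not self.attached:
            return None
        pos = tuple(h + o for h, o in zip(hand_position, self.socket_offset))
        # The carrier rotates along with the position point
        return pos, hand_rotation
```

Calling `carrier_transform` every frame with the virtual hand's current transform makes the grabbed carrier move and rotate with the hand, as described.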
In some examples, in step 203, when displaying that the virtual control body releases the virtual carrier, the method includes the following steps: updating the position data and motion data of the virtual carrier and the virtual control body in the virtual space in response to a second interactive message sent by a controller associated with the first client, and displaying, according to the position data and motion data of the virtual carrier and the virtual control body in the virtual space, the virtual control body releasing the virtual carrier.
When the user holds the controller and makes a throwing motion, the user presses the interactive key 411 on the controller, at which point the virtual carrier is released, e.g. thrown, in the corresponding virtual space. The first client acquires real-time position data of the controller during this process. After the user presses the interactive key 411, the controller sends a second interactive message to the first client, and the first client, in response to the second interactive message, updates the position of the virtual control body in the virtual space according to the acquired real-time position data of the controller. Meanwhile, the position data and motion data (such as the speed and acceleration at the moment of throwing) of the virtual carrier are determined according to the acquired real-time position data of the controller, and the position of the virtual carrier in the virtual space is then updated according to the position data and motion data of the virtual carrier at the moment of throwing. In the virtual space, the virtual control body throws out the virtual carrier; for example, in a VR game scenario, as shown in fig. 3D, the virtual hand 301 throws out the virtual voice ball 302.
In some examples, the virtual space further includes a first virtual character corresponding to the first client. In step S301, when receiving the information of the at least one virtual carrier and the information of the virtual control body sent by the application server, the method includes the following step: when the spatial relationship between the first virtual character and each virtual carrier of the at least one virtual carrier meets a preset condition, the application server sends the information of the at least one virtual carrier to the first client.
The virtual character is the object in the virtual space of the user who operates the controller; a user associated with one client corresponds to one virtual character, and the virtual control body in the virtual space may correspond to a hand of the virtual character. For example, in a VR game, one player corresponds to one virtual character in the virtual space, the virtual control body corresponds to the hand of the virtual player, and one virtual character corresponds to one client.
In this example, the application server 105 sends the information of the virtual carrier to the VR client only when the spatial relationship between the virtual character corresponding to the VR client and the virtual carrier satisfies a predetermined condition. The predetermined condition may be a predetermined distance range: the application server 105 determines the distance between the position of the virtual carrier and the position of the first virtual character; information of a virtual carrier whose distance from the first virtual character exceeds the predetermined distance range is not sent to the first client, while information of a virtual carrier whose distance from the first virtual character is within the predetermined distance range is sent to the first client. The application server 105 stores position data of each virtual character, for example, position data of the center point of the virtual character. When the distance is within the predetermined distance range, the first virtual character is considered able to see the virtual carrier; otherwise, the first virtual character is considered unable to see it. For example, in the VR game scenario described above, the application server sends the information of the virtual voice ball to the VR client corresponding to the virtual character only when the spatial position relationship between the virtual voice ball and the virtual character satisfies the predetermined condition.
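The server-side filtering above amounts to a distance test per carrier. The following is a minimal sketch under the stated assumptions; the function name and the carrier-id-to-position mapping are hypothetical, not part of the patent text.

```python
import math

def carriers_to_send(character_pos, carriers, max_distance):
    """Return ids of virtual carriers whose distance from the virtual
    character's center point is within the predetermined distance range.

    carriers: mapping of carrier id -> (x, y, z) position in virtual space.
    """
    visible = []
    for cid, pos in carriers.items():
        if math.dist(character_pos, pos) <= max_distance:
            visible.append(cid)  # within range: send this carrier's info
    return visible
```

A carrier outside `max_distance` is simply omitted from the response, which keeps the client from receiving information about carriers the character cannot see.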
In some examples, the virtual carrier is determined to be selected when the virtual control body is determined to collide with the virtual carrier according to the real-time position data of the virtual carrier and the real-time position data of the controller.
Real-time position data of the virtual control body is determined according to the real-time position data of the controller. For example, in the VR game scene shown in fig. 3A, the virtual control body is the virtual hand 301 of a virtual character and the virtual carrier is the virtual voice ball 302, and the real-time position data of the virtual hand 301 in the virtual space is determined according to the real-time position data of the controller. When it is determined, according to the position data of the virtual hand and the position data of the virtual voice ball, that the virtual hand 301 collides with the virtual voice ball 302, the virtual carrier is determined to be selected. The collision between the virtual hand 301 and the virtual voice ball 302 may be determined using methods such as ray detection, volume sweeping and overlap testing.
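Of the detection methods named above, the overlap test is the simplest to illustrate: treating both the hand and the voice ball as bounding spheres, they collide when the distance between centers does not exceed the sum of the radii. A sketch, with hypothetical names; real engines offset this test into their physics system.

```python
def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """Overlap test: two bounding spheres collide when the squared distance
    between their centers is at most the square of the summed radii."""
    d2 = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    r = radius_a + radius_b
    return d2 <= r * r  # compare squared values to avoid a sqrt
```

Run each frame against the hand's and ball's current positions, a `True` result marks the virtual carrier as selected.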
In some examples, the virtual space further includes a virtual ray associated with the virtual control body, the virtual ray being emitted from the virtual control body; the method further includes the following steps:
S11: determining real-time position data of the virtual ray in the virtual space according to the real-time position data of the controller.
The virtual ray is emitted from the virtual control body. The real-time position data of the virtual control body is determined according to the acquired real-time position data of the controller, and since the virtual ray is emitted by the virtual control body, the real-time position data of the virtual ray can be determined from the real-time position data of the virtual control body.
S12: when it is determined, according to the real-time position data of the virtual ray and the real-time position data of the virtual carrier, that the virtual ray collides with the virtual carrier, determining that the virtual carrier is selected.
In this example, the virtual carrier is determined to be selected when the virtual ray collides with it. For example, in the VR game scenario shown in fig. 3A, a virtual ray 304 is emitted from the virtual hand 301, and when the virtual ray 304 collides with the virtual voice ball 302, the virtual voice ball is determined to be selected.
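Ray-versus-ball selection reduces to a ray-sphere intersection test: find the point on the ray closest to the ball's center and check whether it lies within the ball's radius. A minimal sketch under the assumption that the carrier is sphere-shaped; all names are hypothetical.

```python
def ray_hits_sphere(origin, direction, center, radius):
    """True when the ray origin + t*direction (t >= 0) passes within
    `radius` of `center`. `direction` need not be normalized."""
    oc = tuple(c - o for c, o in zip(center, origin))
    dd = sum(d * d for d in direction)
    if dd == 0.0:  # degenerate ray: just test the origin itself
        return sum(v * v for v in oc) <= radius * radius
    # Ray parameter of the closest point to the sphere center, clamped to t >= 0
    t = max(0.0, sum(o * d for o, d in zip(oc, direction)) / dd)
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius
```

The clamp to `t >= 0` ensures a ball behind the hand is never selected, since the virtual ray only extends forward from the virtual control body.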
In some examples, in the step 202, the obtaining the input media content includes the following steps:
S21: responding to a first function message sent by a controller associated with the first client, and starting to receive externally input media data through a data acquisition device associated with the first client; and stopping receiving the media data in response to a second function message sent by the controller.
When the user presses the function key 412 on the controller 410, the controller sends a first function message to the first client; upon receiving the first function message, the first client starts receiving externally input media data through its associated data acquisition device, for example, acquiring voice data through the recording component 404 on the head-mounted device 400 associated with the first client. Specifically, the first client sends a control message to the head-mounted device 400, so that the head-mounted device 400 starts the recording component to record the voice input by the user; the user may record a conversation or sing a song. The recording component may be a microphone. The first client includes a real-time voice transmission component cooperating with the recording component 404; for example, a real-time voice transmission third-party library (Apollo Voice) may capture the voice in the recording component 404 (e.g., a microphone) at each frame in real time, convert it into PCM data format, and store it in a cache of the terminal where the first client is located.
During the collection of the media data, the user keeps the function key 412 pressed; when the user releases the function key 412, the controller sends a second function message to the first client, and upon receiving the second function message the first client stops receiving the voice data. Specifically, the first client sends a control message to the head-mounted device 400, so that the head-mounted device 400 turns off the recording component and stops recording.
S22: media content is generated from the received media data.
In step S22, the real-time voice transmission component in the first client captures the PCM data of each frame in the recording component 404 in real time, and forms the voice data from the captured PCM data of each frame.
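The press-to-record flow above, accumulating one PCM chunk per frame between the two function messages, can be sketched as a small state machine. A hedged illustration; the class name and method names are hypothetical, and real PCM capture would come from a microphone API rather than byte strings.

```python
class VoiceRecorder:
    """Accumulates one PCM chunk per frame between the first and second
    function messages, then concatenates them into one voice clip."""

    def __init__(self):
        self.recording = False
        self.frames = []

    def on_first_function_message(self):
        # Function key pressed: start a fresh recording
        self.recording = True
        self.frames = []

    def capture_frame(self, pcm_chunk):
        # Called once per frame by the real-time voice transmission component
        if self.recording:
            self.frames.append(pcm_chunk)

    def on_second_function_message(self):
        # Function key released: stop and return the assembled voice data
        self.recording = False
        return b"".join(self.frames)
```

Frames captured while `recording` is False are ignored, matching the behavior of stopping reception once the second function message arrives.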
In some examples, when updating the position data and motion data of the virtual carrier and the virtual control body in the virtual space in response to the second interactive message sent by the controller associated with the first client, the method includes the following steps:
S31: responding to the second interactive message, determining the motion trajectory and initial motion data of the virtual carrier according to the real-time position data of the controller, and updating the real-time position data and motion data of the virtual control body.
The user performs a throwing motion while holding the controller in order to make the virtual control body in the virtual space release the virtual carrier; for example, the user presses the interactive key 411 on the controller during the throwing motion, at which moment the virtual carrier is thrown out in the corresponding virtual space. The first client obtains the real-time position data of the controller during the throwing motion before the user presses the interactive key 411, and determines the real-time position data of the virtual control body in the virtual space from it. At this time the virtual control body is grabbing the virtual carrier, so the real-time position data of the virtual carrier is determined from that of the virtual control body. The instantaneous speed and direction of the virtual carrier at the moment it is thrown are determined from the real-time position of the previous frame, the real-time position of the current frame, and the time interval between the two frames. Meanwhile, the motion trajectory of the virtual carrier is determined according to its gravitational acceleration, and the real-time position of the virtual carrier in the virtual space is updated according to the instantaneous speed, direction, gravitational acceleration and motion trajectory.
S32: and updating the real-time position data of the virtual carrier according to the motion trail and the initial motion data.
When displaying that the virtual control body releases the virtual carrier according to the position data and motion data of the virtual carrier and the virtual control body in the virtual space, the method includes the following step:
S33: displaying the virtual carrier moving along the motion trajectory according to the real-time position data and motion data of the virtual control body and the real-time position data of the virtual carrier.
According to the motion trajectory and the initial motion data of the virtual carrier obtained in step S31, the real-time position data of the virtual carrier in each frame of the virtual reality image is determined, where the initial motion data includes the instantaneous speed, direction and gravitational acceleration of the virtual carrier at release. Thus, when the first client sends the virtual reality image to the display screen 403 of the head-mounted device 400 for display, the virtual carrier moves along the motion trajectory in the virtual reality scene seen by the user. As shown in fig. 3D, after the virtual voice ball 302 is thrown, it moves along the trajectory shown in fig. 3D.
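Steps S31-S33 amount to two small computations: estimate the release velocity from two consecutive controller frames, then evaluate a simple projectile trajectory under gravity for each subsequent frame. A hedged sketch with hypothetical names; it assumes the z axis is vertical and ignores air resistance, which the source does not specify.

```python
def throw_velocity(pos_prev, pos_cur, dt):
    """Instantaneous velocity at release, estimated from the virtual carrier's
    position in the previous frame, the current frame, and the frame interval dt."""
    return tuple((c - p) / dt for p, c in zip(pos_prev, pos_cur))

def position_at(release_pos, velocity, t, g=9.8):
    """Point on the motion trajectory t seconds after release; gravity pulls
    along the (assumed vertical) z axis."""
    x = release_pos[0] + velocity[0] * t
    y = release_pos[1] + velocity[1] * t
    z = release_pos[2] + velocity[2] * t - 0.5 * g * t * t
    return (x, y, z)
```

Evaluating `position_at` once per rendered frame with increasing `t` yields the real-time position data used to display the carrier flying along its trajectory.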
In some examples, the media content sending method 500 provided in the present application, as shown in fig. 5, further includes the following steps:
S501: when the virtual control body grabs a virtual carrier in the at least one virtual carrier, if the state of the virtual carrier is a second state, a media content request message is sent to the data content server, wherein the media content request message carries the identifier of the virtual carrier, so that the data content server searches for the media content associated with the identifier of the virtual carrier.
In the example shown in fig. 2, after the above-mentioned step 204, i.e. after the virtual carrier is selected, the state of the virtual carrier is the first state, i.e. the initial state, in which case the input media data to be associated with the virtual carrier is received. In this example, after the virtual carrier is selected, the state of the virtual carrier is the second state, i.e. a state with associated media data, in which case the media content associated with the virtual carrier is retrieved. Specifically, a network synchronization unit in the first client, for example, a module for network synchronization in the Unreal Engine, sends a media content request message to the data content server, where the media content request message carries the identifier of the virtual carrier. The data content server stores the media content corresponding to the identifier of the virtual carrier and searches for the corresponding media content according to the identifier. For example, in the VR game scenario shown in fig. 3A, after the virtual voice ball is selected, when the state of the virtual voice ball is the second state, the network synchronization unit in the first client requests the voice data carried by the virtual voice ball from the data content server 106; the virtual voice ball may also carry other media data such as video data, and the data content server 106 may be a CDN server.
S502: receiving the media content sent by the data content server in response to the media content request message.
In some examples, the media content sending method provided by the present application further includes the following step: when the virtual control body grabs a virtual carrier of the at least one virtual carrier, if the state of the virtual carrier is the second state, searching for the media content associated with the identifier of the virtual carrier according to that identifier.
In this example, after the virtual carrier is selected, the state of the virtual carrier is the second state, i.e. a state with associated media data. In this case, if the media content associated with the virtual carrier is already stored locally, it is first looked up locally. For example, when the first client has previously selected the voice ball, it requested the media content associated with the virtual carrier from the data content server 106 and stored it in the local cache. Or, when another client grabbed the virtual carrier and played the media content associated with it in a non-private manner, the first client also requested the media content from the data content server 106 according to the identifier of the virtual carrier, played it, and stored it in the local cache. Therefore, when obtaining the media content associated with the virtual carrier, the first client first searches the local cache, and only sends a request to the data content server when the media content is not in the local cache, which improves the response speed.
As for the process 600 of acquiring the media content associated with the virtual voice ball in the VR game scenario shown in fig. 3A, as shown in fig. 6, the process mainly includes the following steps:
step S601: a network synchronization unit in the first client, such as a module for network synchronization in the ghost engine, is configured to obtain voice data associated with the virtual voice ball, where the voice data is stored in a PCM file.
Step S602: the network synchronization unit first searches for the PCM file in the local cache, and if the PCM file exists in the local cache, step S603 is executed, and if the PCM file does not exist in the local cache, step S604 is executed.
Step S603: and obtaining the PCM file in a local cache.
Step S604: requesting the PCM file from the CDN server. Specifically, the network synchronization unit requests the PCM file from the CDN server according to the identifier of the virtual voice ball, and the CDN server acquires the PCM file according to the identifier and returns it to the network synchronization unit of the first client.
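Steps S601-S604 describe a cache-first lookup. The sketch below illustrates that flow under the stated assumptions; the function name, the dict-as-cache, and the injected `request_from_cdn` callable are all hypothetical stand-ins for the network synchronization unit's real machinery.

```python
def get_voice_pcm(carrier_id, local_cache, request_from_cdn):
    """Look up the PCM file in the local cache first (S602/S603); only fall
    back to the CDN server when it is absent (S604), caching the result."""
    pcm = local_cache.get(carrier_id)
    if pcm is None:
        pcm = request_from_cdn(carrier_id)  # request by virtual carrier id
        local_cache[carrier_id] = pcm       # store for subsequent lookups
    return pcm
```

Because the result is written back into the cache, a second grab of the same voice ball never touches the CDN, which is the response-speed benefit described above.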
In some examples, the media content transmission method provided by the present application further includes the following steps: when the virtual carrier is determined to be grabbed, responding to a third function message sent by the controller, and playing the media content; and stopping playing the media content in response to a fourth function message sent by the controller.
By the time the virtual carrier is grabbed, the media content associated with it has already been acquired, when the virtual carrier was selected before the grab. When the user presses the function key 412 on the controller 410, the controller sends a third function message to the first client, and the first client starts playing the media content in response to the third function message; when the user releases the function key 412 on the controller 410, the controller sends a fourth function message to the first client, and the first client stops playing the media content in response to the fourth function message.
In some examples of the media content sending method provided by the present application, the media content is a voice, and the method further includes converting the voice into 3D voice when playing it, including the following steps:
S41: acquiring real-time position data of the head-mounted device associated with the first client.
The manner of obtaining the real-time position data of the head-mounted device associated with the first client may refer to the manner of obtaining the real-time position data of the controller in step 203; infrared optical positioning, image tracking positioning, or ultrasonic tracking positioning may be adopted, which are not described again here.
S42: determining real-time location data of a head of a first virtual character associated with the first client in virtual space from the real-time location data of the head-mounted device.
After the real-time position of the head-mounted device associated with the first client is obtained in step S41, the position data, for example, the center position data, of the character corresponding to the first client in the real space may be obtained in the same manner, the relative position data of the head-mounted device with respect to the character may be determined, and the relative position data of the head of the first virtual character with respect to the first virtual character in the virtual space may then be determined from it. The initial position data of the first virtual character may be sent by the application server 105 when the user logs in to the first client, from which the real-time position data of the head of the first virtual character is determined. The head of the first virtual character corresponds to the head-mounted device in the real space.
S43: and acquiring real-time position data of a controller associated with the first client, and determining the real-time position data of the virtual carrier according to the real-time position data of the controller.
S44: and determining the real-time distance and the real-time direction of the virtual carrier relative to the head of the first virtual character according to the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier.
Converting the voice data into 3D voice depends on the distance and direction of the virtual carrier relative to the head of the first virtual character; that is, the voice is simulated as being emitted from the virtual carrier. For example, when the virtual carrier is a virtual voice ball, the voice is simulated as being emitted by the virtual voice ball: when the virtual voice ball is farther from the head of the first virtual character, the voice heard is quieter; when it is closer, the voice heard is louder. When the virtual voice ball is closer to the left ear of the first virtual character, the sound heard by the left ear is louder; when it is closer to the right ear, the sound heard by the right ear is louder.
S45: and converting the voice into voice with a multi-dimensional space sound effect according to the real-time distance and the real-time direction. The voice with the multidimensional space sound effect can be a voice with a three-dimensional space sound effect, namely a 3D voice for short.
The voice is converted into 3D voice according to the real-time distance and real-time direction of the virtual carrier relative to the head of the first virtual character determined in step S44. Specifically, the first client includes a 3D voice playing component, for example, the component for playing 3D sound in the Unreal Engine UE4 (Audio Component), to convert the voice into 3D voice. The Audio Component is filled with the voice data in the PCM file every frame, and the real-time distance and real-time direction determined in step S44 are input into the Audio Component, so that the 3D voice data for the left and right ears are generated and played. The head-mounted device 400 includes a playing component 402, such as a headphone, and the 3D voice is sent to the headphone for playing to the user.
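The distance and direction dependence described in S44-S45 can be illustrated with a toy spatialization model: volume falls off with distance, and the ear facing the source receives a larger share. This is a deliberately simplified sketch, not UE4's actual HRTF-based spatialization; the function name, linear panning, and `1/(1+d)` rolloff are all assumptions.

```python
import math

def ear_gains(head_pos, head_right, source_pos, rolloff=1.0):
    """Return (left_gain, right_gain) for a sound source in virtual space.

    head_right: unit vector pointing out of the head's right ear.
    """
    to_src = tuple(s - h for s, h in zip(source_pos, head_pos))
    dist = math.sqrt(sum(v * v for v in to_src)) or 1e-6
    # Farther carrier -> quieter voice (simple inverse rolloff)
    attenuation = 1.0 / (1.0 + rolloff * dist)
    # Cosine between the head's right direction and the source direction:
    # +1 means directly right, -1 directly left
    side = sum(r * v for r, v in zip(head_right, to_src)) / dist
    right = attenuation * (1.0 + side) / 2.0
    left = attenuation * (1.0 - side) / 2.0
    return left, right
```

The qualitative behavior matches the paragraph above: a source near the right ear yields a larger right gain, and moving the source away shrinks both gains.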
The first client, i.e. the VR client, is developed based on the UE4 engine, using the development pattern of binding components to an Actor. If other multimedia playing requirements arise later, playing of different multimedia data can be realized by replacing the component. For example, if the voice ball is changed to a video ball, the Audio Component may be replaced with a component supporting video playback.
S46: when the virtual carrier is determined to be grabbed, responding to a third function message sent by the controller, and playing the voice with the multi-dimensional space sound effect; and stopping playing the voice with the multi-dimensional space sound effect in response to a fourth function message sent by the controller.
The 3D voice for the left and right ears generated in step S45 is played through a playing component associated with the first client, for example, the playing component 402 in the head-mounted device 400.
In some examples, the media content sending method provided by the present application further provides that when the first client plays the 3D voice and the virtual carrier is far away from the head of the first virtual character, the playing is non-private, in which case characters corresponding to other clients can also hear the 3D voice. This specifically includes the following steps:
step S51: and acquiring real-time position data of the head-mounted equipment associated with the first client.
Step S52: and determining the real-time position data of the head of the first virtual character in the virtual space according to the real-time position data of the head-mounted equipment.
Steps S51-S52 are the same as steps S41-S42, and are not repeated here.
Step S53: when the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier meet a preset condition, notifying the application server to set the state of the virtual carrier to a third state, and notifying the application server to send the state of the virtual carrier to one or more second clients, wherein the spatial relationship between the virtual character corresponding to each of the one or more second clients and the first virtual character meets a preset condition, and each second client, in response to the received state of the virtual carrier, acquires and plays the media content associated with the virtual carrier.
The preset condition may be a preset distance threshold. When the distance between the real-time position of the head of the first virtual character and the real-time position of the virtual carrier is within the distance threshold, the virtual carrier is close to the head of the first virtual character, so the playing is private; in the case of private playing, only the user corresponding to the first virtual character can hear the 3D voice. When the distance exceeds the distance threshold, the virtual carrier is far from the head of the first virtual character, so the playing is public; in the case of public playing, users corresponding to other virtual characters can hear the 3D voice, or users corresponding to other virtual characters within a certain distance range of the virtual carrier can hear it. When a second client can hear the voice, the second client may request the PCM file of the voice associated with the virtual carrier from the CDN server according to the identifier of the virtual carrier and play it. When playing the PCM file, it may also be converted into 3D voice: each frame of voice data in the PCM file is filled into a voice component, for example, the component for playing 3D sound in the Unreal Engine UE4 (Audio Component), and the real-time distance and real-time direction between the virtual voice ball and the head of the second virtual character are input into the component, so that 3D voice data for the left and right ears are generated and played.
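The private-versus-public decision above is a single threshold test on the head-to-carrier distance. A minimal sketch; the function name and string return values are hypothetical conveniences, not part of the patent text.

```python
def playback_mode(head_pos, carrier_pos, private_threshold):
    """'private' when the carrier is held within the threshold distance of
    the first virtual character's head; 'public' otherwise."""
    d2 = sum((h - c) ** 2 for h, c in zip(head_pos, carrier_pos))
    return "private" if d2 <= private_threshold ** 2 else "public"
```

Only in the `"public"` case would the application server mark the carrier's state and fan it out to nearby second clients, as step S53 describes.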
In some examples, after capturing the virtual carrier and playing the media content associated with the virtual carrier, as shown in fig. 7, the media content sending method 700 further includes the following steps:
S701: monitoring for the predetermined gesture operation of the controller.
When the virtual carrier is grabbed, if the state of the virtual carrier is the second state, i.e. media content is already associated with it, the user may play the media content associated with the virtual carrier and may also interact with it, for example, give a like to the voice delivered by the virtual voice ball in the VR game. For such interactive operations, the user operates the controller to interact with the media content carried by the virtual carrier: for example, for a virtual voice ball in a VR game, when the user performs a "shake" operation on the controller, the voice carried by the virtual voice ball is liked. The first client monitors the state of the controller, and when the predetermined gesture operation is detected, it is determined that the user wants to like the voice carried by the virtual voice ball. During monitoring, a handle event driving module in the first client polls the movement state of the handle every frame, and judges from the movement state whether the handle has made the "shake" gesture.
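One common way to recognize a shake from per-frame polling, offered here as a hedged sketch since the source does not specify the detection rule, is to count direction reversals of the handle's motion along one axis; the class and its threshold are hypothetical.

```python
class ShakeDetector:
    """Polls the handle each frame and reports a shake once the movement
    direction has reversed `min_reversals` times."""

    def __init__(self, min_reversals=3):
        self.min_reversals = min_reversals
        self.last_x = None   # handle position along one axis, last frame
        self.last_sign = 0   # direction of the previous movement (+1/-1)
        self.reversals = 0

    def poll(self, x):
        """Feed one frame's handle position; True once a shake is detected."""
        if self.last_x is not None:
            delta = x - self.last_x
            sign = (delta > 0) - (delta < 0)
            if sign != 0:
                if self.last_sign != 0 and sign != self.last_sign:
                    self.reversals += 1  # movement direction flipped
                self.last_sign = sign
        self.last_x = x
        return self.reversals >= self.min_reversals
```

A steady sweep in one direction never triggers it; only the back-and-forth pattern of a shake accumulates enough reversals.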
S702: generating interaction information when the preset gesture operation of the controller is monitored.
When the preset gesture operation of the controller is monitored, it is indicated that a user needs to interact with media content carried by the virtual carrier, and at this time, the first client generates interaction information, where the interaction information includes an identifier of the user associated with the first client and an identifier of the virtual carrier.
S703: sending the interaction information to the application server, so that the application server updates the interaction information of the virtual carrier accordingly and, when the spatial relationship between a second virtual character corresponding to a second client and the virtual carrier in the virtual space meets a preset condition, sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected.
The first client sends the interaction information generated in step S702 to the application server, so that the application server updates the interaction information of the virtual carrier accordingly: the number of interactions corresponding to the virtual carrier is increased by 1, and the identifier of the user associated with the first client is recorded as the identifier of the user who has most recently interacted with the virtual carrier. For example, in a VR game scenario, after the first client generates like interaction information for the virtual voice ball, the interaction information is sent to the application server; the application server adds 1 to the number of likes of the virtual voice ball and records the identifier of the user corresponding to the first client as the identifier of the user who most recently liked the virtual voice ball. When the spatial relationship between a second virtual character corresponding to a second client and the virtual carrier in the virtual space meets a preset condition, for example, a preset distance condition, the application server sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected. As shown in fig. 8, in the VR game scenario 300, when the virtual voice ball is selected by the second VR client, the interaction information of the voice ball is displayed; the interaction information includes a like icon 802, the number of likes 803 of the virtual voice ball, and the identifier 801 of the user who most recently liked the virtual voice ball.
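The server-side update described above amounts to incrementing a counter and recording the latest interacting user. A minimal sketch follows; the field names `like_count` and `last_liker` are illustrative, not from the patent:

```python
# Hypothetical sketch of the application server's interaction update:
# increment the carrier's like count and record the most recent liker.
def update_interaction(store, carrier_id, user_id):
    info = store.setdefault(carrier_id, {"like_count": 0, "last_liker": None})
    info["like_count"] += 1          # add 1 to the number of interactions
    info["last_liker"] = user_id     # user who most recently liked the carrier
    return info

store = {}
update_interaction(store, "ball-42", "alice")
update_interaction(store, "ball-42", "bob")
# store["ball-42"] -> {"like_count": 2, "last_liker": "bob"}
```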
In some examples, in step S701, monitoring the predetermined gesture operation of the controller includes: after the controller is detected to vibrate, when the number of vibrations of the controller within a first preset number of image frames meets a first preset condition, and the number of vibrations of the controller within a subsequent second preset number of image frames meets a second preset condition, determining that the predetermined gesture operation of the controller is detected.
In some examples, further, the predetermined gesture operation of the controller is determined to be detected when the time interval between the moment at which the number of vibrations of the controller is detected to meet the first preset condition and the moment at which it is detected to meet the second preset condition exceeds a predetermined time length.
In some examples, a flow 900 of monitoring the predetermined "shake-shake" gesture operation of the controller is shown in fig. 9. The controller is a handle, and the virtual carrier is a virtual voice ball in the VR game scene shown in fig. 3A; when the virtual voice ball is grabbed, the media content carried by the virtual voice ball, such as voice, can be liked by shaking the handle. As shown in fig. 9, the process mainly includes the following steps:
Step S901: when the user wants to like the voice ball, the user shakes the handle to like the voice ball.
Step S902: querying the vibration state of the handle in the i-th frame.
When the virtual hand associated with the first client grabs the virtual voice ball, the VR bottom-layer driver in the first client may query the vibration state of the handle each time the VR scene is updated by one frame. The vibration state of the handle is detected by a sensor on the handle and transmitted to the VR client; the VR client converts the detection result of the sensor into the vibration state of the handle and stores it in a buffer of the VR client, from which the VR bottom-layer driver queries the vibration state.
Step S903: when the vibration state of the handle is inquired (for example, the vibration state parameter is "True"), that is, the state of the handle of the ith frame is vibration, step S904 is executed, otherwise, step S905 is executed, i is assigned to i +1, and then step S902 is returned to, and the state of the handle of the next frame is inquired.
Step S904: after the vibration state of the handle is queried, that is, when the handle state of the ith frame is vibration, querying the handle states of the subsequent n1 frames, that is, querying the vibration states of the handles of the (i + 1) th frame to the (i + n) 1 th frame, in the same way as the handle state query in the step S902, the VR bottom driver queries the vibration state of the handle from the buffer corresponding to the VR client. The VR client counts the number of times of the vibration state of the handle inquired in the n1 frames, for example, the number of times that the vibration state parameter is "True". In the process that the user just starts to shake the handle, the user is not stable, the user may move the handle once or start to shake the handle, and therefore when the vibration frequency of the handle inquired in a preset frame exceeds a preset value, the user is judged to start to shake the handle.
Step S905: when the state of the handle in the i-th frame is not vibration, i is assigned i + 1, and the process returns to step S902 to query the state of the handle in the next frame.
Step S906: judging whether the number of times the handle is queried to be vibrating within the n1 frames reaches n2, where n1 and n2 are preset values. When the number of vibrations of the handle queried within the n1 frames is ≥ n2, the user has started to shake the handle, and in this case step S907 is executed; when it is < n2, it indicates that the user may have moved the handle once but is not shaking it, and in this case step S908 is executed.
Step S907: i is assigned as i + n1, step S909 is executed.
Step S908: and assigning i to i + n1+1, and returning to the step S902 to inquire the state of the handle of the next frame.
Step S909: if the number of times of shaking of the handle found in the n1 frame is satisfied > < n2 in step S906, it is determined that the user has started shaking the handle, and then the following steps are performed to determine whether the user has stopped shaking the handle according to the status of the handles of the subsequent n3 frames. In this step, the state of the handle from the i +1 th frame to the i + n3 th frame is queried, and the number of times that the handle queried in the subsequent n3 frames is vibrating is counted.
Step S910: and judging whether the number of times of the inquired handle vibration is less than n4 in the frames from the i +1 frame to the i + n3 frame, if so, executing the step S911, otherwise, executing the step S912 if the number of times of the inquired handle vibration is less than n4, which indicates that the user stops shaking the handle.
Step S911: it is determined whether or not a time interval between the time point when the number of times the handle detects the vibration in step S906 > n2 and the time point when the number of times the handle detects the vibration in step S910 < n4, that is, the time from the start of the shaking of the handle to the stop of the shaking, is greater than a preset time T. And when the time interval does not exceed T, executing step S914, wherein the judgment of the handle gesture fails, and judging that the user does not perform the 'shaking-shaking' operation on the handle, otherwise, executing step S913, and judging the 'shaking-shaking' gesture operation on the handle by the user. In this step, it is determined that the user has performed the "shake-and-shake" operation on the handle only when the user has shaken the handle for more than a predetermined time.
Step S912: when the number of times < n4 that the n3 intraframe handle is detected as vibration in step S910 indicates that the user does not stop shaking the handle, step S912 is executed to assign the value of i to i + n3, and step S909 is returned to and executed to query the vibration status of the handle in the subsequent n3 intraframes.
Step S913: and judging that the user carries out 'shaking-shaking' operation on the handle. After the current user carries out gesture operation of shaking the voice ball once, the praise interaction of the voice ball is determined, and the gesture recognition of shaking is not carried out subsequently.
Step S914: and the gesture of shaking is failed to judge, and the user is judged not to carry out the operation of shaking.
Step S915: when the gesture is judged to be recognized, the gesture of shaking is continuously recognized, in the step, the value of i is assigned to i + n3+1, then the step S902 is returned, the state of the handle is continuously inquired, and the subsequent gesture of shaking is recognized.
In this example, the frame counts n1 and n3 and the thresholds n2 and n4 may be preset and set according to experience. Through this example, hand shaking can be judged more stably, which greatly reduces the occurrence of misoperation.
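The flow of fig. 9 can be sketched as an offline check over a list of per-frame vibration flags. This illustrative Python sketch (not from the patent) mirrors steps S902-S915; the per-frame polling is replaced by a pre-recorded list, the duration check of step S911 is approximated in frames rather than wall-clock time, and all concrete parameter values in the tests are hypothetical:

```python
# Minimal shake-detection sketch: n1/n2 are the start window and threshold,
# n3/n4 the stop window and threshold, as in the text.
def detect_shake(frames, n1, n2, n3, n4, min_duration_frames):
    i = 0
    while i < len(frames):
        if not frames[i]:                    # S902/S905: advance to a vibration
            i += 1
            continue
        window = frames[i + 1:i + 1 + n1]    # S904: inspect the next n1 frames
        if len(window) < n1:
            return False                     # ran out of samples
        if sum(window) >= n2:                # S906: shaking has started
            start = i
            j = i + n1                       # S907
            while True:                      # S909/S910: scan n3-frame windows
                stop_window = frames[j + 1:j + 1 + n3]
                if len(stop_window) < n3:
                    return False
                if sum(stop_window) < n4:    # shaking has stopped
                    # S911: shake must last longer than the preset duration
                    return (j - start) > min_duration_frames
                j += n3                      # S912: user is still shaking
        else:
            i += n1 + 1                      # S908: isolated movement, no shake
    return False
```

The sketch returns True only when a sustained burst of vibrating frames is followed by a quiet window, and the burst lasted longer than the minimum duration, matching the intent of steps S906-S913.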
The present application further provides a media content sending method 1000, which is applied to the application server 105, as shown in fig. 10, and includes the following steps:
S1001: sending information of at least one virtual carrier and information of a virtual control body associated with a first client to the first client, where the information of the at least one virtual carrier includes the identifier and state of each virtual carrier and real-time position data in the virtual space, and the information of the virtual control body includes initial position information of the virtual control body. The first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body. When the virtual control body grabs one of the at least one virtual carrier and the state of that virtual carrier is the first state, the input media content is acquired, and the media content and the identifier of the virtual carrier are sent to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier.
This step corresponds to the terminal-side steps described above in which the first client receives the information of the virtual carrier and of the virtual control body from the application server, grabs the virtual carrier, associates the media content with the virtual carrier, and sends the media content to the data content server, and is not repeated here.
S1002: receiving a notification message sent by the first client, and setting the state of the virtual carrier to the second state according to the notification message.
After the virtual carrier is associated with the media content, the first client sends a notification message to the application server, where the notification message is used for enabling the application server to update the state of the virtual carrier. For example, when the virtual carrier is a voice ball, after the voice ball is recorded, the state of the voice ball changes; the first client notifies the application server 105, and the application server 105 sets the state of the voice ball to the second state, where the second state is the recorded state. The color of the recorded virtual voice ball is different from the color of the virtual voice ball in the initial state, and may be, for example, red. The first client communicates with the application server 105 and the data content server 106 via a network synchronization unit, such as a module in the Unreal Engine for network synchronization.
S1003: sending the information of the virtual carrier to the second client, so that the second client acquires the media content associated with the identifier of the virtual carrier from the data content server.
For example, in the VR game scenario described above, when a voice ball is released, e.g., thrown, other players may grab the voice ball and listen to the voice carried by it. Specifically, the application server 105 updates the state and the identifier of the voice ball to the second client in real time; when the player corresponding to the second client selects the voice ball and the voice ball is in the second state, the second client obtains the media content associated with the voice ball from the CDN server according to the identifier of the voice ball. Transmission and communication of media content are thereby realized through the voice ball.
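From the second client's point of view, step S1003 reduces to a state check followed by a lookup by carrier identifier. The sketch below is hypothetical: an in-memory dict stands in for the CDN server, and all names are illustrative:

```python
# Hypothetical sketch of the second client's fetch: media is retrieved from
# the content (CDN) server only when the carrier is in the recorded state.
STATE_INITIAL, STATE_RECORDED = 1, 2

cdn = {"ball-7": b"...pcm bytes..."}   # carrier id -> stored PCM data

def fetch_media(carrier_id, carrier_state):
    if carrier_state != STATE_RECORDED:
        return None                     # nothing has been recorded yet
    return cdn.get(carrier_id)          # lookup by the carrier's identifier

assert fetch_media("ball-7", STATE_RECORDED) == b"...pcm bytes..."
```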
With the media content sending method above, when the virtual carrier is grabbed, media content is input and uploaded to the data content server, so that the virtual carrier is associated with the media content; the virtual carrier is then released in the virtual space, so that after another client selects the virtual carrier, the media content associated with it is acquired from the data content server for subsequent playing. The virtual carrier in the virtual space thus serves as a carrier of media content, and the media content associated with it is transferred among multiple clients as the virtual carrier is transferred in the virtual space. Interaction with the media content has 3D stereoscopic immersion, and associating the media content with a virtual carrier in the virtual space improves the realism of VR social interaction.
In some examples, the media content sending method provided by the present application further includes the following steps:
S61: receiving interaction information sent by the first client and the identifier of the virtual carrier, wherein the interaction information is generated and sent by the first client when the first client determines that the virtual carrier is grabbed and the preset gesture operation of a controller associated with the first client is monitored.
When the virtual carrier is grabbed, if the state of the virtual carrier is the second state, that is, media content has already been associated with it, the user may play the media content associated with the virtual carrier and may also interact with that media content, for example, by liking the voice carried by the virtual voice ball in the VR game. To perform an interactive operation such as liking, the user operates the controller; for example, for a virtual voice ball in a VR game, when the user performs a "shake-shake" operation on the controller, the voice carried by the virtual voice ball is liked. The first client monitors the state of the controller, and when the predetermined gesture operation is detected, it is determined that the user wants to like the voice carried by the virtual voice ball. During monitoring, the handle event driving module in the first client polls the vibration state of the handle every frame and judges from the vibration state whether the handle is performing the "shake-shake" gesture.
When the preset gesture operation of the controller is monitored, it is indicated that a user needs to interact with media content carried by the virtual carrier, and at this time, the first client generates interaction information, where the interaction information includes an identifier of the user associated with the first client and an identifier of the virtual carrier.
S62: updating the interaction information of the virtual carrier according to the received interaction information, and associating the interaction information of the virtual carrier with the identifier of the virtual carrier.
The first client sends the interaction information generated in step S61 to the application server, so that the application server updates the interaction information of the virtual carrier accordingly: the number of interactions corresponding to the virtual carrier is increased by 1, and the identifier of the user associated with the first client is recorded as the identifier of the user who has most recently interacted with the virtual carrier. For example, in a VR game scenario, after the first client generates like interaction information for the virtual voice ball, the interaction information is sent to the application server; the application server adds 1 to the number of likes of the virtual voice ball and records the identifier of the user corresponding to the first client as the identifier of the user who most recently liked the virtual voice ball.
S63: when the spatial relationship between the virtual object associated with the second client and the virtual carrier in the virtual space meets a preset condition, sending the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected.
When the spatial relationship between a second virtual character corresponding to a second client and the virtual carrier in the virtual space meets a preset condition, for example, a preset distance condition, the application server sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected. As shown in fig. 8, when the second VR client selects the virtual voice ball, the interaction information of the voice ball is displayed; the interaction information includes a like icon 802, the number of likes 803 of the virtual voice ball, and the identifier 801 of the user who most recently liked the virtual voice ball.
When the media content sending method provided by the present application is applied to a VR game scene, where the first client is a VR game client, the application server is a VR game server, and the virtual carrier is a virtual voice ball, the interaction 1100 of transmitting a voice message via the virtual voice ball, as shown in fig. 11, mainly includes the following steps:
step S1101: the first client sends an APP login request to the application server.
Step S1102: and the application server returns the information of the virtual voice ball and the information of the virtual role in the virtual space to the first client, wherein the information of the virtual voice ball comprises the state, the identification and the real-time position data of the virtual voice ball. The state of the voice ball comprises an initial state, a recorded state and a playing state, and the information of the virtual character comprises real-time position data of the virtual character.
Step S1103: and the first client generates a virtual reality image according to the real-time position data of the virtual voice ball and the real-time position data of the virtual role.
Step S1104: and sending the generated virtual reality image to a display of the head-mounted device for display.
Step S1105: real-time position data of the handle is obtained.
Step S1106: and determining real-time position data of the virtual control body according to the real-time position data of the handle, and determining real-time position data of the virtual ray according to the real-time position data of the virtual control body.
Step S1107: whether the virtual voice ball is selected or not can be judged according to the real-time position data of the virtual voice ball and the real-time position data of the virtual control body, or whether the virtual voice ball is selected or not can be judged according to the real-time position data of the virtual voice ball and the real-time position data of the virtual ray.
Step S1108: after the virtual voice ball is selected, the user clicks an interactive key on the handle, and the handle sends a first interactive message to the first client.
Step S1109: after receiving the first interactive message, the first client updates position data and motion data of the virtual voice ball and the virtual control body in the virtual space, so that the virtual control body captures the virtual voice ball, wherein the virtual control body corresponds to a virtual hand of the first virtual character.
After the virtual voice ball is grabbed, there are three cases. In steps S1110-S1116, when the state of the virtual voice ball is the first state, i.e., the initial state, voice data is associated with the virtual voice ball. In steps S1117-S1121, when the state of the virtual voice ball is the second state, i.e., the recorded state, a like interaction is performed on the voice ball. In steps S1122-S1125, when the state of the virtual voice ball is the second state, i.e., the recorded state, the voice associated with the virtual voice ball is obtained for playing. Each case is described in detail below.
(1) Steps S1110-S1116
Step S1110: when the state of the virtual voice ball is a first state, namely an initial state, a user presses down a function key on a handle, and the handle sends a first function message to a first client.
Step S1112: and responding to the first function message by the first client, and capturing the voice recorded by the microphone of the head-mounted equipment in real time and converting the voice into a PCM data format by a recording component in the first client, such as a real-time voice transmission third party library (Apollo voice).
Step S1113: and the user releases the function key on the handle, and the handle sends a second function message to the first client. The second client stops fetching the voice data.
Step S1114: the first client sends the obtained PCM voice data and the identification of the virtual voice ball to the CDN server, and the CDN server stores the PCM voice data and the identification of the virtual voice ball in a correlation mode.
Step S1115: after the virtual voice ball is associated with voice data, the state of the virtual voice ball changes, and the first client informs the application server of changing the state of the virtual voice ball.
Step S1116: and the application server sets the state of the virtual voice ball to be a second state and sends the state of the virtual voice ball to the second client, and optionally, when the spatial relationship between the virtual character corresponding to the second client and the virtual voice ball meets a preset condition, the state of the virtual voice ball is sent to the second client.
(2) Steps S1117-S1121
Step S1117: when the state of the virtual voice ball is the second state, the handle event driving module in the first client polls the vibration state of the handle every frame.
Step S1118: whether the handle is performing the "shake-shake" gesture is judged according to the vibration state of the handle.
Step S1119: when the "shake-shake" gesture of the handle is determined, interaction information is generated.
Step S1120: the first client sends the interaction information to the application server.
Step S1121: the application server updates the interaction information of the virtual voice ball, which mainly includes adding 1 to the like information of the voice ball and updating the identifier of the user who most recently liked the voice ball. The application server sends the interaction information of the virtual voice ball to the second client; optionally, the interaction information is sent to the second client when the spatial relationship between the virtual character corresponding to the second client and the virtual voice ball meets a preset condition. When the second client selects the virtual voice ball, the interaction information of the virtual voice ball is displayed in the virtual space.
(3) Steps S1122-S1125
Step S1122: when the virtual voice ball is selected and the state of the virtual voice ball is the second state, the first client may request the voice data associated with the virtual voice ball from the CDN server according to the identifier of the virtual voice ball.
Step S1123: the CDN server returns the voice data associated with the virtual voice ball.
Step S1124: when the virtual control body in the first client grabs the virtual voice ball, the user presses a function key on the handle, and the handle sends a first function message to the first client.
Step S1125: the first client responds to the first function message and acquires real-time position data of the handle.
Step S1126: the first client determines the real-time position data of the virtual control body according to the real-time position data of the handle. The 3D voice playing component in the first client, for example, the Audio Component, converts the voice into 3D voice according to the real-time position data of the virtual control body and the real-time position data of the virtual voice ball, and plays the 3D voice. An earphone associated with the first client is provided on the head-mounted device, and the converted 3D voice may be transmitted to this earphone for playing.
Step S1127: when the user releases the function key on the handle, the handle sends a second function message to the first client.
Step S1128: the first client stops playing the 3D voice in response to the second function message.
The present application also provides a media content transmitting apparatus 1200, applied to a first client, as shown in fig. 12, the apparatus includes:
a presentation module 1201 for presenting a virtual space, where the virtual space includes at least one virtual carrier and a virtual control object corresponding to the first client;
a media content sending module 1202, configured to: when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, obtain input media content, send the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to a second state;
a release module 1203, configured to control the virtual control body to release the virtual carrier.
In some examples, the presentation module 1201 is further configured to:
receiving information of the at least one virtual carrier and information of the virtual control body, which are sent by an application server, wherein the information of the at least one virtual carrier comprises: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body;
and displaying the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
In some examples, the media content sending module 1202 is further configured to:
acquiring real-time position data of a controller associated with the first client;
receiving real-time position data of the virtual carrier sent by an application server;
when the virtual carrier is determined to be selected according to the real-time position data of the virtual carrier and the real-time position data of the controller, update, in response to a first interactive message sent by the controller, the position data and/or the motion data of the virtual control body and the virtual carrier in the virtual space, so that the virtual control body grabs the virtual carrier; and when the state of the virtual carrier is the first state, acquire the input media content.
In some examples, the release module 1203 is to:
updating the position data and the motion data of the virtual carrier and the virtual control body in the virtual space in response to a second interactive message sent by a controller associated with the first client;
and displaying the virtual control body to release the virtual carrier according to the position data and the motion data of the virtual carrier and the virtual control body in the virtual space.
In some examples, the virtual space further includes a first virtual character corresponding to the first client; the presentation module 1201 is further configured to:
and when the spatial relationship between the first virtual character and each virtual carrier in the at least one virtual carrier meets a preset condition, the application server sends the information of the at least one virtual carrier to the first client.
In some examples, the media content sending module 1202 is further configured to:
and when the virtual control body is determined to collide with the virtual carrier according to the real-time position data of the virtual carrier and the real-time position data of the controller, determining to select the virtual carrier.
In some examples, the virtual space further includes a virtual ray associated with the virtual control volume, the virtual ray issued from the virtual control volume;
the media content sending module 1202 is further configured to:
determining real-time position data of the virtual ray in the virtual space according to the real-time position data of the controller;
and when the virtual ray is determined to collide with the virtual carrier according to the real-time position data of the virtual ray and the real-time position data of the virtual control body, determining to select the virtual carrier.
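The "virtual ray collides with the virtual carrier" check can be modelled as a ray-sphere intersection test if the carrier is approximated by a sphere around its real-time position. This is an illustrative sketch only; the sphere radius is a hypothetical parameter, and the patent does not prescribe this particular geometry:

```python
# Illustrative ray-sphere test: the ray starts at the virtual control body and
# points along its direction; the carrier is modelled as a sphere.
def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t >= 0.
    ox, oy, oz = (origin[k] - center[k] for k in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return False                      # ray line misses the sphere entirely
    t = (-b - disc**0.5) / (2*a)          # nearer intersection parameter
    return t >= 0 or (-b + disc**0.5) / (2*a) >= 0   # hit must be in front

assert ray_hits_sphere((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)       # hit
assert not ray_hits_sphere((0, 0, 0), (0, 1, 0), (5, 0, 0), 1.0)   # miss
```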
In some examples, the media content sending module 1202 is further configured to:
responding to a first function message sent by a controller associated with the first client, and starting to receive externally input media data through a data acquisition device associated with the first client;
stopping receiving media data in response to a second function message transmitted from the controller;
media content is generated from the received media data.
In some examples, the release module 1203 is further configured to:
responding to the second interactive message, determining the motion track and the initial motion data of the virtual carrier according to the real-time position data of the controller, and updating the real-time position data and the motion data of the virtual control body;
updating real-time position data of the virtual carrier according to the motion track and the initial motion data;
the release module 1203 is further configured to:
displaying the virtual carrier moving along the motion track according to the real-time position data and the motion data of the virtual control body and the real-time position data of the virtual carrier.
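One hedged reading of "motion track and initial motion data" is a per-frame ballistic integration: the release captures the controller's position and velocity, and each frame updates the carrier's real-time position. The gravity constant and tuple layout are illustrative assumptions:

```python
GRAVITY = (0.0, -9.8, 0.0)  # assumed downward acceleration, m/s^2

def step_carrier(position, velocity, dt):
    """Advance a released carrier one frame along a ballistic trajectory.

    `position` and `velocity` start from the controller's real-time data at
    the moment of release (the initial motion data); each frame the client
    integrates them to obtain the carrier's new real-time position.
    """
    velocity = tuple(v + g * dt for v, g in zip(velocity, GRAVITY))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity
```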
In some examples, the apparatus further includes a media content acquisition module 1204, configured to:
when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a second state, send a media content request message to the data content server, the media content request message carrying the identifier of the virtual carrier, so that the data content server searches for the media content associated with the identifier of the virtual carrier;
receiving the media content sent by the data content server in response to the media content request message.
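The request/lookup exchange with the data content server amounts to an identifier-keyed store. This minimal sketch assumes a dictionary store and a request shaped as `{"carrier_id": ...}`; both are illustrative, not the patent's wire format:

```python
class DataContentServer:
    """Associates media content with virtual-carrier identifiers and serves lookups."""

    def __init__(self):
        self._store = {}

    def associate(self, carrier_id, media_content):
        # Called when the first client uploads content together with the
        # carrier's identifier.
        self._store[carrier_id] = media_content

    def handle_request(self, request):
        # The media content request message carries the carrier's identifier;
        # the server searches for the associated content and returns it
        # (None when nothing is associated yet).
        return self._store.get(request["carrier_id"])
```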
In some examples, the media content acquisition module 1204 is further configured to:
when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is the second state, search, according to the identifier of the virtual carrier, for the media content associated with the identifier.
In some examples, the apparatus further includes a play module 1205, configured to:
when it is determined that the virtual carrier is grabbed, play the media content in response to a third function message sent by the controller, and stop playing the media content in response to a fourth function message sent by the controller.
In some examples, the media content includes speech, and the play module 1205 is further configured to:
acquiring real-time position data of a head-mounted device associated with the first client;
determining real-time position data of the head of a first virtual character associated with the first client in the virtual space according to the real-time position data of the head-mounted device;
acquiring real-time position data of a controller associated with the first client, and determining the real-time position data of the virtual carrier according to the real-time position data of the controller; determining the real-time distance and the real-time direction of the virtual carrier relative to the head of the first virtual character according to the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier;
converting the speech into speech with a multi-dimensional spatial sound effect according to the real-time distance and the real-time direction;
playing the speech with the multi-dimensional spatial sound effect in response to a third function message sent by the controller, and stopping playing the speech with the multi-dimensional spatial sound effect in response to a fourth function message sent by the controller.
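A crude sketch of turning the real-time distance and direction into a spatial sound effect is per-ear gain computation: linear distance attenuation plus an equal-power stereo pan. The attenuation model, the maximum audible distance, and the angle convention are all assumptions; a real client would more likely hand these values to a 3D audio engine:

```python
import math

def spatialize(distance, direction, max_distance=10.0):
    """Map the carrier's real-time distance/direction relative to the
    listener's head into per-ear gains.

    `direction` is assumed to be the horizontal angle in radians,
    0 = straight ahead, positive = toward the listener's right.
    """
    # Linear attenuation: full volume at the head, silent beyond max_distance.
    attenuation = max(0.0, 1.0 - distance / max_distance)
    pan = math.sin(direction)  # -1 = fully left .. +1 = fully right
    # Equal-power panning keeps perceived loudness roughly constant.
    left = attenuation * math.sqrt((1.0 - pan) / 2.0)
    right = attenuation * math.sqrt((1.0 + pan) / 2.0)
    return left, right
```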
In some examples, the play module 1205 is further configured to:
acquiring real-time position data of the head-mounted device associated with the first client;
determining real-time position data of the head of the first virtual character in the virtual space according to the real-time position data of the head-mounted device;
when the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier meet a preset condition, notifying the application server to set the state of the virtual carrier to a third state, and causing the application server to send the state of the virtual carrier to the second client; when the state of the virtual carrier is the third state, the second client acquires the media content associated with the virtual carrier and plays it.
In some examples, the apparatus further includes an interaction module 1206, configured to:
monitoring a preset gesture operation of the controller;
generating interaction information when the preset gesture operation of the controller is monitored;
sending the interaction information to the application server, so that the application server updates the interaction information of the virtual carrier according to the interaction information and, when the spatial relationship in the virtual space between a second virtual character corresponding to a second client and the virtual carrier meets a preset condition, sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
In some examples, the interaction module 1206 is further configured to:
determining that the preset gesture operation of the controller is monitored when the controller is detected to vibrate, the vibration frequency of the controller meets a first preset condition while a first preset number of image frames are played, and the vibration frequency of the controller subsequently meets a second preset condition while a second preset number of image frames are played.
In some examples, the interaction module 1206 is further configured to:
determining that the preset gesture operation of the controller is monitored when the time interval from the vibration frequency of the controller meeting a first preset condition to the vibration frequency of the controller meeting a second preset condition exceeds a preset duration.
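The frame-window variant of the vibration-based gesture check can be sketched as follows: the gesture counts as monitored only if the vibration frequency satisfies the first condition throughout the first window of image frames and the second condition throughout the second window. The sample thresholds and the per-frame frequency lists are hypothetical:

```python
def gesture_detected(freqs_window1, freqs_window2,
                     cond1=lambda f: f >= 30.0,   # assumed first preset condition
                     cond2=lambda f: f < 5.0):    # assumed second preset condition
    """Check a two-window vibration pattern of the controller.

    `freqs_window1`/`freqs_window2` hold the vibration frequency sampled
    while the first and second preset numbers of image frames are played.
    """
    return (all(cond1(f) for f in freqs_window1)
            and all(cond2(f) for f in freqs_window2))
```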
The present application also provides a media content transmitting apparatus 1300, applied to an application server, the apparatus including:
a first information sending module 1301, configured to send, to a first client, information of at least one virtual carrier and information of a virtual control body associated with the first client, where the information of the at least one virtual carrier includes the identifier and the state of each virtual carrier and its real-time position data in the virtual space, and the information of the virtual control body includes initial position information of the virtual control body, so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body and, when the virtual control body grabs one virtual carrier of the at least one virtual carrier and the state of the virtual carrier is the first state, acquires input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
a message receiving module 1302, configured to receive a notification message sent by the first client, and set the state of the virtual carrier to a second state according to the notification message;
a second information sending module 1303, configured to send the information of the virtual carrier to a second client, so that the second client obtains the media content associated with the identifier of the virtual carrier from the data content server.
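The server-side flow above is essentially a per-carrier state machine: the carrier starts empty (first state), is marked as carrying content when the first client's notification arrives (second state), and its state is then shared with second clients. The state names and data shapes below are illustrative, not the patent's terminology:

```python
# Illustrative state labels: first state = empty, second state = content
# attached, third state = being played to nearby clients.
EMPTY, RECORDED, PLAYING = 1, 2, 3

class VirtualCarrier:
    def __init__(self, carrier_id):
        self.carrier_id = carrier_id
        self.state = EMPTY

class ApplicationServer:
    def __init__(self):
        self.carriers = {}

    def on_notification(self, carrier_id):
        # The first client notifies that media content was associated with
        # this carrier, so the server moves it to the second state.
        self.carriers[carrier_id].state = RECORDED

    def info_for_second_client(self, carrier_id):
        # Information sent to a second client; the client uses the identifier
        # to fetch the associated media content from the data content server.
        carrier = self.carriers[carrier_id]
        return {"id": carrier.carrier_id, "state": carrier.state}
```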
In some examples, the apparatus further comprises:
an interaction information receiving module 1304, configured to receive the interaction information and the identifier of the virtual carrier sent by the first client, where the first client generates the interaction information when it determines that the virtual carrier is grabbed and monitors a predetermined gesture operation of a controller associated with the first client;
an interaction information updating module 1305, configured to update the interaction information of the virtual carrier according to the received interaction information and associate the interaction information of the virtual carrier with the identifier of the virtual carrier; and
an interaction information sending module 1306, configured to send the interaction information of the virtual carrier to the second client when the spatial relationship in the virtual space between a virtual object associated with the second client and the virtual carrier satisfies a preset condition, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method described above for the first client.
Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method described above for the application server.
Fig. 14 shows a configuration diagram of a computing device in which the media content transmitting apparatus 1200 and the media content transmitting apparatus 1300 are located. As shown in fig. 14, the computing device includes one or more processors (CPUs) 1402, a communication module 1404, a memory 1406, a user interface 1410, and a communication bus 1408 for interconnecting these components.
The processor 1402 can receive and transmit data via the communication module 1404 to enable network communication and/or local communication.
User interface 1410 includes one or more output devices 1412, including one or more speakers and/or one or more visual displays. User interface 1410 also includes one or more input devices 1414, for example a keyboard, a mouse, a voice command input unit or microphone, a touch-screen display, a touch-sensitive tablet, a gesture-capture camera, or other input buttons or controls.
Memory 1406 may be high speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Memory 1406 stores sets of instructions executable by processor 1402, including:
an operating system 1416, including programs for handling various basic system services and for performing hardware related tasks;
an application 1418, including various application programs for media content transmission, which can implement the processing flows in the above examples and may include, for example, some or all of the units or modules in the media content transmitting apparatus 1200 or the media content transmitting apparatus 1300. At least one of the units in the media content transmitting apparatus 1200 or the media content transmitting apparatus 1300 may store machine-executable instructions. By executing the machine-executable instructions in at least one of the units in the memory 1406, the processor 1402 can implement the functionality of at least one of the units or modules described above.
It should be noted that not all steps and modules in the above flows and structures are necessary, and some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as required. The division of each module is only for convenience of describing adopted functional division, and in actual implementation, one module may be divided into multiple modules, and the functions of multiple modules may also be implemented by the same module, and these modules may be located in the same device or in different devices.
The hardware modules in the embodiments may be implemented in hardware or a hardware platform plus software. The software includes machine-readable instructions stored on a non-volatile storage medium. Thus, embodiments may also be embodied as software products.
In various examples, the hardware may be implemented by specialized hardware or hardware executing machine-readable instructions. For example, the hardware may be specially designed permanent circuits or logic devices (e.g., special purpose processors, such as FPGAs or ASICs) for performing the specified operations. The hardware may also include programmable logic devices or circuits temporarily configured by software (e.g., including a general purpose processor or other programmable processor) to perform certain operations.
In addition, each example of the present application can be implemented by a data processing program executed by a data processing apparatus such as a computer. Clearly, such a data processing program constitutes the present application. In addition, a data processing program is generally stored in a storage medium and is executed by directly reading the program out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or memory) of the data processing apparatus. Therefore, such a storage medium also constitutes the present application, which further provides a non-volatile storage medium storing a data processing program that can be used to perform any one of the above method examples of the present application.
The machine-readable instructions corresponding to the modules of Fig. 14 may cause an operating system or the like running on the computer to perform some or all of the operations described herein. The non-volatile computer-readable storage medium may be a memory provided in an expansion board inserted into the computer or a memory provided in an expansion unit connected to the computer. A CPU or the like mounted on the expansion board or the expansion unit may perform part or all of the actual operations according to the instructions.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (40)

  1. A media content sending method is applied to a first client side and comprises the following steps:
    displaying a virtual space, wherein the virtual space comprises at least one virtual carrier and a virtual control body corresponding to the first client;
    when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquiring input media content, sending the media content and the identifier of the virtual carrier to a data content server so that the data content server associates the media content with the identifier of the virtual carrier, and notifying an application server to set the state of the virtual carrier to a second state;
    and controlling, by the virtual control body, the virtual carrier to be in a released state.
  2. The method of claim 1, wherein the displaying a virtual space comprises:
    receiving information of the at least one virtual carrier and information of the virtual control body, which are sent by an application server, wherein the information of the at least one virtual carrier comprises: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body;
    and displaying the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
  3. The method of claim 1, wherein, when the virtual control body grabs a virtual carrier of the at least one virtual carrier, the obtaining input media content when the state of the virtual carrier is a first state comprises:
    acquiring real-time position data of a controller associated with the first client;
    receiving real-time position data of the virtual carrier sent by an application server;
    when the virtual carrier is determined to be selected according to the real-time position data of the virtual carrier and the real-time position data of the controller, the position data and/or the motion data of the virtual control body and the virtual carrier in the virtual space are updated in response to a first interactive message sent by the controller, and the virtual control body grabs the virtual carrier;
    and when the state of the virtual carrier is a first state, acquiring the input media content.
  4. The method of claim 1, wherein the controlling, by the virtual control body, the virtual carrier to be in a released state comprises:
    updating the position data and the motion data of the virtual carrier and the virtual control body in the virtual space in response to a second interactive message sent by a controller associated with the first client;
    and displaying the virtual control body to release the virtual carrier according to the position data and the motion data of the virtual carrier and the virtual control body in the virtual space.
  5. The method of claim 2, wherein the virtual space further comprises a first virtual character corresponding to the first client;
    wherein the receiving the information of the at least one virtual carrier and the information of the virtual control body sent by the application server comprises: when the spatial relationship between the first virtual character and each virtual carrier of the at least one virtual carrier meets a preset condition, the application server sends the information of the at least one virtual carrier to the first client.
  6. The method of claim 3, wherein the virtual carrier is determined to be selected when it is determined, according to the real-time position data of the virtual carrier and the real-time position data of the controller, that the virtual control body collides with the virtual carrier.
  7. The method of claim 3, wherein the virtual space further comprises a virtual ray associated with the virtual control body, the virtual ray being emitted from the virtual control body;
    the method further comprises:
    determining real-time position data of the virtual ray in the virtual space according to the real-time position data of the controller;
    and when it is determined, according to the real-time position data of the virtual ray and the real-time position data of the virtual carrier, that the virtual ray collides with the virtual carrier, determining that the virtual carrier is selected.
  8. The method of claim 1, wherein the obtaining the input media content comprises:
    responding to a first function message sent by a controller associated with the first client, and starting to receive externally input media data through a data acquisition device associated with the first client;
    stopping receiving media data in response to a second function message transmitted from the controller;
    media content is generated from the received media data.
  9. The method of claim 4, wherein the updating the position data and the motion data of the virtual carrier and the virtual control body in the virtual space in response to a second interactive message sent by a controller associated with the first client comprises:
    responding to the second interactive message, determining the motion track and the initial motion data of the virtual carrier according to the real-time position data of the controller, and updating the real-time position data and the motion data of the virtual control body;
    updating real-time position data of the virtual carrier according to the motion track and the initial motion data;
    wherein the displaying of the virtual control body to release the virtual carrier according to the position data and the motion data of the virtual carrier and the virtual control body in the virtual space comprises:
    displaying the virtual carrier moving along the motion track according to the real-time position data and the motion data of the virtual control body and the real-time position data of the virtual carrier.
  10. The method of claim 1, further comprising:
    when the virtual control body grabs a virtual carrier in the at least one virtual carrier, if the state of the virtual carrier is a second state, sending a media content request message to the data content server, wherein the media content request message carries the identifier of the virtual carrier, so that the data content server searches media content associated with the identifier of the virtual carrier;
    receiving the media content sent by the data content server in response to the media content request message.
  11. The method of claim 1, further comprising:
    when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is the second state, searching, according to the identifier of the virtual carrier, for the media content associated with the identifier.
  12. The method of claim 10 or 11, further comprising:
    when the virtual carrier is determined to be grabbed, responding to a third function message sent by the controller, and playing the media content; and stopping playing the media content in response to a fourth function message sent by the controller.
  13. The method of claim 10 or 11, wherein the media content comprises speech, the method further comprising:
    acquiring real-time position data of the head-mounted equipment associated with the first client;
    determining real-time position data of a head of a first virtual character associated with the first client in a virtual space according to the real-time position data of the head-mounted device;
    acquiring real-time position data of a controller associated with the first client, and determining the real-time position data of the virtual carrier according to the real-time position data of the controller;
    determining the real-time distance and the real-time direction of the virtual carrier relative to the head of the first virtual character according to the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier;
    converting the speech into speech with a multi-dimensional spatial sound effect according to the real-time distance and the real-time direction;
    playing the speech with the multi-dimensional spatial sound effect in response to a third function message sent by the controller, and stopping playing the speech with the multi-dimensional spatial sound effect in response to a fourth function message sent by the controller.
  14. The method of claim 12 or 13, further comprising:
    obtaining real-time location data of a headset associated with the first client,
    determining real-time position data of the head of the first virtual character in a virtual space according to the real-time position data of the head-mounted device;
    when the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier meet a preset condition, notifying the application server to set the state of the virtual carrier to a third state and to send the state of the virtual carrier to one or more second clients, wherein the spatial relationship between the virtual character corresponding to each of the one or more second clients and the first virtual character meets a preset condition, and each second client, in response to receiving the state of the virtual carrier, acquires and plays the media content associated with the virtual carrier.
  15. The method according to any one of claims 12-14, wherein the method further comprises:
    monitoring a preset gesture operation of the controller;
    generating interaction information when the preset gesture operation of the controller is monitored;
    and sending the interaction information to the application server, so that the application server updates the interaction information of the virtual carrier according to the interaction information and, when the spatial relationship in the virtual space between a second virtual character corresponding to a second client and the virtual carrier meets a preset condition, sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
  16. The method of claim 15, wherein the preset gesture operation of the controller is determined to be monitored when the controller is detected to vibrate, the vibration frequency of the controller meets a first preset condition while a first preset number of image frames are played, and the vibration frequency of the controller subsequently meets a second preset condition while a second preset number of image frames are played.
  17. The method of claim 15, further comprising:
    when the time interval from the vibration frequency of the controller meeting a first preset condition to the vibration frequency of the controller meeting a second preset condition exceeds a preset duration, determining that the preset gesture operation of the controller is monitored.
  18. A media content sending method is applied to an application server, and comprises the following steps:
    sending, to a first client, information of at least one virtual carrier and information of a virtual control body associated with the first client, wherein the information of the at least one virtual carrier comprises the identifier and the state of each virtual carrier and its real-time position data in the virtual space, and the information of the virtual control body comprises initial position information of the virtual control body, so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body and, when the virtual control body grabs one virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquires input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
    receiving a notification message sent by the first client, and setting the state of the virtual carrier to be a second state according to the notification message;
    and sending the information of the virtual carrier to a second client so as to enable the second client to acquire the media content associated with the identification of the virtual carrier from the data content server.
  19. The method of claim 18, further comprising:
    receiving the interaction information and the identifier of the virtual carrier sent by the first client, wherein the interaction information is generated when the first client determines that the virtual carrier is grabbed and monitors a preset gesture operation of a controller associated with the first client;
    updating the interaction information of the virtual carrier according to the interaction information, and associating the interaction information of the virtual carrier with the identifier of the virtual carrier;
    and when the spatial relationship between the virtual object associated with the second client and the virtual carrier in the virtual space meets a preset condition, sending the interaction information of the virtual carrier to the second client so that the second client displays the interaction information of the virtual carrier in the virtual space when determining to select the virtual carrier.
  20. A media content transmitting apparatus, the apparatus comprising:
    a display module, configured to display a virtual space, the virtual space comprising at least one virtual carrier and a virtual control body corresponding to a first client;
    a media content sending module, configured to: when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquire input media content, send the media content and the identifier of the virtual carrier to a data content server so that the data content server associates the media content with the identifier of the virtual carrier, and notify an application server to set the state of the virtual carrier to a second state; and
    a release module, configured to control, by the virtual control body, the virtual carrier to be in a released state.
  21. The apparatus of claim 20, wherein the presentation module is further configured to:
    receiving information of the at least one virtual carrier and information of the virtual control body, which are sent by an application server, wherein the information of the at least one virtual carrier comprises: the identification and the state of each virtual carrier and real-time position data in the virtual space, wherein the information of the virtual control body comprises initial position information of the virtual control body;
    and displaying the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
  22. The apparatus of claim 20, wherein the media content sending module is configured to:
    acquiring real-time position data of a controller associated with the first client;
    receiving real-time position data of the virtual carrier sent by an application server;
    when the virtual carrier is determined to be selected according to the real-time position data of the virtual carrier and the real-time position data of the controller, the position data and/or the motion data of the virtual control body and the virtual carrier in the virtual space are updated in response to a first interactive message sent by the controller, and the virtual control body grabs the virtual carrier; and when the state of the virtual carrier is a first state, acquiring the input media content.
  23. The apparatus of claim 20, wherein the release module is configured to:
    updating the position data and the motion data of the virtual carrier and the virtual control body in the virtual space in response to a second interactive message sent by a controller associated with the first client;
    and displaying the virtual control body to release the virtual carrier according to the position data and the motion data of the virtual carrier and the virtual control body in the virtual space.
  24. The apparatus of claim 21, wherein the virtual space further comprises a first virtual character corresponding to the first client; the display module is further configured to:
    when the spatial relationship between the first virtual character and each virtual carrier of the at least one virtual carrier meets a preset condition, the application server sends the information of the at least one virtual carrier to the first client.
  25. The apparatus of claim 22, wherein the media content sending module is further configured to:
    when it is determined, according to the real-time position data of the virtual carrier and the real-time position data of the controller, that the virtual control body collides with the virtual carrier, determine that the virtual carrier is selected.
  26. The apparatus of claim 22, wherein the virtual space further comprises a virtual ray associated with the virtual control body, the virtual ray being emitted from the virtual control body;
    the media content sending module is further configured to:
    determining real-time position data of the virtual ray in the virtual space according to the real-time position data of the controller;
    and when it is determined, according to the real-time position data of the virtual ray and the real-time position data of the virtual carrier, that the virtual ray collides with the virtual carrier, determine that the virtual carrier is selected.
  27. The apparatus of claim 20, wherein the media content sending module is further configured to:
    responding to a first function message sent by a controller associated with the first client, and starting to receive externally input media data through a data acquisition device associated with the first client;
    stopping receiving media data in response to a second function message transmitted from the controller;
    media content is generated from the received media data.
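The start/stop capture flow in the claim above can be modeled as a small message-driven state machine. Everything here is illustrative: the message names ("FUNC_1", "FUNC_2") and byte-chunk data model are hypothetical stand-ins for the patent's function messages and media data.

```python
class MediaRecorder:
    """Hypothetical recorder: a first function message starts capture,
    a second one stops it and assembles the media content."""

    def __init__(self):
        self.recording = False
        self.chunks = []

    def on_message(self, message, data=None):
        if message == "FUNC_1":       # first function message: start
            self.recording = True
            self.chunks = []
        elif message == "FUNC_2":     # second function message: stop
            self.recording = False
            return b"".join(self.chunks)   # generated media content
        elif self.recording and data is not None:
            self.chunks.append(data)  # externally input media data
        return None
```

On stop, the assembled content would then be sent to the data content server together with the virtual carrier's identifier, as the earlier claims describe.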
  28. The apparatus of claim 23, wherein the release module is further configured to:
    in response to the second interactive message, determine the motion track and the initial motion data of the virtual carrier according to the real-time position data of the controller, and update the real-time position data and the motion data of the virtual control body;
    update the real-time position data of the virtual carrier according to the motion track and the initial motion data; and
    display the virtual carrier moving along the motion track according to the real-time position data and the motion data of the virtual control body and the real-time position data of the virtual carrier.
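One common way to realize the released carrier's motion track from initial motion data is a ballistic (thrown-object) integration step. This is a sketch under the assumption of simple gravity and semi-implicit Euler integration; the patent does not commit to any particular physics model.

```python
def simulate_throw(initial_pos, initial_vel, dt, steps,
                   gravity=(0.0, -9.8, 0.0)):
    """Return the carrier's positions along a ballistic trajectory,
    starting from the release pose and velocity (initial motion data)."""
    pos = list(initial_pos)
    vel = list(initial_vel)
    path = [tuple(pos)]
    for _ in range(steps):
        for i in range(3):
            vel[i] += gravity[i] * dt   # integrate acceleration
            pos[i] += vel[i] * dt       # integrate velocity
        path.append(tuple(pos))
    return path
```

Each tick's position would be written back as the carrier's real-time position data so that all clients render the same arc.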
  29. The apparatus of claim 20, wherein the apparatus further comprises a media content acquisition module configured to:
    when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a second state, send a media content request message carrying the identifier of the virtual carrier to the data content server, so that the data content server searches for the media content associated with the identifier of the virtual carrier; and
    receive the media content sent by the data content server in response to the media content request message.
  30. The apparatus of claim 20, wherein the media content acquisition module is further configured to:
    when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is the second state, search for the media content associated with the identifier of the virtual carrier according to the identifier.
  31. The apparatus of claim 29 or 30, wherein the apparatus further comprises a playback module configured to:
    when it is determined that the virtual carrier is grabbed, play the media content in response to a third function message sent by the controller, and stop playing the media content in response to a fourth function message sent by the controller.
  32. The apparatus of claim 29 or 30, wherein the media content comprises voice, and the playback module is further configured to:
    acquire real-time position data of a head-mounted device associated with the first client;
    determine real-time position data, in the virtual space, of the head of a first virtual character associated with the first client according to the real-time position data of the head-mounted device;
    acquire real-time position data of a controller associated with the first client, and determine the real-time position data of the virtual carrier according to the real-time position data of the controller;
    determine the real-time distance and the real-time direction of the virtual carrier relative to the head of the first virtual character according to the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier;
    convert the voice into voice with a multi-dimensional spatial sound effect according to the real-time distance and the real-time direction; and
    play the voice with the multi-dimensional spatial sound effect in response to a third function message sent by the controller, and stop playing it in response to a fourth function message sent by the controller.
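The distance-and-direction spatialization in the claim above can be approximated with inverse-distance attenuation plus equal-power stereo panning. This is a minimal sketch assuming the listener's head faces the +z axis with no rotation; a full implementation would rotate the source into head-relative coordinates using the headset's orientation, which the patent leaves unspecified.

```python
import math

def spatialize(head_pos, carrier_pos, ref_dist=1.0):
    """Derive an attenuation gain and left/right channel gains from the
    carrier's real-time distance and direction relative to the head."""
    dx = carrier_pos[0] - head_pos[0]
    dy = carrier_pos[1] - head_pos[1]
    dz = carrier_pos[2] - head_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    gain = ref_dist / max(dist, ref_dist)   # inverse-distance attenuation
    # Pan in [-1, 1]: +x is assumed to be the listener's right.
    pan = 0.0 if dist == 0 else max(-1.0, min(1.0, dx / dist))
    left = gain * math.sqrt((1 - pan) / 2)  # equal-power panning law
    right = gain * math.sqrt((1 + pan) / 2)
    return left, right
```

The two channel gains would be applied to the decoded voice samples each audio block, updating as the head and carrier move.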
  33. The apparatus of claim 31 or 32, wherein the playback module is further configured to:
    acquire real-time position data of the head-mounted device associated with the first client;
    determine real-time position data of the head of the first virtual character in the virtual space according to the real-time position data of the head-mounted device; and
    when the real-time position data of the head of the first virtual character and the real-time position data of the virtual carrier meet a preset condition, notify the application server to set the state of the virtual carrier to a third state, so that when the application server sends the state of the virtual carrier to the second client and the state is the third state, the second client acquires the media content associated with the virtual carrier and plays it.
  34. The apparatus of any one of claims 31 to 33, wherein the apparatus further comprises an interaction module configured to:
    monitor a preset gesture operation of the controller;
    generate interaction information when the preset gesture operation of the controller is detected; and
    send the interaction information to the application server, so that the application server updates the interaction information of the virtual carrier accordingly and, when the spatial relationship in the virtual space between a second virtual character corresponding to a second client and the virtual carrier meets a preset condition, sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
  35. The apparatus of claim 34, wherein the interaction module is further configured to:
    determine that the preset gesture operation of the controller is detected when vibration of the controller is detected, the vibration frequency of the controller meets a first preset condition during playback of a first preset number of image frames, and the vibration frequency of the controller subsequently meets a second preset condition during playback of a second preset number of image frames.
  36. The apparatus of claim 34, wherein the interaction module is further configured to:
    determine that the preset gesture operation of the controller is detected when the time interval between the vibration frequency of the controller meeting the first preset condition and the vibration frequency of the controller meeting the second preset condition exceeds a preset duration.
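Claims 35 and 36 describe recognizing the preset gesture from two windows of vibration-frequency samples separated by a time interval. A hedged sketch follows; the thresholds, window semantics, and minimum-interval value are placeholders (the patent does not disclose concrete numbers), and per claim 36 the interval here must exceed the preset duration.

```python
def gesture_detected(window_a_freqs, window_b_freqs, interval_s,
                     first_threshold=5.0, second_threshold=8.0,
                     min_interval_s=0.2):
    """Hypothetical gesture check over two frame windows of
    controller vibration-frequency samples."""
    # First window of image frames: frequency meets condition one
    first_ok = all(f >= first_threshold for f in window_a_freqs)
    # Later window of image frames: frequency meets condition two
    second_ok = all(f >= second_threshold for f in window_b_freqs)
    # Claim 36: the interval between the two windows must exceed
    # a preset duration (placeholder value here).
    return first_ok and second_ok and interval_s > min_interval_s
```

On a positive result the client would generate the interaction information and forward it to the application server, as claim 34 describes.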
  37. A media content sending apparatus, applied to an application server, the apparatus comprising:
    a first information sending module, configured to send, to a first client, information of at least one virtual carrier and information of a virtual control body associated with the first client, where the information of the at least one virtual carrier comprises the identifier, the state, and real-time position data in the virtual space of each virtual carrier, and the information of the virtual control body comprises initial position information of the virtual control body, so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body and, when the virtual control body grabs a virtual carrier of the at least one virtual carrier and the state of the virtual carrier is a first state, acquires input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
    a message receiving module, configured to receive a notification message sent by the first client and set the state of the virtual carrier to a second state according to the notification message; and
    a second information sending module, configured to send the information of the virtual carrier to a second client, so that the second client obtains the media content associated with the identifier of the virtual carrier from the data content server.
  38. The apparatus of claim 37, wherein the apparatus further comprises:
    an interaction information receiving module, configured to receive interaction information and the identifier of the virtual carrier sent by the first client, where the interaction information is generated when the first client determines that the virtual carrier is grabbed and detects a preset gesture operation of a controller associated with the first client;
    an interaction information updating module, configured to update the interaction information of the virtual carrier according to the received interaction information and associate the interaction information of the virtual carrier with the identifier of the virtual carrier; and
    an interaction information sending module, configured to send the interaction information of the virtual carrier to the second client when the spatial relationship in the virtual space between a virtual object associated with the second client and the virtual carrier meets a preset condition, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
  39. A computer-readable storage medium storing computer-readable instructions that, when executed, cause at least one processor to perform the method of any one of claims 1 to 17.
  40. A computer-readable storage medium storing computer-readable instructions that, when executed, cause at least one processor to perform the method of any one of claims 18 to 19.
CN201880003415.0A 2018-01-25 2018-01-25 Media content transmitting method, device and storage medium Active CN110431513B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/074074 WO2019144330A1 (en) 2018-01-25 2018-01-25 Media content sending method and device, and storage medium

Publications (2)

Publication Number Publication Date
CN110431513A true CN110431513A (en) 2019-11-08
CN110431513B CN110431513B (en) 2020-11-27

Family

ID=67395811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880003415.0A Active CN110431513B (en) 2018-01-25 2018-01-25 Media content transmitting method, device and storage medium

Country Status (2)

Country Link
CN (1) CN110431513B (en)
WO (1) WO2019144330A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658146A (en) * 2016-12-28 2017-05-10 上海翌创网络科技股份有限公司 Bullet screen method based on virtual reality
KR20170082028A (en) * 2016-01-05 2017-07-13 주식회사 비블톡 Rim motion apparatus
CN107003797A (en) * 2015-09-08 2017-08-01 苹果公司 Intelligent automation assistant in media environment
CN107132917A (en) * 2017-04-25 2017-09-05 腾讯科技(深圳)有限公司 For the hand-type display methods and device in virtual reality scenario
CN107430790A (en) * 2015-04-09 2017-12-01 奇内莫伊北美有限责任公司 System and method for providing interactive virtual environments
CN107562201A (en) * 2017-09-08 2018-01-09 网易(杭州)网络有限公司 Orient exchange method, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089071B2 (en) * 2016-06-02 2018-10-02 Microsoft Technology Licensing, Llc Automatic audio attenuation on immersive display devices

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784850A (en) * 2020-07-03 2020-10-16 深圳市瑞立视多媒体科技有限公司 Object capture simulation method based on illusion engine and related equipment
CN111784850B (en) * 2020-07-03 2024-02-02 深圳市瑞立视多媒体科技有限公司 Object grabbing simulation method based on illusion engine and related equipment

Also Published As

Publication number Publication date
WO2019144330A1 (en) 2019-08-01
CN110431513B (en) 2020-11-27

Similar Documents

Publication Publication Date Title
US11231841B2 (en) Continuation of playback of media content by different output devices
JP6724110B2 (en) Avatar display system in virtual space, avatar display method in virtual space, computer program
US10445941B2 (en) Interactive mixed reality system for a real-world event
US8867886B2 (en) Surround video playback
US9632683B2 (en) Methods, apparatuses and computer program products for manipulating characteristics of audio objects by using directional gestures
US11250636B2 (en) Information processing device, information processing method, and program
CN110213616B (en) Video providing method, video obtaining method, video providing device, video obtaining device and video providing equipment
EP2840463A1 (en) Haptically enabled viewing of sporting events
CN105450736B (en) Method and device for connecting with virtual reality
US20130129304A1 (en) Variable 3-d surround video playback with virtual panning and smooth transition
US10403327B2 (en) Content identification and playback
JP6843164B2 (en) Programs, methods, and information processing equipment
CN111045511A (en) Gesture-based control method and terminal equipment
CN109151565A (en) Play method, apparatus, electronic equipment and the storage medium of voice
JP6470374B1 (en) Program and information processing apparatus executed by computer to provide virtual reality
CN110431513B (en) Media content transmitting method, device and storage medium
EP3549009A1 (en) Realtime recording of gestures and/or voice to modify animations
EP4270155A1 (en) Virtual content
WO2023281820A1 (en) Information processing device, information processing method, and storage medium
CN110753233B (en) Information interaction playing method and device, electronic equipment and storage medium
JP2018114241A (en) Information processing device and game image/sound generation method
CN116964544A (en) Information processing device, information processing terminal, information processing method, and program
JP2021179933A (en) Communication terminal, communication system, communication method, and program
CN115604408A (en) Video recording method and device under virtual scene, storage medium and electronic equipment
CN118059485A (en) Audio processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant