WO2019144330A1 - Media content sending method, apparatus, and storage medium (媒体内容发送方法、装置及存储介质) - Google Patents

Media content sending method, apparatus, and storage medium (媒体内容发送方法、装置及存储介质)

Info

Publication number
WO2019144330A1
WO2019144330A1 (PCT/CN2018/074074, CN2018074074W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
virtual carrier
client
carrier
real
Prior art date
Application number
PCT/CN2018/074074
Other languages
English (en)
French (fr)
Inventor
陈志浩
张振毅
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to PCT/CN2018/074074 priority Critical patent/WO2019144330A1/zh
Priority to CN201880003415.0A priority patent/CN110431513B/zh
Publication of WO2019144330A1 publication Critical patent/WO2019144330A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of virtual reality technologies, and in particular, to a method, an apparatus, and a storage medium for transmitting media content in a virtual reality environment.
  • Virtual Reality (VR) technology uses a computer or other intelligent computing device, combined with photoelectric sensing technology to generate a virtual environment within a specific range of realistic viewing, listening and touch integration.
  • the virtual space generated by virtual reality technology provides the user with a sensory experience of sight, hearing and touch, resulting in an immersive experience of the virtual space.
  • Virtual reality technology has been widely used in many fields because it can surpass the limitations of physical conditions and create diverse scenarios to adapt to diverse application needs.
  • virtual reality technology can be applied in the field of games, for example, VR shooting games, tennis games, etc., and immersive scenes provided by virtual reality technology increase the fun of the game. Virtual reality technology is receiving more and more attention.
  • the application example provides a media content sending method, which is applied to a first client, and the method includes:
  • the virtual space includes at least one virtual carrier and a virtual control body corresponding to the first client;
  • when the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is a first state, the input media content is obtained, and the media content and the identifier of the virtual carrier are sent to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
  • the virtual control body controls the virtual carrier to be in a released state.
  • the application example further provides a media content sending method, which is applied to an application server, and includes:
  • the information of the at least one virtual carrier includes: an identifier and a status of each virtual carrier, and real-time location data in the virtual space; the information of the virtual control body includes initial location information of the virtual control body, so that the first client displays a virtual space based on the information of the at least one virtual carrier and the information of the virtual control body; when the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, the input media content is obtained, and the media content and the identifier of the virtual carrier are sent to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
  • the application example further provides a media content sending apparatus, where the apparatus includes:
  • a display module configured to display a virtual space, where the virtual space includes at least one virtual carrier and a virtual control body corresponding to the first client;
  • a media content sending module configured to: when the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is a first state, acquire the input media content, send the media content and the identifier of the virtual carrier to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to a second state.
  • the application example further provides a media content sending apparatus, where the apparatus includes:
  • a first information sending module configured to send information of the at least one virtual carrier and information of the virtual control body associated with the first client to the first client, where the information of the at least one virtual carrier includes: an identifier and a status of each virtual carrier, and real-time location data in the virtual space, and the information of the virtual control body includes initial location information of the virtual control body, so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
  • when the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, the first client acquires the input media content and sends the media content and the identifier of the virtual carrier to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
  • a message receiving module configured to receive a notification message sent by the first client, and set a state of the virtual carrier to a second state according to the notification message;
  • a second information sending module configured to send information of the virtual carrier to the second client, so that the second client acquires media content associated with the identifier of the virtual carrier from the data content server .
  • the present application examples also provide a computer readable storage medium storing computer readable instructions that cause at least one processor to perform the method described above as applied to a first client.
  • the present application examples also provide a computer readable storage medium storing computer readable instructions that cause at least one processor to perform the method described above as applied to an application server.
  • FIG. 1 is a system architecture diagram related to an example of the present application
  • FIG. 2 is a flowchart of a method for transmitting media content applied to a first client according to an example of the present application
  • FIG. 3A is a schematic diagram of selecting a virtual carrier in a virtual space
  • FIG. 3B is a schematic diagram of capturing a virtual carrier in a virtual space
  • FIG. 3C is a schematic diagram of media content associated with a virtual carrier in a virtual space
  • FIG. 3D is a schematic diagram of releasing a virtual carrier in a virtual space
  • FIG. 4A is a schematic structural diagram of the head-mounted device 400 of the first VR device
  • FIG. 4B is a schematic structural diagram of the first VR device controller 410
  • FIG. 5 is a flowchart of a method for obtaining media content associated with a virtual carrier after selecting a virtual carrier according to an example of the present application
  • FIG. 6 is a detailed flowchart of obtaining media content associated with a virtual carrier according to an example of the present application
  • FIG. 8 is a schematic diagram showing interactive information carried by a virtual carrier according to an example of the present application.
  • FIG. 9 is a schematic flowchart of a listener's "shake" gesture operation in an example of the present application.
  • FIG. 10 is a schematic flowchart of a method for sending media content applied to an application server according to an example of the present application
  • FIG. 11 is a message interaction diagram of voice transmission of a virtual voice ball according to an example of the present application.
  • FIG. 12 is a schematic structural diagram of an apparatus for transmitting media content according to an example of the present application.
  • FIG. 13 is a schematic structural diagram of a media content sending apparatus according to another example of the present application.
  • FIG. 14 is a schematic structural diagram of a computing device in an example of the present application.
  • the media content sending method proposed in the present application can be applied to a VR system.
  • FIG. 1 illustrates a VR system 100 that includes a first client 101, a second client 102, a first VR device 103, a second VR device 104, an application server 105, and a data content server 106.
  • a plurality of second clients 102 and corresponding second VR devices may be included.
  • the first client 101 and the second client 102 are connected to the application server 105 and the data content server 106 via the Internet.
  • the first client 101 and the second client 102 are VR clients (ie, VR APPs).
  • the first VR device 103 and the second VR device 104 may include a user-operable controller and wearable equipment (eg, various VR headsets, VR body-sensing devices, etc.).
  • the first client 101 can perform information interaction with the first VR device 103 to provide an immersive VR image for the user and complete corresponding operation functions
  • the second client 102 can perform information interaction with the second VR device 104 to provide an immersive experience for the user.
  • in some examples, the first VR device and the first client 101, and likewise the second VR device and the second client 102, are separate components.
  • the first VR device is integrated with the first client 101.
  • the second VR device is integrated with the second client 102.
  • according to the user's location and motion information in the virtual space, as provided by the wearable equipment, the VR client can display corresponding VR image data to the user, so as to bring an immersive experience to the user.
  • the VR client can also perform corresponding operations in response to instructions issued by the user operating the controller, such as: capturing virtual objects in the virtual space.
  • the VR client can generate VR panoramic image data according to the position data and motion data of the virtual object in the virtual space, such as: panoramic picture, panoramic video, VR game, and the like.
  • the application server 105 is a VR application server (referred to as a VR server).
  • the VR server stores real-time location data, motion data, and status data of virtual objects in the virtual space, and can perform corresponding processing in response to the request of the VR client. For example, in response to a login request of the VR client, real-time location data, motion data, and status data of the virtual object in the virtual space are transmitted to the VR client.
  • the data content server 106 is configured to receive media content uploaded by the VR client, where the media content is associated with a virtual carrier in the virtual space. The data content server 106 can also send the media content to the VR client in response to the request of the VR client.
  • the terminal device where the VR client is located refers to a terminal device having data computation and processing functions, including but not limited to a smart phone, a palmtop computer, a tablet computer, and the like (with a communication module installed).
  • the terminal device may also be integrated with the VR device.
  • operating systems are installed on these communication terminals, including but not limited to: the Android operating system, the Symbian operating system, the Windows Mobile operating system, and the Apple iOS operating system.
  • the above VR head-mounted display (HMD) includes a screen that can display a real-time picture.
  • the above data content server may be a CDN (Content Delivery Network) server.
  • the application provides a method 200 for sending media content, which is applied to a VR client. As shown in FIG. 2, the method includes the following steps:
  • S201 Display a virtual space, where the virtual space includes at least one virtual carrier and a virtual control body corresponding to the first client.
  • the virtual space includes one or more virtual carriers and one or more virtual characters.
  • the one or more virtual carriers are used to transfer information in the virtual space.
  • a client corresponding to one virtual character associates a virtual carrier with information; when that virtual character releases the virtual carrier and another virtual character captures it, the client corresponding to the other virtual character can obtain the information associated with the virtual carrier.
  • each virtual character corresponds to a client, and the user controls the VR device associated with the client to make the corresponding virtual character complete the corresponding operation.
  • the virtual control body is associated with the first client; for example, the virtual control body is the pair of virtual hands of the virtual character associated with the first client.
  • the client (including the first client) is a client developed based on a VR-enabled three-dimensional rendering engine (eg, UE4).
  • the first client includes 3D models of each virtual carrier and each virtual character.
  • the first client may obtain information of each virtual carrier and each virtual character from the application server, and place the 3D model of each virtual carrier and each virtual character in the virtual space according to that information.
  • all the mesh data to be rendered onto the screen is then generated.
  • the mesh data is rendered to generate a virtual reality image.
  • the generated virtual reality image is sent to the display screen of the head-mounted device of the first VR device associated with the first client, thereby displaying the corresponding virtual space.
  • the hands of the first virtual character are the above-mentioned virtual control body.
  • a virtual space as shown in FIG. 3A is shown, in which four virtual players 303 are included.
  • for the current player, only the player's virtual hand 301, that is, the virtual control body described above, is displayed.
  • the virtual control body corresponds to a controller (also called a handle) of the first VR device; by operating the controller, the user can control the virtual control body to perform the required operations.
  • the virtual space also includes a virtual speech balloon 302, which is the virtual carrier described above.
  • a virtual player may associate speech with the virtual speech ball 302 and then release it; the virtual player who captures the virtual speech ball 302 can then obtain the associated speech, so that transfer of voice is achieved through the virtual speech ball 302.
  • the number of the virtual voice balls in the virtual space may be one or more, which is not specifically limited herein.
  • S202 When the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is a first state, acquire the input media content and send the media content and the identifier of the virtual carrier to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to the second state.
  • the virtual carrier has different states, the first state of the virtual carrier is the initial state of the virtual carrier, that is, the state in which the virtual carrier is not associated with the media content, and the second state of the virtual carrier is the state after the virtual carrier associates the media content.
  • when the states of the virtual carrier differ, the virtual carrier has different colors in the virtual space.
  • the virtual speech ball of the first state is, for example, golden.
  • the virtual speech ball of the second state is, for example, red.
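The two states and their display colors described above can be sketched as a small state-to-color mapping; the names `CarrierState` and `carrier_color` are illustrative, not the patent's terminology:

```python
from enum import Enum

class CarrierState(Enum):
    """States of a virtual carrier (names are illustrative)."""
    INITIAL = 1    # first state: no media content associated yet
    RECORDED = 2   # second state: media content has been associated

# Hypothetical state-to-color mapping used when rendering the carrier.
CARRIER_COLORS = {
    CarrierState.INITIAL: "golden",
    CarrierState.RECORDED: "red",
}

def carrier_color(state: CarrierState) -> str:
    """Return the display color for a carrier in the given state."""
    return CARRIER_COLORS[state]
```

A client would look up the color whenever a state update arrives, e.g. `carrier_color(CarrierState.INITIAL)` yields `"golden"`.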
  • when the state of the captured virtual carrier is the first state, that is, the initial state, the virtual carrier can be associated with media content; the media content may be content stored locally and selected at the first client, or content collected by a peripheral associated with the first client, for example, voice recorded by the recording component 404.
  • the user can record a paragraph or sing a song by himself, and the recording component can be a microphone.
  • the first client includes a voice real-time transmission component that cooperates with the recording component.
  • the real-time voice transmission third-party library captures the voice from the recording component, such as a microphone, in real time in each frame, converts it into PCM data format, and saves it in the cache of the terminal where the first client is located.
  • while recording, a special effect can be added above the virtual carrier; as shown in FIG. 3C, a special effect 306 is added to the speech ball.
  • the PCM data in the cache is uploaded to the data content server 106 and saved as a PCM file, and the saved PCM file is associated with the identifier of the voice ball.
  • the data content server 106 can be a CDN server, and the CDN server can ensure that the client can update to the required voice file in any place at any time.
  • the state of the voice ball then changes: the first client notifies the application server 105, and the application server 105 sets the state of the voice ball to the second state, i.e., the recorded state.
  • when the first client communicates with the application server 105 and the data content server 106, it can do so via a network synchronization unit, such as the network synchronization module in the Unreal Engine.
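The per-frame PCM caching and final upload described above can be sketched roughly as follows; `VoiceRecorder`, `capture_frame`, and `finish` are hypothetical names, and a real client would use an actual audio capture library and network transport:

```python
import io

class VoiceRecorder:
    """Illustrative sketch: per-frame voice capture into a PCM cache,
    then an upload payload for the data content server on completion."""

    def __init__(self):
        self.cache = io.BytesIO()  # local cache on the client's terminal

    def capture_frame(self, pcm_chunk: bytes):
        # Each frame, the real-time voice library would convert microphone
        # samples to PCM and append them to the cache.
        self.cache.write(pcm_chunk)

    def finish(self, carrier_id: str) -> dict:
        # On completion, the cached PCM is uploaded together with the voice
        # ball's identifier, so the server can store the PCM file in
        # association with that identifier.
        return {"carrier_id": carrier_id, "pcm": self.cache.getvalue()}

rec = VoiceRecorder()
rec.capture_frame(b"\x00\x01")   # frame 1
rec.capture_frame(b"\x02\x03")   # frame 2
upload = rec.finish("ball-42")   # payload sent to the data content server
```

The upload payload then carries both the accumulated PCM bytes and the carrier identifier, matching the association step described above.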
  • S203 The virtual control body controls the virtual carrier to be in a released state.
  • the user can release the virtual carrier by operating the controller.
  • for example, the user can hold the controller and make a throwing action, so that the virtual control body in the virtual space throws the virtual carrier.
  • releasing the virtual carrier can also be realized in other manners, such as kicking, tossing, or batting the virtual carrier.
  • when the location data of the virtual carrier changes, the first client sends the real-time location data of the virtual carrier to the application server, and the application server pushes the real-time location data of the virtual carrier to the second client, so that the second client updates the location of the virtual carrier in the virtual space according to that real-time location data.
  • when the state of the virtual carrier changes, the color of the virtual carrier also changes; the application server pushes the state of the virtual carrier to the second client, so that the second client updates the color of the virtual carrier in the virtual space according to the state of the virtual carrier.
  • the application server 105 updates the state of the voice ball and the identifier of the voice ball to the second client in real time.
  • when the state of the virtual voice ball is the second state, the second client requests the media content associated with the voice ball from the CDN server; transmission and exchange of media content is thereby realized through the voice ball.
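The division of roles described above — the data content server storing media by carrier identifier, the application server pushing state updates, and the second client fetching content once the state becomes the second state — can be sketched as a minimal in-memory simulation. All class and method names here are illustrative:

```python
class ApplicationServer:
    """Sketch: stores carrier state and pushes updates to clients."""
    def __init__(self):
        self.carrier_state = {}   # carrier id -> state
        self.subscribers = []     # connected clients

    def set_state(self, carrier_id, state):
        self.carrier_state[carrier_id] = state
        for client in self.subscribers:   # real-time push to each client
            client.on_state(carrier_id, state)

class DataContentServer:
    """Sketch: associates media content with a carrier identifier."""
    def __init__(self):
        self.media = {}           # carrier id -> media content

    def upload(self, carrier_id, content):
        self.media[carrier_id] = content

    def fetch(self, carrier_id):
        return self.media.get(carrier_id)

class SecondClient:
    """Sketch: fetches media once a carrier reaches the second state."""
    def __init__(self, cdn):
        self.cdn = cdn
        self.received = None

    def on_state(self, carrier_id, state):
        if state == "second":     # recorded state
            self.received = self.cdn.fetch(carrier_id)

cdn = DataContentServer()
app = ApplicationServer()
viewer = SecondClient(cdn)
app.subscribers.append(viewer)

cdn.upload("ball-1", b"voice-pcm")   # first client uploads recorded voice
app.set_state("ball-1", "second")    # state change triggers the fetch
```

After the state update, `viewer.received` holds the voice data, mirroring how the second client obtains the media content associated with the voice ball.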
  • with the media content sending method provided by the present application, when the virtual carrier is captured, media content is input and uploaded to the data content server, the virtual carrier is associated with the media content, and the virtual carrier is then released in the virtual space.
  • the virtual carrier in the virtual space serves as a carrier of the media content, and the media content associated with the virtual carrier is transmitted between multiple clients as the virtual carrier is transferred in the virtual space, so that the exchange of media content has an immersive, three-dimensional quality.
  • the media content is associated with the virtual carrier in the virtual space to enhance the realism of VR social interaction.
  • when displaying the virtual space, the method includes the following steps:
  • S301 Receive information of the at least one virtual carrier and information of the virtual control body sent by the application server, where the information of the at least one virtual carrier includes: an identifier and a status of each virtual carrier, and real-time location data in the virtual space.
  • the information of the virtual control body includes initial location information of the virtual control body.
  • S302 Display the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body.
  • the first client obtains information about each virtual carrier and each virtual character in the virtual space from the application server, and then displays the virtual space according to the acquired information of each virtual carrier and information of each virtual character.
  • the first client 101 sends a login request to the application server 105, and the application server 105 sends the information of each virtual carrier in the virtual space and the information of each virtual character to the first A client 101.
  • the virtual character corresponds to a client, and the first virtual character corresponding to the first client is the virtual control body.
  • the virtual control body is the hands of the first virtual character.
  • the information of the virtual carrier includes: an identifier, a status of the virtual carrier, and real-time location data in the virtual space.
  • the virtual carrier can be a virtual voice ball in the virtual space; the virtual voice ball itself has the shape and elastic physical properties of a ball, so it can also serve the entertainment purposes of an ordinary ball.
  • the information of the virtual character includes an initial position of the virtual character in the virtual space.
  • each virtual carrier is displayed at a corresponding position of the virtual space according to the location information of each virtual carrier.
  • each virtual character is displayed in the virtual space according to the initial location information of each virtual character.
  • the first client also receives real-time location data of each virtual carrier and each virtual character in the virtual space sent by the application server, and thereby updates the positions of each virtual carrier and each virtual character in the virtual space.
  • when the first client displays the virtual space, for the first virtual character associated with the first client, only the hands (the virtual control body) of the first virtual character may be displayed in the virtual space.
  • the location data of the virtual control body can be determined according to the location data of the virtual character, and the virtual control body is displayed in the virtual space.
  • the initial location data of the first virtual character may be the location data of the center point of the first virtual character, e.g., the center point of the virtual player; the location of the virtual control body is then determined from the location data of that center point.
  • in step S202, when the virtual control body picks up a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, acquiring the input media content includes the following steps:
  • S401 Acquire real-time location data of a controller associated with the first client.
  • S402 Receive real-time location data of the virtual carrier sent by the application server.
  • the first VR device associated with the first client includes a controller, and the first client acquires location data of the controller.
  • the first VR device further includes a locator, and the position data of the controller is acquired by the locator; the locator can use infrared optical positioning.
  • the locator includes a plurality of infrared-emitting cameras that cover the indoor positioning space; infrared reflection points are placed on the controller, and the position of the controller in the space is determined by capturing the images reflected by the reflection points back to the cameras.
  • the locator sends the acquired real-time location data of the controller to the first client.
  • the locator can also use image tracking: one or more cameras capture the controller, and image processing technology determines the position of the controller.
  • the real-time position data of the controller can be obtained by ultrasonic three-dimensional spatial positioning.
  • an ultrasonic tracker can be set on the controller to emit high-frequency ultrasonic pulses, which are received by three receivers installed on the ceiling of the real space; the distance to each receiver is computed from the signal delay, the real-time position of the controller is determined from those distances, and the receivers send the real-time position data of the controller to the first client.
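The delay-to-distance-to-position step above is standard trilateration: each delay times the speed of sound gives a distance, and the intersection of the three resulting spheres gives the position. The following sketch assumes three receivers on a common ceiling plane and picks the lower of the two candidate intersection points (the controller is below the ceiling); all names are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate room-temperature value

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):   return math.sqrt(dot(a, a))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Classic three-sphere trilateration; returns the lower candidate
    point, since the receivers sit on the ceiling."""
    ex = mul(sub(p2, p1), 1.0 / norm(sub(p2, p1)))
    i = dot(ex, sub(p3, p1))
    ey_dir = sub(sub(p3, p1), mul(ex, i))
    ey = mul(ey_dir, 1.0 / norm(ey_dir))
    ez = cross(ex, ey)
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))
    x = (d1**2 - d2**2 + d**2) / (2 * d)
    y = (d1**2 - d3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(d1**2 - x**2 - y**2, 0.0))
    base = add(p1, add(mul(ex, x), mul(ey, y)))
    cand_a = add(base, mul(ez, z))
    cand_b = add(base, mul(ez, -z))
    return min(cand_a, cand_b, key=lambda p: p[2])  # point below ceiling

# Simulated delays at three ceiling receivers -> distances -> position.
receivers = [(0.0, 0.0, 3.0), (4.0, 0.0, 3.0), (0.0, 4.0, 3.0)]
true_pos = (1.0, 2.0, 1.2)
delays = [norm(sub(true_pos, r)) / SPEED_OF_SOUND for r in receivers]
dists = [t * SPEED_OF_SOUND for t in delays]
pos = trilaterate(*receivers, *dists)
```

With exact delays the recovered position matches the simulated controller position; real measurements would of course carry noise and need filtering.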
  • the location data of the user corresponding to the first client in the real space is also obtained, and the virtual control body associated with the first virtual character in the virtual space is updated according to the change of that location data relative to the initial location data; the initial location data of the first virtual character may be sent by the application server 105 when the user logs in to the first client.
  • the real-time location data of the virtual carrier is obtained from the application server 105.
  • the relative position of the controller with respect to the user may be determined from the obtained location data of the user corresponding to the first client in the real space; based on that relative position, the relative position of the virtual control body with respect to the first virtual character in the virtual space is determined; and from the latter, the location data of the virtual control body in the virtual space is determined.
  • the controller corresponding to the first client is used to control the virtual control body corresponding to the first client; when the controller moves, the virtual control body moves accordingly, so by operating the controller the user controls the virtual control body.
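The relative-position mapping described above can be sketched as a small vector calculation: take the controller's offset from the user in real space, scale it into virtual-space units, and apply it to the virtual character's position. `virtual_hand_position` and the `scale` factor are hypothetical names, not the patent's API:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))

def virtual_hand_position(controller_pos, user_pos, character_pos, scale=1.0):
    """Sketch: apply the controller's offset relative to the user (real
    space) to the virtual character's position to place the virtual
    control body. `scale` is a hypothetical real-to-virtual factor."""
    offset = sub(controller_pos, user_pos)       # controller relative to user
    v_offset = tuple(scale * c for c in offset)  # map into virtual units
    return add(character_pos, v_offset)          # virtual hand position

# Controller 0.5 m ahead of and 0.5 m above the user; character at x=10.
hand = virtual_hand_position((1.5, 1.0, 0.5), (1.0, 1.0, 0.0), (10.0, 0.0, 0.0))
```

When the controller moves, recomputing this mapping each frame moves the virtual hand accordingly, which is the behavior described above.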
  • the first VR device may include a headset 400, as shown in FIG. 4A, which mainly includes a communication component 401, a playback component 402, a display screen 403, and a recording component 404.
  • the playing component 402, the display screen 403, and the recording component 404 are connected to the communication component 401, wherein the display screen 403 is a virtual reality display screen.
  • the first VR device further includes a controller 410, as shown in FIG. 4B.
  • the controller 410 includes an interaction key 411 and a function key 412.
  • when the user clicks the interaction key 411 on the controller 410, the controller sends a first interaction message to the first client in response; according to the first interaction message, the first client updates the location data and/or motion data of the virtual control body and the virtual carrier in the virtual space, so that the virtual control body captures the virtual carrier.
  • skeletal animation data for the grip of the virtual control body is saved in the first client; after the first client receives the first interaction message, the skeletal animation data is invoked, so that in the virtual space the virtual control body makes a gripping action corresponding to the skeletal animation data.
  • after receiving the first interaction message, the first client attaches the virtual carrier to a preset point on the virtual control body; once attached, the virtual carrier follows that point as it moves and rotates.
  • when a virtual carrier (e.g., a virtual voice ball) flies into the virtual control body (e.g., a virtual hand), a flying effect is played on the path between the virtual carrier and the virtual control body.
  • the virtual hand 301 makes an action of holding the ball in the virtual space according to the skeleton animation data of the virtual hand holding the ball.
  • the special effect of the virtual voice ball flying into the palm of the virtual hand is added, so that the virtual voice ball 302 is presented in the virtual reality scene and flies into the virtual hand 301, as shown in FIG. 3B.
  • the method further includes: in response to a second interaction message sent by the controller associated with the first client, updating the location data and motion data of the virtual carrier and the virtual control body in the virtual space, and displaying, according to that location data and motion data, the virtual control body releasing the virtual carrier.
  • when the user makes a throwing action with the controller and presses the interaction key 411, the virtual carrier is released, for example thrown, at that moment in the corresponding virtual space.
  • the first client obtains real-time location data of the controller in the process.
  • after the user presses the interaction key 411, the controller sends a second interaction message to the first client; in response to the second interaction message, the first client updates the location of the virtual control body in the virtual space according to the acquired real-time location data of the controller.
  • the position data and motion data of the virtual carrier at the moment of release are determined according to the acquired real-time position data of the controller.
  • the position of the virtual carrier in the virtual space is then updated according to the position data and motion data of the virtual carrier at the moment of release.
  • the virtual control body throws a virtual carrier, for example, in a VR game scenario, as shown in FIG. 3D, the virtual hand 301 throws a virtual speech ball 302.
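One plausible way to derive the carrier's motion data at the moment of release, consistent with the steps above, is a finite difference over the controller's recent position samples. This is a sketch under that assumption, not the patent's specified method; a real implementation might average several frames to smooth noise:

```python
def release_velocity(prev_pos, prev_t, cur_pos, cur_t):
    """Estimate the carrier's velocity at release from the controller's
    last two position samples (simple finite difference)."""
    dt = cur_t - prev_t
    return tuple((c - p) / dt for c, p in zip(cur_pos, prev_pos))

# Controller moved 0.4 m along x in 0.1 s during the throwing action.
v = release_velocity((0.0, 1.0, 0.0), 0.0, (0.4, 1.0, 0.0), 0.1)
```

The resulting velocity (about 4 m/s along x here) would seed the thrown carrier's trajectory, which the client then reports to the application server as the carrier's position changes.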
  • the virtual space further includes a first virtual character corresponding to the first client; in step S301, receiving the information of the at least one virtual carrier sent by the application server includes: when the spatial relationship between the first virtual character and a virtual carrier in the at least one virtual carrier meets a predetermined condition, the application server sends the information of that virtual carrier to the first client.
  • The virtual character is the object in the virtual space controlled by the user who operates the controller; a user associated with a client corresponds to one virtual character, and the virtual control body in the virtual space may correspond to the hand of the virtual character. For example, in a VR game,
  • one player corresponds to one virtual character in the virtual space
  • the virtual control body corresponds to the virtual player's hand
  • one virtual character corresponds to one client.
  • The application server 105 sends the information of a virtual carrier to a VR client only when the spatial relationship between the virtual character corresponding to that client and the virtual carrier meets the predetermined condition.
  • The predetermined condition may be a predetermined distance range: the application server 105 determines the distance between the location of each virtual carrier and the location of the first virtual character, withholds the information of any virtual carrier whose distance from the first virtual character exceeds the predetermined distance range, and sends to the first client the information of each virtual carrier whose distance from the first virtual character is within the predetermined distance range.
  • the application server 105 stores location data of each virtual character, such as location data of a center point of the virtual character.
  • the application server sends the information of the virtual voice ball to the VR client corresponding to the virtual character only when the spatial position relationship between the virtual voice ball and the virtual character satisfies the predetermined condition.
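The server-side filtering described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation; the carrier record layout (`"pos"` key) and the threshold value are assumptions.

```python
import math

def carriers_to_send(character_pos, carriers, max_dist):
    """Return only the carriers whose distance from the virtual character's
    centre point is within the predetermined range; the rest are withheld."""
    return [c for c in carriers if math.dist(character_pos, c["pos"]) <= max_dist]
```

With a predetermined range of, say, 10 units, a carrier 100 units away would simply not be included in the information sent to that client.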
  • When it is determined, according to the real-time location data of the virtual carrier and the real-time location data of the controller, that the virtual control body collides with the virtual carrier, it is determined that the virtual carrier is selected.
  • Determine the real-time location data of the virtual control body according to the real-time location data of the controller. For example, in the VR game scenario shown in FIG. 3A, the real-time location data of the virtual hand 301 in the virtual space is determined according to the real-time location data of the controller.
  • the virtual control body is a virtual hand 301 of the virtual character
  • the virtual carrier is a virtual voice ball 302.
  • When the virtual hand 301 collides with the virtual voice ball 302, as determined according to the position data of the virtual hand and the position data of the virtual voice ball, it is determined that the virtual carrier is selected.
  • For the collision determination, a method such as ray detection, a volume sweep, an overlap test, or the like can be employed.
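As one concrete instance of the overlap test mentioned above, the hand and the voice ball can each be approximated by a bounding sphere; they collide when the distance between the centres does not exceed the sum of the radii. This is a generic sketch, with radii chosen arbitrarily, not the patent's specific test.

```python
import math

def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """Sphere-sphere overlap test: collision iff the centre distance
    is at most the sum of the two radii."""
    return math.dist(center_a, center_b) <= radius_a + radius_b
```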
  • the virtual space further includes a virtual ray associated with the virtual control body, the virtual ray being emitted from the virtual control body; the method further comprising the steps of:
  • S11 Determine real-time location data of the virtual ray in the virtual space according to the real-time location data of the controller.
  • The virtual ray is emitted from the virtual control body, and the real-time position data of the virtual control body is determined according to the acquired real-time position data of the controller. Since the virtual ray is emitted by the virtual control body, the real-time location data of the virtual ray can be determined according to the real-time position data of the virtual control body.
  • When the virtual ray collides with the virtual carrier, it is determined that the virtual carrier is selected.
  • the virtual hand 301 emits a virtual ray 304, and when the virtual ray 304 collides with the virtual speech ball 302, it is determined that the virtual speech ball is selected.
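The ray-versus-carrier collision in steps S11 and above can be sketched with a standard closest-point ray/sphere test. This is a generic illustration assuming the carrier is modelled as a sphere; it is not taken from the patent text.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Ray/sphere intersection via the closest-point test: project the sphere
    centre onto the ray and compare the perpendicular distance to the radius."""
    oc = tuple(c - o for o, c in zip(origin, center))   # origin -> centre
    d_len = math.sqrt(sum(d * d for d in direction))
    d = tuple(x / d_len for x in direction)             # normalised direction
    t = sum(a * b for a, b in zip(oc, d))               # projection length
    if t < 0:                                           # sphere behind the ray
        return False
    closest = tuple(o + t * x for o, x in zip(origin, d))
    return math.dist(closest, center) <= radius
```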
  • When step 202, obtaining the input media content, is performed, the following steps are included:
  • S21: In response to the first function message sent by the controller associated with the first client, start receiving externally input media data through the associated data collection device; in response to the second function message sent from the controller, stop receiving media data.
  • the controller sends the first function information to the first client.
  • the first client receives the first function message
  • The input media data is collected, for example, by a recording component 404 on the headset device 400 associated with the first client.
  • the first client sends a control message to the headset device 400, so that the headset device 400 turns on the recording component, starts to listen to the voice input by the user, and the user can record a speech or sing a song by himself.
  • the recording component can be a microphone.
  • the first client includes a voice real-time transmission component that cooperates with the recording component 404.
  • A real-time voice transmission third-party library captures the voice from the recording component 404, such as the voice in the microphone, in real time in each frame, converts it into the PCM data format, and saves it in the cache of the terminal where the first client is located.
  • the controller sends the second function information to the first client.
  • The first client receives the second function message and stops receiving voice data.
  • the first client sends a control message to the headset device 400, causing the headset device 400 to turn off the recording component and stop recording.
  • the voice real-time transmission component in the first client captures each frame of PCM data in the recording component 404 in real time, and forms voice data according to the captured PCM data of each frame.
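The per-frame capture-and-assemble flow described above can be sketched as follows. This is a minimal illustration: the class name and method names are hypothetical, and the real component would pull chunks from a microphone driver rather than receive them as arguments.

```python
class VoiceCapture:
    """Accumulates one PCM chunk per frame; on stop, the chunks are
    joined into the final voice data (the content of the PCM file)."""

    def __init__(self):
        self.frames = []

    def on_frame(self, pcm_chunk: bytes):
        self.frames.append(pcm_chunk)    # saved in the client's cache

    def stop(self) -> bytes:
        return b"".join(self.frames)     # assembled voice data
```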
  • When updating the location data and motion data of the virtual carrier and the virtual control body in the virtual space in response to the second interaction message sent by the controller associated with the first client, the following steps are included:
  • S31: In response to the second interaction message, determine the motion trajectory and initial motion data of the virtual carrier according to the real-time location data of the controller, and update the real-time location data and motion data of the virtual control body.
  • The user can make a throwing action while holding the controller so that the virtual control body in the virtual space releases the virtual carrier. For example, the user presses the interactive key 411 on the controller while making the throwing action; at this moment, the virtual carrier is thrown in the corresponding virtual space.
  • Before the user presses the interaction key 411, the first client obtains the real-time location data of the controller and determines the real-time location data of the virtual control body in the virtual space according to it. Since the virtual control body holds the virtual carrier, the real-time location data of the virtual carrier is determined according to the real-time location data of the virtual control body.
  • the instantaneous speed and direction when the virtual carrier is thrown are determined according to the real-time position of the previous frame when the virtual carrier is thrown, the real-time position of the current frame, and the time interval between the previous frame and the current frame.
  • the motion trajectory of the virtual carrier is determined according to the gravity acceleration of the virtual carrier, and the real-time position of the virtual carrier in the virtual space is updated according to the instantaneous speed, the direction, the gravity acceleration and the motion trajectory.
  • S32 Update real-time location data of the virtual carrier according to the motion trajectory and initial motion data.
  • When displaying, according to the location data and the motion data of the virtual carrier and the virtual control body in the virtual space, the virtual control body releasing the virtual carrier, the method includes the following step:
  • S33: Display, according to the real-time location data and motion data of the virtual control body and the real-time location data of the virtual carrier, the virtual carrier moving along the motion track.
  • Determine, according to the motion trajectory of the virtual carrier and the initial motion data acquired in step S31, the real-time location data of the virtual carrier in each frame of the virtual reality image, so that when the first client sends the virtual reality image to the headset device 400, the virtual carrier moves along the motion trajectory.
  • the initial motion data includes an instantaneous speed, a direction, and a gravitational acceleration when the virtual carrier is released.
  • As shown in FIG. 3D, after the virtual speech ball 302 is thrown, the virtual speech ball 302 moves along the trajectory shown in FIG. 3D.
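The release-velocity and trajectory computation described above (instantaneous speed from the last two tracked frames, then motion under gravity) can be sketched as follows. This is an illustrative sketch; the coordinate convention (z up) and the gravity constant are assumptions, not specified by the patent.

```python
GRAVITY = -9.8  # m/s^2 along z (assumed axis convention)

def release_velocity(prev_pos, curr_pos, dt):
    """Instantaneous velocity at release: displacement of the controller
    between the previous frame and the current frame, divided by the
    time interval between the two frames."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))

def position_at(release_pos, velocity, t):
    """Projectile position t seconds after release, under gravity only;
    each frame, the virtual carrier's position is updated along this curve."""
    x = release_pos[0] + velocity[0] * t
    y = release_pos[1] + velocity[1] * t
    z = release_pos[2] + velocity[2] * t + 0.5 * GRAVITY * t * t
    return (x, y, z)
```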
  • the media content sending method 500 provided by the present application, as shown in FIG. 5, further includes the following steps:
  • The state of the virtual carrier is the first state, that is, the initial state; in this case, the input media data associated with the virtual carrier is received.
  • The state of the virtual carrier is the second state, that is, the state of having associated media data; in this case, the media content associated with the virtual carrier is obtained.
  • the network synchronization unit in the first client for example, the module for network synchronization in the Unreal Engine, sends a media content request message to the data content server, where the media content request message carries the identifier of the virtual carrier.
  • The data content server stores the media content corresponding to the identifier of the virtual carrier and searches for the corresponding media content according to that identifier. For example, in the VR game scenario shown in FIG. 3A, when the virtual voice ball is selected and its state is the second state, the network synchronization unit in the first client requests from the data content server 106 the voice data carried by the virtual voice ball.
  • the virtual voice ball may also carry other media data such as video data, and the data content server 106 may be a CDN server.
  • S502 Receive the media content that is sent by the data content server in response to the media content request message.
  • The media content sending method provided by the present application further includes the following step: when the virtual control body captures a virtual carrier in the at least one virtual carrier, if the state of the virtual carrier is the second state, search for the media content associated with the identifier of the virtual carrier according to that identifier.
  • the state of the virtual carrier is the second state, that is, the state of the associated media data.
  • The media content associated with the virtual carrier is first searched for locally. If the first client has previously selected the voice ball, the media content associated with the virtual carrier has already been requested from the data content server 106 and saved in the local cache.
  • If another client has captured the virtual carrier and played the media content associated with it in non-private playback, the first client likewise requests the media content associated with the virtual carrier from the data content server 106 according to the identifier of the virtual carrier, plays it, and saves the requested media content in the local cache. Therefore, when obtaining the media content associated with the virtual carrier, the search is performed in the local cache first, and the data content server is requested only when there is no local copy, which improves the response speed.
  • the process 600 for acquiring the media content associated with the virtual voice ball mainly includes the following steps:
  • Step S601: The network synchronization unit in the first client, such as the module for network synchronization in the Unreal Engine, acquires the voice data associated with the virtual voice ball; the voice data is saved in the form of a PCM file.
  • Step S602 The network synchronization unit first searches for the PCM file in the local cache. When the PCM file exists in the local cache, step S603 is performed, otherwise step S604 is performed.
  • Step S603 Acquire the PCM file in a local cache.
  • Step S604: Request the PCM file from the CDN server; specifically, the PCM file is requested from the CDN server according to the identifier of the virtual voice ball, and the CDN server acquires the PCM file according to the identifier of the virtual voice ball and returns it to the network synchronization unit of the first client.
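Steps S601 to S604 describe a cache-first lookup with a CDN fallback, which can be sketched as below. This is an illustrative sketch; `fetch_from_cdn` is a hypothetical stand-in for the real network request.

```python
local_cache = {}  # voice-ball identifier -> PCM bytes

def get_voice_pcm(ball_id, fetch_from_cdn):
    """S602/S603: return the PCM file from the local cache on a hit;
    S604: otherwise request it from the CDN by identifier and cache it."""
    if ball_id in local_cache:
        return local_cache[ball_id]
    pcm = fetch_from_cdn(ball_id)
    local_cache[ball_id] = pcm       # faster response on later requests
    return pcm
```

The second request for the same ball never touches the network, which is the response-speed gain the text describes.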
  • The media content sending method provided by the present application further includes the following steps: when it is determined that the virtual carrier is captured, play the media content in response to the third function message sent by the controller, and stop playing the media content in response to the fourth function message sent by the controller.
  • the media content associated with the virtual carrier has been acquired.
  • When the user presses the function key 412 on the controller 410, the controller sends the third function message to the first client, and the first client starts playing the media content in response to the third function message. When the user releases the function key 412 on the controller 410, the controller sends the fourth function message to the first client, and the first client stops playing the media content in response to the fourth function message.
  • In the media content sending method provided by the present application, where the media content is voice, the method further includes: when the voice is played, converting the voice into 3D voice for playing, which comprises the following steps:
  • S41 Acquire real-time location data of the headset associated with the first client.
  • For acquiring the real-time location data of the headset associated with the first client, refer to the manner of obtaining the real-time location data of the controller in step 203; the infrared optical positioning mode, the image tracking and positioning mode, or the ultrasonic tracking and positioning mode may be adopted, and details are not described here again.
  • S42 Determine real-time location data of a header of the first virtual character associated with the first client in the virtual space according to the real-time location data of the headset.
  • The location data of the character corresponding to the first client in the real space may be determined in the same manner, and the relative position data of the worn headset device with respect to the character is obtained; according to this relative position data, the relative position data of the head of the first virtual character in the virtual space with respect to the first virtual character is determined. The initial location data of the virtual character may be sent by the application server 105 when the user logs in to the first client, so that the real-time location data of the head of the virtual character can be determined. The head of the first virtual character corresponds to the head-mounted device in the real space.
  • S43 Acquire real-time location data of the controller associated with the first client, and determine real-time location data of the virtual carrier according to real-time location data of the controller.
  • S44 Determine a real-time distance and a real-time direction of the virtual carrier relative to a head of the first virtual character according to real-time location data of a header of the first virtual character and real-time location data of the virtual carrier.
  • the voice is the voice emitted from the virtual carrier.
  • The played voice is the voice emitted by the virtual voice ball: when the virtual voice ball is farther from the head of the first virtual character, the heard sound is relatively smaller; when it is closer, the heard sound is relatively larger.
  • When the virtual speech ball is closer to the left ear of the first virtual character, the sound heard by the left ear is relatively louder; when the virtual speech ball is closer to the right ear of the first virtual character, the sound heard by the right ear is relatively louder.
  • S45 Convert the voice into a voice with multi-dimensional spatial sound effects according to the real-time distance and the real-time direction.
  • This voice with multi-dimensional spatial sound effects can be three-dimensional spatial sound, referred to as 3D speech.
  • the first client includes a 3D voice playing component, such as an Audio Component that plays 3D sound in the Unreal Engine UE4, to convert the voice into 3D voice.
  • the voice data in the PCM file is filled into the Audio Component every frame, and the real-time distance and the real-time direction determined in step S44 are input into the Audio Component to generate 3D voice data of the left and right ears and played.
  • the headset device 400 includes a playback component 402, such as a headset, that transmits the 3D voice to the headset for playback to a user.
  • The first client is developed based on the UE4 engine and uses the development mode of an Actor combined with components. If there is other multimedia information to play, the components can be replaced to play different multimedia data; for example, if the voice ball is changed to a video ball, the Audio Component can be replaced with a component that supports video playback.
  • the 3D speech of the left and right ears generated in step S45 is played through the playback component associated with the first client, for example, by the playback component 402 in the headset 400.
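The distance-and-direction-to-ear-volume behaviour described in steps S44 and S45 can be sketched with a simple attenuation-plus-panning model. This is a rough illustration of the qualitative rules above (closer is louder, the nearer ear is louder), not the actual algorithm inside UE4's Audio Component; the reference distance and the linear pan law are assumptions.

```python
import math

def ear_gains(head_pos, head_right, source_pos, ref_dist=1.0):
    """Left/right ear gains: overall volume falls off with distance, and the
    ear on the source's side is weighted more heavily. `head_right` is the
    unit vector pointing out of the listener's right ear."""
    offset = tuple(s - h for h, s in zip(head_pos, source_pos))
    dist = math.sqrt(sum(x * x for x in offset)) or 1e-6
    volume = min(1.0, ref_dist / dist)                            # closer -> louder
    side = sum(a * b for a, b in zip(offset, head_right)) / dist  # -1 (left) .. 1 (right)
    right = volume * (0.5 + 0.5 * side)
    left = volume * (0.5 - 0.5 * side)
    return left, right
```

A source two metres to the right yields a half-volume signal entirely in the right ear; a source directly ahead is heard equally in both ears.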
  • The media content sending method provided by the present application further includes: when the first client plays the 3D voice, if the virtual carrier is far away from the head of the first virtual character, non-private play is performed, so that the characters corresponding to other clients can also hear the 3D voice. This includes the following steps:
  • Step S51 Acquire real-time location data of the headset associated with the first client.
  • Step S52 Determine real-time location data of the head of the first virtual character in the virtual space according to the real-time location data of the headset.
  • Steps S51-S52 are the same as steps S41-S42, and are not described herein again.
  • Step S53: When the real-time location data of the head of the first virtual character and the real-time location data of the virtual carrier meet a preset condition, notify the application server to set the state of the virtual carrier to a third state. The application server sends the status of the virtual carrier to one or more second clients, where the spatial relationship between the virtual character corresponding to each of the one or more second clients and the first virtual character satisfies a predetermined condition, and each second client acquires and plays the media content associated with the virtual carrier in response to the received status of the virtual carrier.
  • The predetermined condition may be a preset distance threshold. When the distance between the real-time position of the head of the first virtual character and the real-time position of the virtual carrier is within the distance threshold, the virtual carrier is relatively close to the head of the first virtual character, and the voice is played privately. In the case of private play, only the user corresponding to the first virtual character can hear the 3D voice.
  • When the distance between the real-time position of the head of the first virtual character and the real-time position of the virtual carrier exceeds the distance threshold, the virtual carrier is farther away from the head of the first virtual character, and the voice is played publicly. In the case of public play, the users corresponding to the other virtual characters can also hear the 3D voice, or the users corresponding to the other virtual characters within a certain distance of the virtual carrier can also hear it.
  • the second client can go to the CDN server to request the PCM file of the voice associated with the virtual carrier according to the identifier of the virtual carrier and play it.
  • When playing the PCM file, it can also be converted into 3D voice for playing: the voice data in the PCM file is filled into the sound component every frame, for example, into the component of the Unreal Engine UE4 that plays 3D sound (Audio Component), and the real-time distance and real-time direction between the virtual voice ball and the head of the second virtual character are input into the sound component, thereby generating the 3D voice data for the left and right ears, which is then played.
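The private-versus-public decision of step S53 reduces to a single distance comparison, which can be sketched as below. This is an illustrative sketch; the threshold value is an arbitrary assumption, not a figure from the patent.

```python
import math

PRIVATE_DISTANCE = 0.5  # metres; assumed threshold for this sketch

def playback_mode(head_pos, carrier_pos, threshold=PRIVATE_DISTANCE):
    """Within the threshold the voice plays privately (only the first user
    hears it); beyond it the carrier enters the third state and nearby
    second clients also fetch and play the content."""
    if math.dist(head_pos, carrier_pos) <= threshold:
        return "private"
    return "public"
```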
  • the media content sending method 700 further includes the following steps:
  • S701 Listen to a predetermined gesture operation of the controller.
  • The user can not only play the media content associated with the virtual carrier, but also interact with that media content, for example, praising the voice delivered by a virtual voice ball in a VR game.
  • The user can operate the controller to interact with the media content carried by the virtual carrier. For example, for the virtual voice ball in the VR game, when the user shakes the controller, the voice carried by the virtual voice ball is praised; for this purpose, the first client monitors the state of the controller.
  • The handle event driving module in the first client queries the vibration state of the handle every frame and determines whether the handle performs a "shake" gesture according to the vibration state of the handle.
  • S702 Generate interactive information when the predetermined gesture operation of the controller is monitored.
  • When the predetermined gesture operation of the controller is monitored, it indicates that the user wants to interact with the media content carried by the virtual carrier, and the first client generates the interaction information, where the interaction information includes the identifier of the user associated with the first client and the identifier of the virtual carrier.
  • S703: Send the interaction information to the application server, so that the application server updates the interaction information of the virtual carrier accordingly and, when the spatial relationship between a second virtual character corresponding to a second client and the virtual carrier in the virtual space meets the preset condition, sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected.
  • The first client sends the interaction information generated in step S702 to the application server, so that the application server updates the interaction information of the virtual carrier accordingly. The update process includes adding 1 to the number of interactions corresponding to the virtual carrier and recording the identifier of the user associated with the first client as the user who most recently interacted with the virtual carrier. For example, in the VR game scenario, after the first client generates the like information for the virtual voice ball, the interaction information is sent to the application server; the application server adds 1 to the number of likes of the virtual voice ball and records the identifier of the user corresponding to the first client as the user who most recently liked the virtual voice ball.
  • The application server sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when determining that the virtual carrier is selected.
  • The interaction information includes the praise icon 802 and the number of likes of the virtual voice ball.
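The server-side update in step S703 (increment the interaction count, record the most recent user) can be sketched in a few lines. The storage layout and names here are hypothetical; a real application server would persist this rather than hold it in a dict.

```python
interactions = {}  # carrier identifier -> {"likes": int, "last_user": str | None}

def record_like(carrier_id, user_id):
    """Add 1 to the carrier's interaction count and record the identifier
    of the user who most recently interacted with it."""
    entry = interactions.setdefault(carrier_id, {"likes": 0, "last_user": None})
    entry["likes"] += 1
    entry["last_user"] = user_id
    return entry
```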
  • When performing step S701, monitoring the predetermined gesture operation of the controller, the determination after the vibration of the controller is detected is as follows: if, during the playing of a first preset number of image frames, the number of vibrations of the controller is monitored to satisfy a first preset condition, and then, during the playing of a second preset number of image frames, the number of vibrations of the controller is monitored to satisfy a second preset condition, it is determined that the predetermined gesture operation of the controller is monitored.
  • Further, only when the time interval from the moment the number of vibrations of the controller is monitored to satisfy the first preset condition until the moment it is monitored to satisfy the second preset condition exceeds a predetermined length of time is it determined that the predetermined gesture operation of the controller is monitored.
  • The process 900 of monitoring the predetermined gesture operation "shake" of the controller is shown in FIG. 9.
  • the controller is a handle
  • the virtual carrier is a virtual voice ball in the VR game scene shown in FIG. 3A.
  • By shaking the handle, the media content carried by the virtual voice ball, such as voice, can be liked and interacted with.
  • the process mainly includes the following steps:
  • Step S901: The user operates the handle associated with the first client; when the user wants to praise the voice ball, the user shakes the handle to perform the like.
  • Step S902 Query the vibration state of the handle of the i-th frame.
  • Each time the VR scene is updated by one frame, the VR underlying driver in the first client can query the vibration state of the handle. The vibration state of the handle is detected by a sensor on the handle, which transmits the result to the VR client; the VR client converts the detection result of the sensor into the vibration state of the handle and saves it in its own buffer area, and the VR underlying driver queries the vibration state of the handle from the corresponding buffer area of the VR client.
  • Step S903: When the vibration state of the handle is detected (for example, the vibration state parameter is "True"), that is, when the state of the handle in the i-th frame is vibration, step S904 is performed; otherwise, step S905 is performed, i is assigned i+1, and the process returns to step S902 to query the state of the handle in the next frame.
  • Step S904: After the vibration state of the handle is queried, that is, when the state of the handle in the i-th frame is vibration, query the state of the handle in the subsequent n1 frames, that is, query the vibration state of the handle from the (i+1)-th frame to the (i+n1)-th frame, in the same manner as the query in step S902.
  • The VR underlying driver queries the vibration state of the handle from the buffer area corresponding to the VR client.
  • the VR client simultaneously counts the number of times the vibration state of the handle is queried in the n1 frames, such as the number of times the vibration state parameter is "True".
  • During the vibration, the user may merely be moving the handle or may be starting to shake it, so only when the number of handle vibrations queried within the predetermined frames exceeds a predetermined value is it determined that the user has started to shake the handle.
  • Step S905 When the state of the i-th frame handle is not vibration, i is assigned to i+1, and then returns to step S902 to query the state of the next frame handle.
  • Step S907 Assign i to i+n1, and execute step S909.
  • Step S908 Assign i to i+n1+1, and then return to step S902 to query the state of the next frame handle.
  • Step S910: Determine whether the number of times the handle vibrates from the (i+1)-th frame to the (i+n3)-th frame is less than n4. If it is less than n4, the user has started to stop shaking the handle, and step S911 is performed; otherwise, the user has not stopped shaking the handle, and step S912 is performed.
  • That is, the time from when the handle starts to shake until it stops shaking.
  • If not, step S914 is performed: the handle gesture determination fails, and it is determined that the user has not performed a "shake" operation on the handle. Otherwise, step S913 is performed: it is determined that the user has performed a "shake" operation on the handle. In this step, it is determined that the user has performed the "shake" only when the user shakes the handle for more than the predetermined time.
  • Step S912: In step S910, the number of times the handle is detected as vibrating within the n3 frames being greater than or equal to n4 indicates that the user has not stopped shaking the handle. At this time, step S912 is performed: the value of i is assigned i+n3, and the process returns to step S909 to query the vibration state of the handle in the subsequent n3 frames.
  • Step S913: It is determined that the user performs a "shake" operation on the handle. After the current user performs the "shake" gesture on the voice ball, it is determined that the voice ball has been praised and interacted with, and the recognition of the gesture "shake" is no longer performed for it.
  • Step S914 The gesture "shake” determination fails, and it is determined that the user does not perform a "shake” operation on the handle.
  • Step S915: After the gesture determination ends, the recognition of the gesture "shake" continues. In this step, the value of i is assigned i+n3+1, and the process returns to step S902 to continue querying the state of the handle for the subsequent recognition of the gesture "shake".
  • The frame counts n1, n3 and the thresholds n2, n4 can be preset and can be set according to experience.
  • In this way, the recognition of the shake gesture is relatively stable, thereby greatly reducing the occurrence of erroneous operations.
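The frame-windowed shake recognition of process 900 can be condensed into the following sketch: a shake starts when enough of a window of frames report vibration, continues while later windows still do, and counts only if it lasted long enough. This is a simplified illustration of the scheme above over a recorded sequence of per-frame flags, not the exact S901-S915 state machine (in particular, the restart bookkeeping is collapsed); parameter names mirror n1-n4.

```python
def detect_shake(vibrating, n1, n2, n3, n4, min_frames):
    """vibrating: list of per-frame booleans (handle vibrated this frame).
    Start: at least n2 vibrating frames in the n1 frames after a vibration.
    Continue: each subsequent n3-frame window has at least n4 vibrations.
    Recognised only if the shake lasted at least min_frames frames."""
    i = 0
    while i < len(vibrating):
        if not vibrating[i]:                     # S902/S905: no vibration yet
            i += 1
            continue
        if sum(vibrating[i + 1:i + 1 + n1]) < n2:  # S904: not a real shake start
            i += n1 + 1
            continue
        start = i                                # shaking confirmed
        i += n1
        while i + n3 <= len(vibrating) and sum(vibrating[i:i + n3]) >= n4:
            i += n3                              # S910/S912: still shaking
        if i - start >= min_frames:              # S911: duration check
            return True                          # S913: gesture recognised
        i += 1                                   # S914/S915: keep scanning
    return False
```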
  • the application further provides a media content sending method 1000, which is applied to the application server 105, as shown in FIG. 10, and includes the following steps:
  • S1001: Send the information of at least one virtual carrier and the information of the virtual control body associated with the first client to the first client, where the information of the at least one virtual carrier includes the identifier, the status, and the real-time location data in the virtual space of each virtual carrier, and the information of the virtual control body includes the initial location information of the virtual control body, so that the first client displays the virtual space based on the information of the at least one virtual carrier and the information of the virtual control body; when the virtual control body captures a virtual carrier in the at least one virtual carrier and the state of the virtual carrier is the first state, the input media content is obtained, and the media content and the identifier of the virtual carrier are sent to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier.
  • This step corresponds to the terminal-side steps in which the application server sends the information of the virtual carrier and the information of the virtual control body to the first client and the first client sends the media content associated with the virtual carrier and the identifier of the virtual carrier to the data content server, and details are not repeated here.
  • S1002 Receive a notification message sent by the first client, and set a state of the virtual carrier to a second state according to the notification message.
  • The first client sends a notification message to the application server, where the notification message is used to cause the application server to set the state of the virtual carrier to the second state.
  • For example, when the virtual carrier is a voice ball: after the voice ball is recorded, its state changes, and the first client notifies the application server 105, which sets the state of the voice ball to the second state, i.e., the recorded state. The color of a recorded virtual voice ball differs from the color of a virtual voice ball in the initial state; for example, it may be red.
  • When the first client communicates with the application server 105 and the data content server 106, it can do so through a network synchronization unit, such as the module for network synchronization in the Unreal Engine.
  • S1003 Send information of the virtual carrier to the second client, so that the second client acquires media content associated with the identifier of the virtual carrier from the data content server.
  • Specifically, the application server 105 updates the state and identifier of the voice ball to the second client in real time. When the player corresponding to the second client selects the voice ball and the state of the voice ball is the second state, the second client requests, based on the identifier of the voice ball, the media content associated with the voice ball from the CDN server. The transfer and exchange of media content is thus realized through the voice ball.
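The fetch-by-identifier behavior of this step can be sketched minimally as follows; the dictionary stands in for the CDN server's store, and all names are illustrative rather than the patent's actual API:

```python
FIRST_STATE, SECOND_STATE = 0, 1  # initial state, recorded state

cdn_store = {}  # voice ball identifier -> associated media content

def fetch_media(ball_id, ball_state):
    """Second-client logic: only a voice ball in the second (recorded)
    state has media content associated with its identifier on the CDN."""
    if ball_state != SECOND_STATE:
        return None  # initial-state balls carry no media content yet
    return cdn_store.get(ball_id)

# A recorded ball's PCM voice data, keyed by the ball's identifier.
cdn_store["ball-42"] = b"pcm-voice-data"
```

In the real system the lookup would be a network request carrying the ball's identifier; the state check mirrors the condition described above.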
  • When the virtual carrier is captured, the media content is input, the media content is uploaded to the data content server, the virtual carrier is associated with the media content, and the virtual carrier is released in the virtual space.
  • The media content associated with the virtual carrier can then be obtained from the data content server for subsequent playback.
  • The virtual carrier in the virtual space thus serves as a carrier of the media content, and the media content associated with the virtual carrier is transferred among multiple clients as the virtual carrier is transferred in the virtual space.
  • The interaction with the media content has a 3D stereoscopic immersive feel, and associating the media content with a virtual carrier in the virtual space enhances the realism of VR social interaction.
  • the method for sending media content provided by the present application further includes the following steps:
  • S61 Receive interaction information and the identifier of the virtual carrier sent by the first client, where the first client generates and sends the interaction information when it determines that the virtual carrier is captured and detects a predetermined gesture operation of the controller associated with the first client.
  • The user can not only play the media content associated with the virtual carrier but also interact with it, such as liking the voice delivered by a virtual voice ball in a VR game.
  • The user can operate the controller to interact with the media content carried by the virtual carrier. For example, for the virtual voice ball in the VR game, when the user "shakes the controller", the voice carried by the virtual voice ball is liked; to this end, the first client monitors the state of the controller.
  • The handle event driving module in the first client polls the vibration state of the handle every frame and determines whether a "shake" gesture has occurred according to that vibration state.
  • When the predetermined gesture operation of the controller is detected, the user is taken to be interacting with the media content carried by the virtual carrier, and the first client generates the interaction information, which includes the identifier of the user associated with the first client and the identifier of the virtual carrier.
  • S62 Update interaction information of the virtual carrier according to the interaction information, and associate interaction information of the virtual carrier with an identifier of the virtual carrier.
  • The first client sends the interaction information generated in step S61 to the application server, so that the application server updates the interaction information of the virtual carrier accordingly. The update process includes adding 1 to the interaction count corresponding to the virtual carrier and recording the user associated with the first client as the user who most recently interacted with the virtual carrier. For example, in the VR game scenario, after the first client generates the like information for the virtual voice ball, the interaction information is sent to the application server; the application server adds 1 to the like count of the virtual voice ball and records the identifier of the user corresponding to the first client as the user who most recently liked the virtual voice ball.
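The server-side update in step S62 amounts to a counter increment plus a last-user record, which can be sketched as follows (a hypothetical data layout, not the actual application-server implementation):

```python
# carrier identifier -> its interaction information
interactions = {}

def apply_interaction(carrier_id, user_id):
    """Add 1 to the carrier's interaction (like) count and record the
    user who most recently interacted with it, as described in S62."""
    info = interactions.setdefault(carrier_id,
                                   {"count": 0, "last_user": None})
    info["count"] += 1
    info["last_user"] = user_id
    return info
```

The resulting record is what the application server would later push to the second client for display (the like count and the most recent liker's identifier).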
  • The application server sends the interaction information of the virtual carrier to the second client, so that the second client displays the interaction information of the virtual carrier in the virtual space when it determines that the virtual carrier is selected.
  • The interaction information includes a like icon 802, the like count 803 of the virtual voice ball, and the identifier 801 of the user who most recently liked the virtual voice ball.
  • In the following example, the first client is a VR game client, the application server is a VR game server, and the virtual carrier is a virtual voice ball. The message interaction 1100 for relaying a voice message through the virtual voice ball, as shown in FIG. 11, mainly includes the following steps:
  • Step S1101 The first client sends an APP login request to the application server.
  • Step S1102 The application server returns information of the virtual voice ball and information of the virtual characters in the virtual space to the first client, where the information of the virtual voice ball includes its state, identifier, and real-time location data. The state of the voice ball includes an initial state, a recorded state, and a play state, and the information of a virtual character includes the real-time location data of the virtual character.
  • Step S1103 The first client generates a virtual reality image according to real-time location data of the virtual voice ball and real-time location data of the virtual character.
  • Step S1104 Send the generated virtual reality image to the display of the head-mounted device.
  • Step S1105 Acquire real-time position data of the handle.
  • Step S1106 Determine real-time position data of the virtual control body according to the real-time position data of the handle, and determine real-time position data of the virtual ray according to the real-time position data of the virtual control body.
  • Step S1107 Determine whether the virtual voice ball is selected according to the real-time position data of the virtual voice ball and the real-time position data of the virtual control body, or according to the real-time position data of the virtual voice ball and the real-time position data of the virtual ray.
  • Step S1108 After selecting the virtual voice ball, the user clicks the interaction key on the handle, and the handle sends the first interaction message to the first client.
  • Step S1109 After receiving the first interaction message, the first client updates the location data and the motion data of the virtual voice ball and the virtual control body in the virtual space, so that the virtual control body captures the virtual voice ball.
  • the virtual control body corresponds to the virtual hand of the first virtual character.
  • Steps S1110-S1116 cover the case in which the state of the virtual voice ball is the first state, that is, associating the virtual voice ball with voice data.
  • Steps S1117-S1121 cover liking and interacting with the voice ball when its state is the second state, that is, the recorded state.
  • Steps S1122-S1128 cover acquiring and playing the voice associated with the virtual voice ball when its state is the second state, that is, the recorded state. The steps are described in detail below.
  • Step S1110 When the state of the virtual voice ball is the first state, that is, the initial state, the user presses a function key on the handle, and the handle sends a first function message to the first client.
  • Step S1112 The first client responds to the first function message, and the recording component in the first client, such as the real-time voice transmission component Apollo voice, captures the voice recorded by the microphone of the headset in real time and converts it into the PCM data format.
  • Step S1113 The user releases the function key on the handle, and the handle sends a second function message to the first client.
  • Step S1114 The first client sends the acquired PCM voice data and the identifier of the virtual voice ball to the CDN server, and the CDN server saves the PCM voice data in association with the identifier of the virtual voice ball.
  • Step S1115 After the virtual voice ball is associated with the voice data, the state of the virtual voice ball changes, and the first client notifies the application server to change the state of the virtual voice ball.
  • Step S1116 The application server sets the state of the virtual voice ball to the second state, and sends the state of the virtual voice ball to the second client.
  • Step S1117 When the state of the virtual voice ball is the second state, the handle event driving module of the first client polls the vibration state of the handle every frame.
  • Step S1118 Determine whether a "shake" gesture of the handle has occurred according to the vibration state of the handle.
  • Step S1119 When the "shake" gesture of the handle is detected, the interaction information is generated.
  • Step S1120 The first client sends an interaction message to the application server.
  • Step S1121 The application server updates the interaction information of the virtual voice ball, which mainly includes adding 1 to the like count of the voice ball and updating the identifier of the user who most recently liked the voice ball.
  • The application server sends the interaction information of the virtual voice ball to the second client, and the second client displays the interaction information of the virtual voice ball in the virtual space.
  • Step S1122 When the virtual voice ball is selected and its state is the second state, the first client requests the voice data associated with the virtual voice ball from the CDN server according to the identifier of the virtual voice ball.
  • Step S1123 The CDN server returns the voice data associated with the virtual voice ball.
  • Step S1124 When the virtual control body captures the virtual voice ball in the first client, the user presses a function key on the handle, and the handle sends a first function message to the first client.
  • Step S1125 The first client acquires real-time location data of the handle in response to the first function message.
  • Step S1126 The first client determines real-time location data of the virtual control body according to the real-time location data of the handle.
  • The 3D voice playback component in the first client, for example an Audio Component, converts the voice into 3D voice and plays it according to the real-time location data of the virtual control body and the real-time location data of the virtual voice ball.
  • The head-mounted device associated with the first client has earphones, and the converted 3D voice can be output through the earphones of the head-mounted device.
  • Step S1127 When the user releases the function key on the handle, the handle sends a second function message to the first client.
  • Step S1128 The first client stops playing the 3D voice in response to the second function message.
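The distance-and-direction computation that feeds the 3D voice playback in steps S1125-S1126 above can be sketched as follows (an illustrative Python sketch, not the Unreal Engine Audio Component itself; position tuples and the function name are assumptions for illustration):

```python
import math

def relative_audio_params(head_pos, ball_pos):
    """Real-time distance and unit direction of the voice ball relative
    to the listener's head -- the two quantities a 3D audio component
    needs to spatialize the voice. Positions are (x, y, z) tuples."""
    dx, dy, dz = (b - h for b, h in zip(ball_pos, head_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    if distance == 0.0:
        return 0.0, (0.0, 0.0, 0.0)  # ball at the head: no direction
    return distance, (dx / distance, dy / distance, dz / distance)
```

A spatializer would then attenuate volume with the distance and pan the sound along the direction vector, recomputed every frame as the ball and head move.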
  • the present application further provides a media content sending apparatus 1200, which is applied to a first client.
  • the apparatus includes:
  • the display module 1201 is configured to display a virtual space, where the virtual space includes at least one virtual carrier and a virtual control body corresponding to the first client;
  • The media content sending module 1202 is configured to: when the virtual control body captures a virtual carrier among the at least one virtual carrier and the state of the virtual carrier is the first state, acquire the input media content, send the media content and the identifier of the virtual carrier to the data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to the second state;
  • the release module 1203 is configured to control, by the virtual control body, the virtual carrier to be in a released state.
  • the display module 1201 is further configured to:
  • receive the information of the at least one virtual carrier and the information of the virtual control body sent by the application server, where the information of the at least one virtual carrier includes: the identifier and status of each virtual carrier and its real-time location data in the virtual space, and the information of the virtual control body includes initial location information of the virtual control body;
  • the media content sending module 1202 is further configured to: when the virtual control body grabs the virtual carrier and the state of the virtual carrier is the first state, acquire the input media content.
  • the release module 1203 is configured so that the virtual control body releases the virtual carrier.
  • the virtual space further includes a first virtual character corresponding to the first client; the display module 1201 is further configured to:
  • the application server sends the information of the at least one virtual carrier to the first client.
  • the media content sending module 1202 is further configured to:
  • the virtual space further includes a virtual ray associated with the virtual control body, the virtual ray being emitted from the virtual control body;
  • the media content sending module 1202 is further configured to:
  • the media content sending module 1202 is further configured to:
  • Stop receiving media data in response to a second function message sent from the controller
  • the media content is generated based on the received media data.
  • the release module 1203 is further configured to:
  • the release module 1203 is further configured to:
  • the apparatus further includes a media content acquisition module 1204 to:
  • when the virtual control body captures a virtual carrier among the at least one virtual carrier, if the state of the virtual carrier is the second state, send a media content request message to the data content server, the media content request message carrying the identifier of the virtual carrier, so that the data content server searches for the media content associated with the identifier of the virtual carrier;
  • the media content obtaining module 1204 is further configured to:
  • when the virtual control body captures a virtual carrier among the at least one virtual carrier, if the state of the virtual carrier is the second state, search for the media content associated with the identifier according to the identifier of the virtual carrier.
  • the apparatus further includes a play module 1205 for:
  • play the media content in response to a third function message sent by the controller; and stop playing the media content in response to a fourth function message sent by the controller.
  • the media content includes a voice
  • the playing module 1205 is further configured to:
  • obtain real-time location data of the controller associated with the first client; determine real-time location data of the virtual carrier according to the real-time location data of the controller; and determine, according to the real-time location data of the head of the first virtual character and the real-time location data of the virtual carrier, the real-time distance and real-time direction of the virtual carrier relative to the head of the first virtual character;
  • the playing module 1205 is further configured to:
  • the apparatus further includes an interaction module 1206 for:
  • after the application server updates the interaction information of the virtual carrier according to the interaction information, when the spatial relationship between the second virtual character corresponding to the second client and the virtual carrier in the virtual space satisfies a preset condition, the interaction information of the virtual carrier is transmitted to the second client, so that the second client, when determining that the virtual carrier is selected, presents the interaction information of the virtual carrier in the virtual space.
  • the interaction module 1206 is further configured to:
  • when the number of vibrations of the controller monitored during the playing of a first preset number of image frames satisfies a first preset condition, and the number of vibrations monitored during the playing of a subsequent second preset number of image frames satisfies a second preset condition, determine that the predetermined gesture operation of the controller is monitored.
  • the interaction module 1206 is further configured to:
  • the present application further provides a media content sending apparatus 1300, which is applied to an application server, and the apparatus includes:
  • The first information sending module 1301 is configured to send information of the at least one virtual carrier and information of the virtual control body associated with the first client to the first client, where the information of the at least one virtual carrier includes: the identifier and status of each virtual carrier and its real-time location data in the virtual space, and the information of the virtual control body includes initial location information of the virtual control body; so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body, and, when the virtual control body captures a virtual carrier among the at least one virtual carrier and the state of the virtual carrier is the first state, acquires the input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;
  • the message receiving module 1302 is configured to receive a notification message sent by the first client, and set a state of the virtual carrier to a second state according to the notification message.
  • The second information sending module 1303 is configured to send the information of the virtual carrier to the second client, so that the second client obtains the media content associated with the identifier of the virtual carrier from the data content server.
  • the apparatus further includes:
  • The interactive information receiving module 1304 is configured to receive the interaction information and the identifier of the virtual carrier sent by the first client, where the first client generates the interaction information when it detects that the virtual carrier is captured and a predetermined gesture operation of the controller associated with the first client is monitored;
  • the interactive information update module 1305 is configured to update the interaction information of the virtual carrier according to the interaction information, and associate the interaction information of the virtual carrier with the identifier of the virtual carrier;
  • the interactive information sending module 1306 is configured to: when the spatial relationship between the virtual object associated with the second client and the virtual carrier in the virtual space meets a preset condition, send the interaction information of the virtual carrier to the first And the second client, so that the second client displays the interactive information of the virtual carrier in the virtual space when determining to select the virtual carrier.
  • Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method described above as applied to the first client.
  • Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that cause at least one processor to perform the method described above as applied to the application server.
  • FIG. 14 is a diagram showing the structure of a computing device in which the media content transmitting device 1200 and the media content transmitting device 1300 are located.
  • the computing device includes one or more processors (CPUs) 1402, communication modules 1404, memory 1406, user interface 1410, and a communication bus 1408 for interconnecting these components.
  • the processor 1402 can receive and transmit data through the communication module 1404 to effect network communication and/or local communication.
  • User interface 1410 includes one or more output devices 1412 that include one or more speakers and/or one or more visual displays.
  • User interface 1410 also includes one or more input devices 1414 including, for example, a keyboard, a mouse, a voice command input unit or loudspeaker, a touch screen display, a touch sensitive tablet, a gesture capture camera or other input button or control, and the like.
  • the memory 1406 can be a high speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state storage device; or a nonvolatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, Or other non-volatile solid-state storage devices.
  • the memory 1406 stores a set of instructions executable by the processor 1402, including:
  • Operating system 1416 including programs for processing various basic system services and for performing hardware related tasks
  • The application 1418 includes various applications for media content sending; such an application can implement the processing flow in each of the above examples, and may include some or all of the units or modules in the media content sending device 1200 or the media content sending device 1300. At least one of the units in the media content sending device 1200 or the media content sending device 1300 may store machine-executable instructions.
  • the processor 1402 can implement the functions of at least one of the above-described units or modules by executing machine-executable instructions in at least one of the units in the memory 1406.
  • The hardware modules in the embodiments may be implemented in hardware or by a hardware platform plus software.
  • the above software includes machine readable instructions stored in a non-volatile storage medium.
  • embodiments can also be embodied as software products.
  • the hardware may be implemented by specialized hardware or hardware that executes machine readable instructions.
  • the hardware can be a specially designed permanent circuit or logic device (such as a dedicated processor such as an FPGA or ASIC) for performing a particular operation.
  • the hardware may also include programmable logic devices or circuits (such as including general purpose processors or other programmable processors) that are temporarily configured by software for performing particular operations.
  • each instance of the present application can be implemented by a data processing program executed by a data processing device such as a computer.
  • the data processing program constitutes the present application.
  • A data processing program usually stored in a storage medium is executed by directly reading the program out of the storage medium or by installing or copying the program to a storage device (such as a hard disk and/or memory) of the data processing device. Therefore, such a storage medium also constitutes the present application. The present application also provides a non-volatile storage medium in which a data processing program is stored; the data processing program can be used to execute any one of the above method examples of the present application.
  • the machine readable instructions corresponding to the modules of FIG. 14 may cause an operating system or the like operating on a computer to perform some or all of the operations described herein.
  • The non-transitory computer-readable storage medium may be a memory provided in an expansion board inserted into the computer, or the instructions may be written to a memory provided in an expansion unit connected to the computer.
  • The CPU or the like mounted on the expansion board or the expansion unit can then perform part or all of the actual operations according to the instructions.


Abstract

The present application discloses a media content sending method, the method including: displaying a virtual space, the virtual space including at least one virtual carrier and a virtual control body corresponding to the first client; when the virtual control body grabs a virtual carrier among the at least one virtual carrier, and the state of the virtual carrier is a first state, acquiring input media content, and sending the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notifying the application server to set the state of the virtual carrier to a second state; and the virtual control body controlling the virtual carrier to be in a released state. The present application also discloses a corresponding media content sending method applied to an application server, as well as an apparatus and a storage medium.

Description

Media content sending method, apparatus and storage medium

Technical Field

The present application relates to the field of virtual reality technology, and in particular to a media content sending method, apparatus, and storage medium in a virtual reality environment.
Background
Virtual Reality (VR) technology uses a computer or other intelligent computing device, combined with photoelectric sensing technology, to generate a realistic virtual environment within a specific range that integrates sight, hearing, and touch. The virtual space generated by virtual reality technology provides the user with visual, auditory, tactile, and other sensory experiences, producing an immersive experience of the virtual space. Because it can transcend the limits of physical conditions and create diverse scenes to meet diverse application needs, virtual reality technology has been widely applied in many fields. For example, it can be applied to gaming, such as VR shooting games and tennis games; the immersive scenes provided by virtual reality technology make games more interesting. Virtual reality technology is receiving more and more attention.
Summary
Examples of the present application provide a media content sending method, applied to a first client, the method including:

displaying a virtual space, the virtual space including at least one virtual carrier and a virtual control body corresponding to the first client;

when the virtual control body grabs a virtual carrier among the at least one virtual carrier, and the state of the virtual carrier is a first state, acquiring input media content, sending the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notifying the application server to set the state of the virtual carrier to a second state; and

the virtual control body controlling the virtual carrier to be in a released state.
Examples of the present application also provide a media content sending method, applied to an application server, including:

sending, to a first client, information of at least one virtual carrier and information of a virtual control body associated with the first client, where the information of the at least one virtual carrier includes: the identifier and state of each virtual carrier and its real-time location data in the virtual space, and the information of the virtual control body includes initial location information of the virtual control body; so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body, and, when the virtual control body grabs a virtual carrier among the at least one virtual carrier and the state of the virtual carrier is a first state, acquires input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;

receiving a notification message sent by the first client, and setting the state of the virtual carrier to a second state according to the notification message; and

sending the information of the virtual carrier to a second client, so that the second client acquires, from the data content server, the media content associated with the identifier of the virtual carrier.
Examples of the present application also provide a media content sending apparatus, the apparatus including:

a display module, configured to display a virtual space, the virtual space including at least one virtual carrier and a virtual control body corresponding to the first client;

a media content sending module, configured to: when the virtual control body grabs a virtual carrier among the at least one virtual carrier, and the state of the virtual carrier is a first state, acquire input media content, send the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to a second state; and

a release module, configured for the virtual control body to control the virtual carrier to be in a released state.
Examples of the present application also provide a media content sending apparatus, the apparatus including:

a first information sending module, configured to send, to a first client, information of at least one virtual carrier and information of a virtual control body associated with the first client, where the information of the at least one virtual carrier includes: the identifier and state of each virtual carrier and its real-time location data in the virtual space, and the information of the virtual control body includes initial location information of the virtual control body; so that the first client displays the virtual space according to the information of the at least one virtual carrier and the information of the virtual control body, and, when the virtual control body grabs a virtual carrier among the at least one virtual carrier and the state of the virtual carrier is a first state, acquires input media content and sends the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier;

a message receiving module, configured to receive a notification message sent by the first client, and set the state of the virtual carrier to a second state according to the notification message; and

a second information sending module, configured to send the information of the virtual carrier to a second client, so that the second client acquires, from the data content server, the media content associated with the identifier of the virtual carrier.
Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method described above as applied to the first client.

Examples of the present application also provide a computer-readable storage medium storing computer-readable instructions that can cause at least one processor to perform the method described above as applied to the application server.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a system architecture diagram involved in the examples of the present application;

FIG. 2 is a flowchart of a media content sending method applied to a first client according to an example of the present application;

FIG. 3A is a schematic diagram of selecting a virtual carrier in a virtual space;

FIG. 3B is a schematic diagram of grabbing a virtual carrier in a virtual space;

FIG. 3C is a schematic diagram of associating media content with a virtual carrier in a virtual space;

FIG. 3D is a schematic diagram of releasing a virtual carrier in a virtual space;

FIG. 4A is a schematic structural diagram of a head-mounted device 400 of a first VR device;

FIG. 4B is a schematic structural diagram of a controller 410 of the first VR device;

FIG. 5 is a flowchart of a method for acquiring the media content associated with a virtual carrier after the virtual carrier is selected, according to an example of the present application;

FIG. 6 is a detailed flowchart of acquiring the media content associated with a virtual carrier according to an example of the present application;

FIG. 7 is a flowchart of interacting with a virtual carrier according to an example of the present application;

FIG. 8 is a schematic diagram of displaying the interaction information carried by a virtual carrier according to an example of the present application;

FIG. 9 is a schematic flowchart of monitoring a "shake" gesture operation according to an example of the present application;

FIG. 10 is a schematic flowchart of a media content sending method applied to an application server according to an example of the present application;

FIG. 11 is a message interaction diagram of relaying voice through a virtual voice ball according to an example of the present application;

FIG. 12 is a schematic structural diagram of a media content sending apparatus according to an example of the present application;

FIG. 13 is a schematic structural diagram of a media content sending apparatus according to another example of the present application; and

FIG. 14 is a schematic diagram of the composition of a computing device in the examples of the present application.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The media content sending method proposed in the present application can be applied to a VR system. FIG. 1 shows a VR system 100, which includes: a first client 101, a second client 102, a first VR device 103, a second VR device 104, an application server 105, and a data content server 106. The VR system 100 may include multiple second clients 102 and corresponding second VR devices. The first client 101 and the second client 102 are connected to the application server 105 and the data content server 106 through the Internet.
The first client 101 and the second client 102 are VR clients (i.e., VR apps). The first VR device 103 and the second VR device 104 may include a user-operable controller and wearable equipment (such as various VR head-mounted devices, VR motion-sensing devices, etc.). The first client 101 can exchange information with the first VR device 103 to provide the user with immersive VR images and perform the corresponding operating functions, and the second client 102 can exchange information with the second VR device 104 to provide the user with immersive VR images and perform the corresponding operating functions. In FIG. 1, the first VR device and the first client 101, and the second VR device and the second client 102, are independent components; in some examples, the first VR device is integrated with the first client 101, and the second VR device is integrated with the second client 102. The VR client can display the corresponding VR image data to the user according to the user's position information and motion information in the virtual space provided by the wearable equipment, so as to give the user an immersive experience. The VR client can also perform corresponding operations in response to instructions issued by the user operating the controller, such as grabbing a virtual object in the virtual space. The VR client can generate VR panoramic image data, such as panoramic pictures, panoramic videos, and VR games, according to the position data and motion data of virtual objects in the virtual space. The application server 105 is a VR application server (VR server for short). The VR server stores real-time position data, motion data, and state data of the virtual objects in the virtual space, and can perform corresponding processing in response to requests from a VR client. For example, in response to a login request from a VR client, it sends the real-time position data, motion data, and state data of the virtual objects in the virtual space to the VR client. The data content server 106 is used to receive media content uploaded by a VR client, the media content being associated with a virtual carrier in the virtual space. The data content server 106 can also send media content to a VR client in response to the VR client's request.
Here, the terminal device on which the above VR client runs refers to a terminal device with data computing and processing functions, including but not limited to smartphones (with a communication module installed), palmtop computers, tablet computers, and the like. The terminal device may also be integrated with the VR device. Operating systems are installed on these communication terminals, including but not limited to: the Android operating system, the Symbian operating system, the Windows Mobile operating system, the Apple iPhone OS operating system, and so on. The above VR head-mounted device (HMD, Head Mount Display) includes a screen that can display real-time images. The above data content server may be a CDN (Content Delivery Network) server.
The present application provides a media content sending method 200, applied to a VR client. As shown in FIG. 2, the method includes the following steps:

S201: Display a virtual space, the virtual space including at least one virtual carrier and a virtual control body corresponding to the first client.

The virtual space includes one or more virtual carriers and one or more virtual characters. The one or more virtual carriers are used to transfer information in the virtual space. For example, the client corresponding to one virtual character associates information with a virtual carrier; after that virtual character releases the virtual carrier and another virtual character grabs it, the client corresponding to the other virtual character can acquire the information associated with the virtual carrier. Each virtual character corresponds to one client, and the corresponding virtual character is controlled to complete operations by operating the VR device associated with the client. The virtual control body is associated with the first client; for example, the virtual control body is the virtual hands of the virtual character associated with the first client. The clients (including the first client) are developed based on a 3D rendering engine that supports VR (for example, UE4). The first client includes 3D models of each virtual carrier and each virtual character; it can acquire the information of each virtual carrier and each virtual character from the application server, and place the 3D models of each virtual carrier and each virtual character in the virtual space according to this information. Then, through coordinate transformation, all the mesh data to be rendered to the screen is generated; the mesh data is rendered to generate a virtual reality image. The generated virtual reality image is sent to the display of the head-mounted device of the first VR device associated with the first client, thereby displaying the corresponding virtual space. When performing the above rendering, the first client may render only the hands of the first virtual character associated with the first client and ignore the body of the first virtual character; the hands of the first virtual character are the above-mentioned virtual control body. For example, in a VR game scene 300, the virtual space shown in FIG. 3A is displayed, and the virtual space includes four virtual players 303. For the current player, only the player's virtual hand 301, i.e., the above-mentioned virtual control body, is displayed. The virtual control body corresponds to the controller (also called the handle) of the first VR device, and the controller can be operated to control the virtual control body to perform the required operations. The virtual space also includes a virtual voice ball 302, i.e., the above-mentioned virtual carrier. After a virtual player in the virtual space grabs the virtual voice ball 302, the player can associate voice with the virtual voice ball and then release it, for example by throwing it; the virtual player who receives the virtual voice ball 302 acquires the voice associated with it. Voice transfer is thus realized through the virtual voice ball 302. The number of virtual voice balls in the virtual space may be one or more, which is not specifically limited here.
S202: When the virtual control body grabs a virtual carrier among the at least one virtual carrier, and the state of the virtual carrier is a first state, acquire the input media content, send the media content and the identifier of the virtual carrier to a data content server, so that the data content server associates the media content with the identifier of the virtual carrier, and notify the application server to set the state of the virtual carrier to a second state.

A virtual carrier has different states. The first state of a virtual carrier is its initial state, i.e., the state in which it is not associated with media content; the second state is the state after it has been associated with media content. When the state of the virtual carrier differs, its color in the virtual space differs. For example, in the virtual space of the VR game shown in FIG. 3A, a virtual voice ball in the first state (the initial state) is, for example, golden. In FIG. 3B, after the virtual voice ball is grabbed and associated with media content, the virtual voice ball in the second state (the state after associating media content) is, for example, red. When the state of the grabbed virtual carrier is the first state, i.e., the initial state, media content is associated with the virtual carrier. The media content may be locally stored media content selected on the first client, or media content collected through a peripheral associated with the first client, for example, voice recorded through a recording component 404. The user can record a passage of speech or sing a song; the recording component may be a microphone. The first client includes a real-time voice transmission component that cooperates with the recording component; for example, a real-time voice transmission third-party library captures, in every frame, the voice in the recording component (such as the microphone) in real time, converts it into the PCM data format, and saves it in the cache of the terminal on which the first client runs. While associating the media content with the virtual carrier, a recording special effect can be added above the virtual carrier in the virtual space; as shown in FIG. 3C, a special effect 306 is added on the voice ball. After the voice data recording ends, the PCM data in the cache is uploaded to the data content server 106 and saved as a PCM file, and the saved PCM file is stored in association with the identifier of the voice ball. The data content server 106 may be a CDN server; using a CDN server ensures that clients can promptly obtain the required voice files at any location. After the voice ball is recorded, its state changes; the first client notifies the application server 105, and the application server 105 sets the state of the voice ball to the second state, which is the recorded state. When the first client communicates with the application server 105 and the data content server 106, this can be done through a network synchronization unit, such as the module for network synchronization in the Unreal Engine.
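The frame-by-frame PCM capture and end-of-recording upload described above can be sketched as follows (illustrative only; `upload_fn` stands in for the real CDN upload interface, and the actual API of the real-time voice transmission library is not shown):

```python
class VoiceRecorder:
    """Buffers PCM chunks captured frame by frame while the function key
    is held, then uploads the whole buffer keyed by the voice ball's
    identifier. `upload_fn` is a stand-in for the real CDN upload call."""

    def __init__(self, upload_fn):
        self.buffer = bytearray()
        self.upload_fn = upload_fn

    def on_frame(self, pcm_chunk):
        """Per-frame microphone capture, already converted to PCM."""
        self.buffer.extend(pcm_chunk)

    def finish(self, ball_id):
        """Function key released: upload the PCM data in association
        with the voice ball's identifier and clear the local cache."""
        data = bytes(self.buffer)
        self.buffer.clear()
        self.upload_fn(ball_id, data)
        return data

# Usage with an in-memory dict standing in for the CDN server.
cdn = {}
rec = VoiceRecorder(upload_fn=lambda key, value: cdn.__setitem__(key, value))
rec.on_frame(b"\x01\x02")
rec.on_frame(b"\x03")
rec.finish("ball-7")
```

The buffer plays the role of the terminal-side cache; `finish` corresponds to the step where the cached PCM data is uploaded and stored in association with the ball's identifier.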
S203:所述虚拟控制体控制所述虚拟载体为释放状态。
当获取到与虚拟载体对应的媒体内容，并上传给数据内容服务器106之后，用户可以通过控制控制器释放虚拟载体，例如，用户可以通过手拿控制器做出抛的动作，以实现控制虚拟空间中虚拟控制体抛出虚拟载体的目的。此外，还可以实现踢出虚拟载体、扔出虚拟载体、打出虚拟载体等多种释放虚拟载体的方式。当第一客户端对应的虚拟角色在虚拟空间中将虚拟载体抛出后，虚拟载体的位置数据会发生变化，第一客户端向应用服务器发送虚拟载体的实时位置数据，应用服务器将虚拟载体的实时位置数据更新给第二客户端，使得第二客户端根据虚拟载体的实时位置数据更新虚拟空间中虚拟载体的位置。同时当虚拟载体的状态发生变化时，虚拟载体的颜色也发生变化，应用服务器将虚拟载体的状态更新给第二客户端，使得第二客户端根据虚拟载体的状态更新虚拟空间中虚拟载体的颜色。例如，在上述所述的VR游戏场景中，当虚拟语音球被抛出后，其他玩家可以抓取到该虚拟语音球，并获取虚拟语音球关联的语音。具体地，应用服务器105会实时将语音球的状态及该语音球的标识更新给第二客户端，当第二客户端对应的玩家选取到该语音球后，当虚拟语音球的状态为第二状态时，根据该语音球的标识，去CDN服务器获取与该语音球关联的媒体内容。从而通过语音球实现媒体内容的传递和交流。
采用本申请提供的媒体内容的发送方法,抓取虚拟载体时,输入媒体内容,将该媒体内容上传给数据内容服务器,使虚拟载体与所述媒体内容相关联,在虚拟空间中释放虚拟载体。使得虚拟空间中的虚拟载体作为媒体内容的载体,虚拟载体关联的媒体内容随着虚拟载体在虚拟空间中的传递在多个客户端之间进行传递,使得媒体内容的交互具有3D立体沉浸感,同时媒体内容关联虚拟空间中的虚拟载体,提升VR社交的真实感。
在一些实例中,在上述步骤201中,在执行所述展示虚拟空间时,包括步骤:
S301:接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息,其中,所述至少一个虚拟载体的信息包括:各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据,所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息。
S302:根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间。
第一客户端从应用服务器获取虚拟空间中各虚拟载体及各虚拟角色的信息，进而根据获取到的各虚拟载体的信息及各虚拟角色的信息展示虚拟空间。在一些实例中，当用户登录第一客户端101时，第一客户端101向应用服务器105发送登录请求，应用服务器105将虚拟空间中的各虚拟载体的信息及各虚拟角色的信息发送给第一客户端101。其中，一虚拟角色对应一客户端，第一客户端对应第一虚拟角色，所述虚拟控制体与第一虚拟角色关联，例如，所述虚拟控制体为所述第一虚拟角色的双手。所述虚拟载体的信息包括：所述虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据。该虚拟载体可以为虚拟空间中的虚拟语音球，虚拟语音球本身具有球的形态特性及弹性物理属性，可以满足一般球类的娱乐目的。所述虚拟角色的信息包括虚拟角色在虚拟空间中的初始位置。
第一客户端在展示虚拟空间时,根据各虚拟载体的位置信息在虚拟空间的对应位置处展示各虚拟载体。同时根据各虚拟角色的初始位置信息在虚拟空间中展示各虚拟角色。后续,第一客户端还会接收应用服务器发送的虚拟空间中各虚拟载体及各虚拟角色的实时位置数据,进而更新虚拟空间中的各虚拟载体及各虚拟角色。第一客户端在展示虚拟空间时,对于第一客户端关联的第一虚拟角色,可以在虚拟空间中只展示第一虚拟角色的双手(虚拟控制体)。在虚拟空间中展示虚拟控制体时,可以根据虚拟角色的位置数据确定虚拟控制体的位置数据,进而在虚拟空间中展示虚拟控制体。其中,第一虚拟角色的初始位置数据可以为第一虚拟角色的中心的位置数据,例如,在VR游戏中,虚拟角色的中心点为虚拟玩家的中心点,进而根据虚拟角色的中心的位置数据确定虚拟控制体的位置。
在一些实例中,在上述步骤202中,在执行所述当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,当所述虚拟载体的状态为第一状态时,获取输入的媒体内容时,包括步骤:
S401:获取所述第一客户端关联的控制器的实时位置数据;
S402:接收应用服务器发送的所述虚拟载体的实时位置数据;
S403:当根据所述虚拟载体的实时位置数据和所述控制器的实时位置数据确定选取所述虚拟载体时,响应于所述控制器发送的第一交互消息,更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据,所述虚拟控制体抓取到所述虚拟载体;当所述虚拟载体的状态为第一状态时,获取输入的媒体内容。
与第一客户端关联的第一VR设备包括控制器，第一客户端获取控制器的位置数据。所述第一VR设备还包括定位系统，通过定位系统获取控制器的位置数据，该定位系统可以利用红外光学定位，具体地，定位系统包括多个红外发射摄像头，对室内定位空间进行覆盖，在控制器上放置红外反光点，通过捕捉这些反光点反射回摄像机的图像，确定控制器在空间中的位置信息。定位系统将获取到的控制器的实时位置数据发送给第一客户端。此外，定位系统还可以采用图像跟踪定位的方式，由一台（组）摄像机拍摄控制器，然后由图像处理技术确定控制器的位置。此外，还可以通过超声波三维空间定位获取控制器的实时位置数据，具体地，可以在控制器上设置超声波发生器，发出高频超声脉冲，由安装在真实空间的天花板上的三个接收器接收信号，通过延迟即可计算距离，确定控制器的实时位置，由接收器将控制器的实时位置数据发送给第一客户端。同时采用上述方式还可以获得第一客户端对应的角色在真实空间中的位置数据，根据该第一客户端对应的角色的位置数据相对于初始位置数据的变化更新虚拟空间中的第一虚拟角色关联的虚拟控制体。其中，第一虚拟角色的初始位置数据，可以在用户登录第一客户端时由应用服务器105发送。
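以文中的超声波定位为例，由传播延迟求距离、再由三个同高天花板接收器的距离解出控制器三维位置的过程，可以示意如下（Python 示例；接收器同高、声速取 343 m/s、控制器位于天花板下方均为行文假设）：

```python
import math

SPEED_OF_SOUND = 343.0  # 米/秒，常温空气中的近似声速

def delay_to_distance(delay_s):
    """由超声脉冲的传播延迟计算控制器到接收器的距离。"""
    return SPEED_OF_SOUND * delay_s

def locate(receivers, delays):
    """三个天花板接收器（同一高度 z0）+ 三个延迟 → 控制器三维位置。
    receivers: 三个 (x, y, z) 坐标；delays: 对应的延迟（秒）。"""
    (x1, y1, z0), (x2, y2, _), (x3, y3, _) = receivers
    d1, d2, d3 = (delay_to_distance(t) for t in delays)
    # 三个球面方程两两相减，消去平方项，得到关于 (x, y) 的线性方程组
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    # 代回第一个球面方程求 z（控制器在天花板下方，取 z0 - dz）
    dz = math.sqrt(max(d1**2 - (x - x1)**2 - (y - y1)**2, 0.0))
    return (x, y, z0 - dz)
```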
虚拟载体的实时位置数据从应用服务器105处获得。根据获取的第一客户端对应的角色在真实空间中的位置数据,可以确定控制器相对于所述角色的相对位置数据。进而根据该相对位置数据,确定在虚拟空间中虚拟控制体相对于第一虚拟角色的相对位置数据。进而根据该相对位置数据确定虚拟空间中虚拟控制体的位置数据。其中,第一客户端对应的控制器用以控制与第一客户端对应的虚拟控制体,当控制器的位置发生移动时,虚拟控制体的位置也随着发生相应的移动,因而用户通过控制控制器可以达到控制虚拟控制体的目的。
当虚拟控制体与虚拟载体的空间关系满足预定条件时，选取虚拟载体。响应于控制器发送的第一交互消息，更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据，以使所述虚拟控制体抓取到所述虚拟载体。第一VR设备可以包括头戴设备400，如图4A所示，主要包括通信部件401、播放组件402、显示屏403、录音部件404。其中，播放组件402、显示屏403、录音部件404与通信部件401进行连接，其中的显示屏403为虚拟现实显示屏。第一VR设备还包括控制器410，如图4B所示，控制器410上包括交互键411及功能键412。当选取所述虚拟载体后，用户点击控制器410上的交互键411，控制器响应于用户对交互键411的点击，向第一客户端发送第一交互消息，第一客户端根据该第一交互消息，更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据，以使所述虚拟控制体抓取到所述虚拟载体。具体地，第一客户端中保存有虚拟控制体的与握取相关的骨骼动画数据，当第一客户端接收到第一交互消息后，调用所述骨骼动画数据，使得在虚拟空间中，所述虚拟控制体做出与所述骨骼动画数据对应的握取动作。在第一客户端接收到第一交互消息后，将所述虚拟载体附着到虚拟控制体上一个预先设置好的位置点上，虚拟载体附着到所述位置点后，会跟着所述位置点进行移动和旋转。为了让虚拟载体（例如，虚拟语音球）有一个飞到虚拟控制体（例如，虚拟手）中的过程，会在虚拟载体到虚拟控制体之间的路径上播放一个飞跃的特效。例如在图3A所示的VR游戏场景中，当用户点击交互键411后，根据虚拟手的握球的骨骼动画数据，使得在虚拟空间中虚拟手301做出握球的动作。同时添加虚拟语音球飞到虚拟手的手心里的特效，使得在虚拟现实场景中呈现虚拟语音球302飞到虚拟手301中，如图3B所示。
在一些实例中，在上述步骤203中，在执行所述虚拟控制体控制所述虚拟载体为释放状态时，包括步骤：响应于所述第一客户端关联的控制器发送的第二交互消息，更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据，根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体。
用户在手拿控制器做出抛的动作时,按一下控制器上的交互键411,在此刻,对应虚拟空间中,虚拟载体被释放,例如抛出。第一客户端获取该过程中控制器的实时位置数据,当用户按下交互键411后,控制器向第一客户端发送第二交互消息,第一客户端响应于该第二交互消息,根据获取到的控制器的实时位置数据更新虚拟空间中虚拟控制体的位置。同时根据获取到的控制器的实时位置数据确定虚拟载体在抛出时刻的位置数据及运动数据(例如,抛出时刻的速度及加速度)。进而根据虚拟载体在抛出时刻的位置数据及运动数据更新虚拟空间中虚拟载体的位置。在虚拟空间中,虚拟控制体抛出虚拟载体,例如,在VR游戏场景中,如图3D所示,虚拟手301抛出虚拟语音球302。
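由上一帧位置、当前帧位置及帧间隔估算抛出瞬时速度与方向的做法可以示意如下（Python 示例，三维坐标用元组表示，命名为行文假设）：

```python
def throw_velocity(prev_pos, cur_pos, dt):
    """由抛出时上一帧位置、当前帧位置及帧间隔 dt 估算瞬时速度向量。"""
    return tuple((c - p) / dt for p, c in zip(prev_pos, cur_pos))

def speed_and_direction(v):
    """将速度向量分解为速率与单位方向向量（速率为零时方向取零向量）。"""
    s = sum(x * x for x in v) ** 0.5
    direction = tuple(x / s for x in v) if s else (0.0,) * len(v)
    return s, direction
```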
在一些实例中，其中，所述虚拟空间还包含所述第一客户端对应的第一虚拟角色；其中，在上述步骤S301中，在执行所述接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息时，包括步骤：当所述第一虚拟角色与所述至少一个虚拟载体中各虚拟载体的空间关系满足预定条件时，所述应用服务器向所述第一客户端发送所述至少一个虚拟载体的信息。
所述虚拟角色为操控控制器的用户在虚拟空间中的对象，一个客户端关联的用户对应一个虚拟角色，虚拟空间中的所述虚拟控制体可以对应所述虚拟角色的手，例如在VR游戏中，一个玩家对应虚拟空间中的一个虚拟角色，虚拟控制体对应虚拟玩家的手，一个虚拟角色对应一个客户端。
在该实例中，只有当VR客户端对应的虚拟角色与虚拟载体的空间关系满足预定条件时，应用服务器105才将虚拟载体的信息发送给所述VR客户端。所述预定条件可以为预定距离范围，应用服务器105判断虚拟载体的位置与第一虚拟角色的位置之间的距离，与第一虚拟角色的距离超过所述预定距离范围的虚拟载体的信息不发送给所述第一客户端，与第一虚拟角色的距离在所述预定距离范围内的虚拟载体的信息发送给所述第一客户端。其中，应用服务器105上保存有各虚拟角色的位置数据，例如虚拟角色的中心点的位置数据。其中，当所述距离超过所述预定距离范围时，认为所述第一虚拟角色看不到所述虚拟载体；否则，认为所述第一虚拟角色能看到所述虚拟载体。例如在上述所述的VR游戏场景中，只有当虚拟语音球与虚拟角色的空间位置关系满足预定条件时，应用服务器才将虚拟语音球的信息发送给所述虚拟角色对应的VR客户端。
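应用服务器侧按预定距离范围筛选需要下发的载体信息，可示意为（Python 示例，载体信息的数据结构为行文假设）：

```python
def visible_carriers(role_pos, carriers, max_dist):
    """只返回与虚拟角色中心距离不超过 max_dist 的虚拟载体信息。"""
    def dist(a, b):
        # 三维欧氏距离
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [c for c in carriers if dist(role_pos, c["pos"]) <= max_dist]
```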
在一些实例中,其中,当根据所述虚拟载体的实时位置数据及所述控制器的实时位置数据确定所述虚拟控制体与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
根据控制器的实时位置数据确定虚拟控制体的实时位置数据，例如，在图3A所示的VR游戏场景中，根据控制器的实时位置数据确定虚拟手301在虚拟空间中的实时位置数据，所述虚拟控制体为虚拟角色的虚拟手301，所述虚拟载体为虚拟语音球302。当根据虚拟手的位置数据及虚拟语音球的位置数据确定虚拟手301与虚拟语音球302发生碰撞时，确定选取所述虚拟载体。在判断虚拟手301与虚拟语音球302是否发生碰撞时，可以采用射线检测、体积扫过、重叠测试等方法。
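其中的重叠测试可以简化为球体间的相交判断（Python 示例，将虚拟手与语音球都近似为球体，属行文假设）：

```python
def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """两球心距离不超过半径之和即判定发生碰撞；按平方比较避免开方。"""
    d2 = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    return d2 <= (radius_a + radius_b) ** 2
```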
在一些实例中，其中，所述虚拟空间还包括与所述虚拟控制体关联的虚拟射线，所述虚拟射线是从所述虚拟控制体发出的；所述方法进一步包括以下步骤：
S11:根据所述控制器的所述实时位置数据确定所述虚拟射线在所述虚拟空间中的实时位置数据。
虚拟射线从虚拟控制体发出,根据获取的控制器的实时位置数据,确定虚拟控制体的实时位置数据,由于虚拟射线由虚拟控制体发出,因而根据虚拟控制体的实时位置数据,可以确定虚拟射线的实时位置数据。
S12:其中,当根据所述虚拟射线的所述实时位置数据及所述虚拟控制体的所述实时位置数据确定所述虚拟射线与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
在该实例中,当虚拟射线与虚拟载体发生碰撞时,确定选取所述虚拟载体。例如,在图3A所示的VR游戏场景中,虚拟手301上发出虚拟射线304,当虚拟射线304与虚拟语音球302发生碰撞时,确定选取了所述虚拟语音球。
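虚拟射线与球形载体的碰撞判断，可用标准的射线-球求交公式示意（Python 示例，要求方向为单位向量，函数命名为行文假设）：

```python
def ray_hits_sphere(origin, direction, center, radius):
    """射线检测：判断从虚拟控制体发出的射线是否命中球形载体。
    射线方程 p = origin + t * direction（direction 为单位向量）。"""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * v for d, v in zip(direction, oc))   # d·(o-c)
    c = sum(v * v for v in oc) - radius * radius     # |o-c|² - r²
    disc = b * b - c
    if disc < 0:
        return False           # 射线所在直线与球无交点
    t = -b - disc ** 0.5       # 最近交点的参数
    return t >= 0              # 交点需在射线正方向上
```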
在一些实例中,在上述步骤202中,在执行所述获取输入的媒体内容时,包括以下步骤:
S21:响应于来自所述第一客户端关联的控制器发送的第一功能消息,通过自身关联的数据采集装置开始接收外部输入的媒体数据;响应于来自所述控制器发送的第二功能消息,停止接收媒体数据。
当用户按下控制器410上的功能键412时，控制器向第一客户端发送第一功能消息，第一客户端接收到该第一功能消息时，通过自身关联的数据采集装置开始接收外部输入的媒体数据，例如通过与第一客户端关联的头戴设备400上的录音部件404采集语音数据。具体地，第一客户端向头戴设备400发送控制消息，使得头戴设备400开启录音部件，开始录制用户输入的语音，用户可以录制一段话或者自己唱一首歌。所述录音部件可以为麦克风。第一客户端上包括与所述录音部件404协作的语音实时传输部件，例如，实时语音传输第三方库（Apollo Voice）会在每帧实时抓取录音部件404（例如麦克风）中的语音，并转化为PCM数据格式，保存在第一客户端所在终端的缓存中。
在采集媒体数据的过程中，用户一直按压功能键412，当用户松开功能键412时，控制器向第一客户端发送第二功能消息，第一客户端接收到该第二功能消息时，停止接收语音数据。具体地，第一客户端向头戴设备400发送控制消息，使得头戴设备400关闭录音部件，停止录音。
S22:根据接收到的媒体数据生成媒体内容。
在上述步骤S21中，第一客户端中的语音实时传输部件实时抓取录音部件404中的每一帧PCM数据；在步骤S22中，根据抓取的每一帧PCM数据形成语音数据。
在一些实例中,在执行所述响应于所述第一客户端关联的控制器发送的第二交互消息,更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据时,包括以下步骤:
S31:响应于所述第二交互消息,根据所述控制器的实时位置数据,确定所述虚拟载体的运动轨迹和初始运动数据,并更新所述虚拟控制体的实时位置数据和运动数据。
用户可以通过手拿控制器做出抛的动作，以实现控制虚拟空间中虚拟控制体释放虚拟载体的目的，例如，用户在手拿控制器抛的过程中，按一下控制器上的交互键411，在此刻，对应虚拟空间中，虚拟载体被抛出。第一客户端获取用户按交互键411之前抛控制器的这一过程中控制器的实时位置数据，根据控制器的实时位置数据，确定虚拟空间中虚拟控制体的实时位置数据。由于此时虚拟控制体抓取着虚拟载体，因此可根据虚拟控制体的实时位置数据确定虚拟载体的实时位置数据。根据虚拟载体被抛出时上一帧的实时位置、当前帧的实时位置以及上一帧与当前帧之间的时间间隔，确定虚拟载体被抛出时的瞬时速度及方向。同时根据虚拟载体的重力加速度确定虚拟载体的运动轨迹，根据该瞬时速度、方向、所述重力加速度及所述运动轨迹更新所述虚拟载体在虚拟空间中的实时位置。
S32:根据所述运动轨迹和初始运动数据,更新所述虚拟载体的实时位置数据。
其中,在执行所述根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体时,包括步骤:
S33:根据所述虚拟控制体的实时位置数据、运动数据以及所述虚拟载体的实时位置数据显示所述虚拟载体沿所述运动轨迹运动。
根据步骤S31中获取的虚拟载体的运动轨迹及初始运动数据，确定每一帧虚拟现实图像中虚拟载体的实时位置数据，从而当第一客户端将所述虚拟现实图像发送到头戴设备400的显示屏403进行展示时，用户看到的虚拟现实场景中，所述虚拟载体沿所述运动轨迹运动，所述初始运动数据包括虚拟载体被释放时的瞬时速度、方向及重力加速度。如图3D所示，抛出虚拟语音球302后，虚拟语音球302沿图3D所示的轨迹运动。
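释放后逐帧按重力更新载体位置的过程，可用简单的半隐式欧拉积分示意（Python 示例；坐标系取 z 轴向上、重力加速度 9.8 m/s² 均为假设）：

```python
GRAVITY = (0.0, 0.0, -9.8)  # 重力加速度，z 轴向上

def step(pos, vel, dt):
    """先按重力更新速度，再用新速度推进位置（半隐式欧拉积分）。"""
    new_vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel
```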
在一些实例中,本申请提供的媒体内容发送方法500,如图5所示,进一步包括以下步骤:
S501:当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，如果所述虚拟载体的状态为第二状态，向所述数据内容服务器发送媒体内容请求消息，所述媒体内容请求消息中携带所述虚拟载体的标识，以使数据内容服务器查找所述虚拟载体的标识关联的媒体内容。
在图2所示的实例中，在上述步骤S202中选取虚拟载体之后，虚拟载体的状态为第一状态，即初始状态，在该情况下，接收输入的与虚拟载体关联的媒体数据。在该实例中，当选取虚拟载体之后，虚拟载体的状态为第二状态，即已关联媒体数据的状态，在该情况下，获取与该虚拟载体关联的媒体内容。具体地，第一客户端中的网络同步单元，例如虚幻引擎中的用以网络同步的模块，向数据内容服务器发送媒体内容请求消息，所述媒体内容请求消息中携带所述虚拟载体的标识。数据内容服务器中保存有与虚拟载体的标识相对应的媒体内容，根据虚拟载体的标识查找对应的媒体内容。例如在图3A所示的VR游戏场景中，当如图3A所示选取虚拟语音球后，当虚拟语音球的状态为第二状态时，第一客户端中的网络同步单元向数据内容服务器106请求虚拟语音球携带的语音数据，此外，虚拟语音球也可以携带视频数据等其他媒体数据，所述数据内容服务器106可以为CDN服务器。
S502:接收数据内容服务器响应于所述媒体内容请求消息而发送的所述媒体内容。
在一些实例中,本申请提供的媒体内容传送方法,进一步包括以下步骤:当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,如果所述虚拟载体的状态为第二状态,则根据所述虚拟载体的标识查找与所述标识相关联的媒体内容。
在该实例中，当选取虚拟载体之后，虚拟载体的状态为第二状态，即已关联媒体数据的状态，在该情况下，如果本地已经保存了虚拟载体关联的媒体内容，此时，先在本地查找与所述虚拟载体相关联的媒体内容。例如，当第一客户端之前选取过语音球时，曾向数据内容服务器106请求过虚拟载体关联的媒体内容，该媒体内容会保存在本地缓存中。或者当其他客户端抓取到所述虚拟载体，播放所述虚拟载体关联的媒体内容时，如果为非私密播放，则第一客户端也会根据虚拟载体的标识向数据内容服务器106请求虚拟载体关联的媒体内容并播放，请求的所述媒体内容保存在本地的缓存中。因而当获取虚拟载体关联的媒体内容时，先在本地缓存中进行查找，本地缓存中没有时，再去数据内容服务器请求，能够提高响应速度。
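"先查本地缓存、未命中再请求 CDN 并回填缓存"的策略可示意如下（Python 示例，cdn_fetch 为假设的取数回调，并非 CDN 的实际接口）：

```python
def fetch_media(carrier_id, local_cache, cdn_fetch):
    """按载体标识获取媒体内容：缓存命中直接返回，否则请求 CDN 并写入缓存。"""
    if carrier_id in local_cache:
        return local_cache[carrier_id]       # 本地缓存命中，避免网络请求
    data = cdn_fetch(carrier_id)             # 未命中时才向 CDN 请求
    local_cache[carrier_id] = data           # 回填缓存，提高后续响应速度
    return data
```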
对于图3A所示的VR游戏场景中,获取虚拟语音球关联的媒体内容的流程600,如图6所示,主要包括以下步骤:
步骤S601：第一客户端中的网络同步单元，例如虚幻引擎中的用以网络同步的模块，获取与虚拟语音球关联的语音数据，该语音数据以PCM文件的形式保存。
步骤S602：网络同步单元首先在本地缓存中查找所述PCM文件，当本地缓存中存在所述PCM文件时，执行步骤S603，否则执行步骤S604。
步骤S603:在本地缓存中获取所述PCM文件。
步骤S604:去CDN服务器请求所述PCM文件,具体地,根据虚拟语音球的标识去CDN服务器请求所述PCM文件,CDN服务器根据虚拟语音球的标识获取所述PCM文件,并将所述PCM文件返回给所述第一客户端的网络同步单元。
在一些实例中,本申请提供的媒体内容传送方法,进一步包括以下步骤:当确定抓取所述虚拟载体时,响应于所述控制器发送的第三功能消息,播放所述媒体内容;响应于所述控制器发送的第四功能消息,停止播放所述媒体内容。
在抓取到所述虚拟载体后，由于在抓取之前选取虚拟载体时已经获取了与虚拟载体关联的媒体内容，当用户按下控制器410上的功能键412时，控制器将第三功能消息发送给第一客户端，第一客户端响应于所述第三功能消息，开始播放所述媒体内容；当用户松开控制器410上的功能键412时，控制器将第四功能消息发送给第一客户端，第一客户端响应于所述第四功能消息，停止播放所述媒体内容。
在一些实例中，本申请提供的媒体内容发送方法，其中，所述媒体内容为语音，所述方法进一步包括：在播放所述语音时，将所述语音转换为3D语音进行播放，具体包括以下步骤：
S41:获取所述第一客户端关联的头戴设备的实时位置数据。
获取第一客户端关联的头戴设备的实时位置数据的方式，可以参考前述获取控制器的实时位置数据的方式，即可以采用红外光学定位、图像跟踪定位、以及超声波跟踪定位的方式，在此不再赘述。
S42:根据所述头戴设备的实时位置数据确定虚拟空间中与所述第一客户端关联的第一虚拟角色的头部的实时位置数据。
当在步骤S41中获取到第一客户端关联的头戴设备的实时位置后，采用同样的方式可以获取第一客户端对应的角色在真实空间中的位置数据，例如中心位置数据，从而可以确定头戴设备相对于所述角色的相对位置数据，进而根据该相对位置数据，确定虚拟空间中第一虚拟角色的头部相对于第一虚拟角色的相对位置数据。其中，第一虚拟角色的初始位置数据可以在用户登录第一客户端时由应用服务器105发送，进而可确定第一虚拟角色的头部的实时位置数据。其中，第一虚拟角色的头部与真实空间中的头戴设备相对应。
S43:获取所述第一客户端关联的控制器的实时位置数据,根据所述控制器的实时位置数据确定所述虚拟载体的实时位置数据。
S44:根据所述第一虚拟角色的头部的实时位置数据及所述虚拟载体的实时位置数据，确定所述虚拟载体相对于所述第一虚拟角色的头部的实时距离及实时方向。
在将语音数据转换为3D语音时，需要依据虚拟载体相对于第一虚拟角色的头部的距离及方向，即模拟语音是从虚拟载体发出的语音。例如当所述虚拟载体为虚拟语音球时，模拟所述语音为所述虚拟语音球发出的语音，当所述虚拟语音球距离第一虚拟角色的头部较远时，听到的声音相对小一点；当所述虚拟语音球距离第一虚拟角色的头部较近时，听到的声音相对大一点。当所述虚拟语音球距离第一虚拟角色的左耳较近时，左耳听到的声音相对大一点；当所述虚拟语音球距离第一虚拟角色的右耳较近时，右耳听到的声音相对大一点。
S45:根据所述实时距离及所述实时方向将所述语音转换为具有多维度空间音效的语音。此具有多维度空间音效的语音可以为三维空间音效的语音,简称3D语音。
根据步骤S44中确定的所述虚拟载体相对于所述第一虚拟角色的头部的实时距离及实时方向,将所述语音转换为3D语音。具体地,第一客户端中包括3D语音播放组件,例如虚幻引擎UE4中的播放3D声音的组件(Audio Component),用以将语音转换为3D语音。具体地,将PCM文件中的语音数据每帧填充到Audio Component中,同时将步骤S44中确定的所述实时距离及实时方向输入到Audio Component中,生成左右耳的3D语音数据并播放。其中,头戴设备400中包括播放组件402,例如耳机,将所述3D语音发送给所述耳机,以向用户播放。
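按距离与方位把单声道语音近似渲染为左右耳增益的计算，可示意如下（Python 示例；反比例距离衰减与等功率声像是常见做法，此处的参数与命名均为行文假设，并非 UE4 Audio Component 的实际接口）：

```python
import math

def stereo_gains(distance, azimuth_rad, ref_dist=1.0):
    """距离越远整体音量越小；方位角（正前方为 0，右侧为正）决定左右声道的能量分配。"""
    attenuation = ref_dist / max(distance, ref_dist)  # 反比例距离衰减
    pan = (math.sin(azimuth_rad) + 1.0) / 2.0         # 0 = 全左，1 = 全右
    left = attenuation * math.cos(pan * math.pi / 2.0)
    right = attenuation * math.sin(pan * math.pi / 2.0)
    return left, right
```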
第一客户端,即VR客户端基于UE4引擎开发,使用的是Actor结合组件的开发模式。后续如果有其他的多媒体播放信息,可以通过替换组件的方式,来实现不同多媒体数据的播放。例如如果语音球改成视频球,则可以将Audio Component替换为支持视频播放的组件。
S46:当确定抓取所述虚拟载体时,响应于所述控制器发送的第三功能消息,播放所述具有多维度空间音效的语音;响应于所述控制器发送的第四功能消息,停止播放所述具有多维度空间音效的语音。
将步骤S45中生成的左右耳的3D语音通过与第一客户端关联的播放部件进行播放,例如头戴设备400中的播放组件402进行播放。
在一些实例中,本申请提供的媒体内容发送方法,进一步包括,第一客户端在播放3D语音时,当虚拟载体与第一虚拟角色的头部相距较远时,为非私密播放,此时,其他客户端对应的角色也能听到所述3D语音,具体包括以下步骤:
步骤S51:获取所述第一客户端关联的头戴设备的实时位置数据。
步骤S52:根据所述头戴设备的实时位置数据确定虚拟空间中所述第一虚拟角色的头部的实时位置数据。
步骤S51-S52与步骤S41-S42相同,在此不再赘述。
步骤S53:当所述第一虚拟角色的头部的实时位置数据与所述虚拟载体的实时位置数据满足预设条件时,通知所述应用服务器设置所述虚拟载体的状态为第三状态;通知所述应用服务器将所述虚拟载体的状态发送给一个或多个第二客户端,所述一个或多个第二客户端中各第二客户端对应的虚拟角色与所述第一虚拟角色的空间关系满足预定条件,所述各第二客户端响应于接收到的所述虚拟载体的状态,获取与所述虚拟载体关联的媒体内容并播放。
所述预定条件可以为预设距离阈值，当第一虚拟角色的头部的实时位置与虚拟载体的实时位置之间的距离在所述距离阈值之内时，说明虚拟载体距离第一虚拟角色的头部较近，为私密播放，在私密播放的情况下，只有第一虚拟角色对应的用户才能听到所述3D语音。当第一虚拟角色的头部的实时位置与虚拟载体的实时位置之间的距离超过所述距离阈值时，说明虚拟载体距离第一虚拟角色的头部较远，为公开播放，在公开播放的情况下，其他虚拟角色对应的用户也能听到所述3D语音，或者距离虚拟载体一定距离范围内的其他虚拟角色对应的用户也能听到所述3D语音。当第二客户端能够听到该语音时，第二客户端可以根据虚拟载体的标识去CDN服务器请求与该虚拟载体关联的语音的PCM文件并进行播放。在播放所述PCM文件时，也可以转换为3D语音播放，将PCM文件中的语音数据每帧填充到声音组件中，例如，填充到虚幻引擎UE4中的播放3D声音的组件（Audio Component）中，向所述声音组件中输入虚拟语音球与第二虚拟角色的头部的实时距离及实时方向，从而生成左右耳的3D语音数据并播放。
在一些实例中,在抓取到虚拟载体,并播放了虚拟载体关联的媒体内容后,如图7所示,所述媒体内容发送方法700进一步包括以下步骤:
S701:监听所述控制器的预定手势操作。
当抓取到虚拟载体时,如果该虚拟载体的状态为第二状态,即已经关联过媒体内容,在该情况下,用户可以播放所述虚拟载体关联的媒体内容,同时也可以对该虚拟载体关联的媒体内容进行互动,例如对VR游戏中的虚拟语音球传递的语音进行点赞。在进行所述点赞等互动操作时,用户可以通过操作控制器以实现对虚拟载体携带的所述媒体内容的互动,例如对于VR游戏中的虚拟语音球,当用户对控制器进行“摇一摇”操作时,对虚拟语音球携带的语音进行点赞,第一客户端监听控制器的状态,当监听到所述预定手势操作时,确定用户要对虚拟语音球携带的语音进行点赞。在监听的过程中,第一客户端中的手柄事件驱动模块每帧去轮询手柄的震动状态,根据手柄的震动状态判断是否为手柄的“摇一摇”手势。
S702:当监听到所述控制器的所述预定手势操作时,生成互动信息。
当监听到控制器的所述预定手势操作时,说明用户要对虚拟载体携带的媒体内容进行互动,此时第一客户端生成互动信息,该互动信息中包括与第一客户端关联的用户的标识,虚拟载体的标识。
S703:将该互动信息发送给所述应用服务器,以使应用服务器根据该互动信息更新所述虚拟载体的互动信息,当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在所述虚拟空间中展示所述虚拟载体的互动信息。
第一客户端将在步骤S702中生成的互动信息发送给应用服务器,以使应用服务器根据该互动信息更新虚拟载体的互动信息,该更新过程包括,将与该虚拟载体对应的互动次数加1,将所述第一客户端关联的用户的标识更新为最近对该虚拟载体进行互动的用户标识。例如,对应VR游戏场景来说,当第一客户端生成对虚拟语音球的点赞互动信息后,将该互动信息发送给应用服务器,应用服务器将该虚拟语音球的点赞次数进行加1,并将第一客户端对应的用户的标识更新为对该虚拟语音球最近进行点赞的用户的标识。当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时,例如满足预设距离条件时,应用服务器将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在所述虚拟空间中展示所述虚拟载体的互动信息。如图8所示,在VR游戏场景300中,当第二VR客户端选取到所述虚拟语音球时,展示该语音球的互动信息,该互动信息包括点赞图标802、虚拟语音球的点赞次数803、以及最近对该虚拟语音球进行点赞的用户的标识801。
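应用服务器侧更新互动信息（互动次数加1、记录最近互动用户）的逻辑可示意为（Python 示例，interactions 的数据结构为行文假设）：

```python
def apply_like(interactions, carrier_id, user_id):
    """对指定载体点赞：次数加 1，并把最近点赞用户更新为 user_id。"""
    rec = interactions.setdefault(carrier_id, {"count": 0, "last_user": None})
    rec["count"] += 1
    rec["last_user"] = user_id
    return rec
```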
在一些实例中，在上述步骤S701中，在执行所述监听所述控制器的预定手势操作时，当检测到所述控制器的震动后，在播放第一预设数量的图像帧的过程中监听到所述控制器的震动次数满足第一预设条件，且在之后播放第二预设数量的图像帧的过程中监听到所述控制器的震动次数满足第二预设条件时，确定监听到所述控制器的所述预定手势操作。
在一些实例中,进一步地,当从监听到所述控制器的震动次数满足第一预设条件到监听到所述控制器的震动次数满足第二预设条件之间的时间间隔超过预定时长时,确定监听到所述控制器的所述预定手势操作。
在一些实例中，监听控制器的预定手势操作"摇一摇"的流程900，如图9所示。其中，上述控制器为手柄，上述虚拟载体为所述图3A所示的VR游戏场景中的虚拟语音球，当抓取到虚拟语音球时，可以通过对手柄进行摇一摇操作，对虚拟语音球携带的媒体内容，例如语音，进行点赞互动。如图9所示，该流程主要包括以下步骤：
步骤S901:第一客户端关联的用户操作手柄,当用户要对语音球进行点赞时,操作手柄进行摇一摇,以进行点赞。
步骤S902:查询第i帧的手柄的震动状态。
当第一客户端关联的虚拟手抓取到虚拟语音球时,第一客户端中的VR底层驱动可以在VR场景更新一帧时查询手柄的震动状态,其中,手柄的震动状态由手柄上的传感器进行检测并传递给VR客户端,VR客户端会将传感器的检测结果转换为手柄的震动状态而保存在自身的缓存区,所述VR底层驱动会从VR客户端对应的缓存区查询手柄的震动状态。
步骤S903：当查询到手柄的震动状态（比如震动状态参数为"True"）时，即第i帧手柄的状态为震动时，执行步骤S904，否则，执行步骤S905，将i赋值为i+1，进而返回步骤S902，查询下一帧手柄的状态。
步骤S904:当查询到手柄的震动状态后,即当第i帧手柄的状态为震动时,查询后续的n1个帧的手柄的状态,即查询第i+1帧至i+n1帧手柄的震动状态,与步骤S902中查询手柄的状态方式相同,所述VR底层驱动会从VR客户端对应的缓存区查询手柄的震动状态。VR客户端同时统计在该n1个帧查询到的手柄的震动状态的次数,比如震动状态参数为“True”的次数。在用户刚开始摇动手柄的过程中,不太稳定,用户可能是动了一下手柄,也可能是开始摇动手柄,因而当预定帧内查询到的手柄的震动次数超过预定值时,判定用户开始对手柄进行摇动。
步骤S905:当第i帧手柄的状态不是震动时,将i赋值为i+1,进而返回步骤S902,查询下一帧手柄的状态。
步骤S906:判断所述n1个帧内查询到的手柄的震动状态的次数是否>=n2,其中,n1和n2为预定值。当n1个帧内查询到的手柄的震动次数>=n2时,说明用户开始摇动手柄,在此种情况下,执行步骤S907。当n1个帧内查询到的手柄的震动次数<n2时,说明用户可能动了一下手柄,但并没有对手柄进行摇动,在此种情况下,执行步骤S908。
步骤S907:将i赋值为i+n1,执行步骤S909。
步骤S908:将i赋值为i+n1+1,进而返回步骤S902,查询下一帧手柄的状态。
步骤S909：当在步骤S906中满足n1帧内查询到的手柄为震动的次数>=n2时，说明检测到用户开始摇动手柄，此时执行下面的步骤，根据后续的n3个帧手柄的状态，判断用户是否停止摇动手柄，在用户停止摇动手柄的过程中，在固定个帧内，查询到的手柄为震动的次数会越来越少。在该步骤中，查询第i+1帧至i+n3帧手柄的状态，同时统计该后续的n3个帧内查询到的手柄为震动的次数。
步骤S910:判断i+1帧至i+n3帧内,查询到的手柄为震动的次数是否小于n4,如果小于n4,说明用户开始停止摇动手柄,则执行步骤S911,否则,说明用户并没有停止摇动手柄,此时执行步骤S912。
步骤S911：判断时间间隔是否大于预设时间T，所述时间间隔为步骤S906中手柄检测为震动的次数>=n2的时间点到步骤S910中手柄检测为震动的次数<n4的时间点之间的时间，即手柄开始摇动到停止摇动的时间。当所述时间间隔不超过T时，执行步骤S914，手柄手势判定失败，判定用户并没有对手柄进行"摇一摇"操作，否则，执行步骤S913，判定用户对手柄进行了"摇一摇"手势操作。在该步骤中，只有用户对手柄的摇动超过预定时间时，才判定用户对手柄进行了"摇一摇"操作。
步骤S912：当在步骤S910中，所述n3帧内手柄检测为震动的次数>=n4时，说明用户并没有停止摇动手柄，此时执行步骤S912，将i值赋值为i+n3，后续返回执行步骤S909，查询后续的n3个帧内手柄的震动状态。
步骤S913:判定用户对手柄进行了“摇一摇”操作。当前用户对语音球进行了一次“摇一摇”手势操作后,确定对该语音球进行了点赞互动,后续不再进行手势“摇一摇”的识别。
步骤S914:手势“摇一摇”判定失败,判定用户没有对手柄进行“摇一摇”操作。
步骤S915：当手势判定失败时，继续进行手势"摇一摇"的识别，在该步骤中，将i值赋值为i+n3+1，进而返回步骤S902，继续查询手柄的状态，进行后续的手势"摇一摇"的识别。
在该实例中，这里的帧数n1、n3，以及震动次数阈值n2、n4都可预先设定，可根据经验进行设置。通过该实例，摇一摇手势能够比较稳定地判断出来，从而极大减少误操作的发生。
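图9的判定流程可以用如下 Python 伪实现概括（vibrating 为逐帧震动状态的布尔序列，n1–n4、min_duration 含义同上文；序列结束仍未检测到停止摇动时按失败处理，属实现上的假设）：

```python
def detect_shake(vibrating, n1, n2, n3, n4, frame_time, min_duration):
    """在逐帧震动状态序列上识别一次"摇一摇"手势，识别成功返回 True。"""
    i = 0
    while i < len(vibrating):
        if not vibrating[i]:
            i += 1                      # 步骤S905：当前帧未震动，查下一帧
            continue
        if sum(vibrating[i + 1:i + 1 + n1]) < n2:
            i += n1 + 1                 # 步骤S908：未达到开始摇动的阈值
            continue
        start = i                       # 步骤S906：判定开始摇动
        i += n1
        stopped = False
        while i + n3 < len(vibrating):
            if sum(vibrating[i + 1:i + 1 + n3]) < n4:
                stopped = True          # 步骤S910：震动变少，判定停止摇动
                break
            i += n3                     # 步骤S912：尚未停止，继续查后续帧
        if not stopped:
            return False                # 序列结束仍未停止，视为未完成
        if (i - start) * frame_time > min_duration:
            return True                 # 步骤S913：摇动时长超过预定时长
        i += n3 + 1                     # 步骤S915：判定失败，继续识别
    return False
```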
本申请还提供一种媒体内容发送方法1000，应用于应用服务器105，如图10所示，包括以下步骤：
S1001:向第一客户端发送至少一个虚拟载体的信息及所述第一客户端关联的虚拟控制体的信息，其中，所述至少一个虚拟载体的信息包括：各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据，所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息；以使所述第一客户端根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间，当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联。
该步骤与上述终端侧实例中，应用服务器将虚拟载体的信息及虚拟控制体的信息发送给第一客户端、抓取虚拟载体、将虚拟载体关联媒体内容、将媒体内容发送给数据内容服务器的步骤相同，在此不再赘述。
S1002:接收所述第一客户端发送的通知消息,根据该通知消息将所述虚拟载体的状态设置为第二状态。
当虚拟载体关联媒体内容后，第一客户端向应用服务器发送通知消息，该通知消息用以使应用服务器更新虚拟载体的状态。例如，当虚拟载体为语音球时，当语音球录音后，语音球的状态发生了改变，第一客户端通知应用服务器105，由应用服务器105将该语音球的状态设置为第二状态，第二状态为已录音状态，已录音的虚拟语音球的颜色与初始状态的虚拟语音球的颜色不同，例如，可以为红色。第一客户端与应用服务器105及数据内容服务器106进行通信时，可以通过网络同步单元进行，例如虚幻引擎中用以网络同步的模块。
S1003:将所述虚拟载体的信息发送给第二客户端，以使所述第二客户端从所述数据内容服务器获取与所述虚拟载体的标识相关联的媒体内容。
例如，在上述所述的VR游戏场景中，当语音球被释放，例如抛出后，其他玩家可以抓取到该语音球，并获取语音球上的语音。具体地，应用服务器105会实时将语音球的状态及该语音球的标识更新给第二客户端，当第二客户端对应的玩家选取到该语音球后，当该语音球的状态为第二状态时，根据该语音球的标识，去CDN服务器获取与该语音球关联的媒体内容。从而通过语音球实现媒体内容的传递和交流。
采用本申请提供的媒体内容的发送方法,抓取虚拟载体时,输入媒体内容,将该媒体内容上传给数据内容服务器,使虚拟载体与所述媒体内容相关联;在虚拟空间中释放虚拟载体,使得其他客户端选取到语音球后,向数据内容服务器获取该虚拟载体关联的媒体内容,并可以进行后续的播放。使得虚拟空间中的虚拟载体作为媒体内容的载体,虚拟载体关联的媒体内容随着虚拟载体在虚拟空间中的传递在多个客户端之间进行传递。使得媒体内容的交互具有3D立体沉浸感,同时媒体内容关联虚拟空间中的虚拟载体,提升VR社交的真实感。
在一些实例中,本申请提供的媒体内容发送方法,进一步包括以下步骤:
S61:接收所述第一客户端发送的互动信息及所述虚拟载体的标识,其中所述第一客户端当确定抓取到所述虚拟载体时,且监听到与所述第一客户端关联的控制器的预定手势操作时,生成所述互动信息并发送。
当抓取到虚拟载体时，如果该虚拟载体的状态为第二状态，即已经关联过媒体内容，在该情况下，用户可以播放所述虚拟载体关联的媒体内容，同时也可以对该虚拟载体关联的媒体内容进行互动，例如对VR游戏中的虚拟语音球传递的语音进行点赞。在进行所述点赞等互动操作时，用户可以通过操作控制器以实现对虚拟载体携带的所述媒体内容的互动，例如对于VR游戏中的虚拟语音球，当用户对控制器进行"摇一摇"操作时，对虚拟语音球携带的语音进行点赞，第一客户端监听控制器的状态，当监听到所述预定手势操作时，确定用户要对虚拟语音球携带的语音进行点赞。在监听的过程中，第一客户端中的手柄事件驱动模块每帧去轮询手柄的震动状态，根据手柄的震动状态判断是否为手柄的"摇一摇"手势。
当监听到控制器的所述预定手势操作时,说明用户要对虚拟载体携带的媒体内容进行互动,此时第一客户端生成互动信息,该互动信息中包括与第一客户端关联的用户的标识,虚拟载体的标识。
S62:根据所述互动信息更新所述虚拟载体的互动信息,并将所述虚拟载体的互动信息与所述虚拟载体的标识相关联。
第一客户端将在步骤S61中生成的互动信息发送给应用服务器,以使应用服务器根据该互动信息更新虚拟载体的互动信息,该更新过程包括,将与该虚拟载体对应的互动次数加1,将所述第一客户端关联的用户的标识更新为最近对该虚拟载体进行互动的用户标识。例如,对应VR游戏场景来说,当第一客户端生成对虚拟语音球的点赞互动信息后,将该互动信息发送给应用服务器,应用服务器将该虚拟语音球的点赞次数进行加1,并将第一客户端对应的用户的标识更新为对该虚拟语音球最近进行点赞的用户的标识。
S63:当在所述虚拟空间中第二客户端关联的虚拟对象与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在虚拟空间中展示所述虚拟载体的互动信息。
当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时，例如满足预设距离条件时，应用服务器将所述虚拟载体的互动信息发送给所述第二客户端，以使所述第二客户端，当确定选取所述虚拟载体时，在所述虚拟空间中展示所述虚拟载体的互动信息。如图8所示，当第二VR客户端选取到所述虚拟语音球时，展示该语音球的互动信息，该互动信息包括点赞图标802、虚拟语音球的点赞次数803、以及最近对该虚拟语音球进行点赞的用户的标识801。
当本申请提供的媒体内容发送方法应用到VR游戏场景中时,所述第一客户端为VR游戏客户端,所述应用服务器为VR游戏服务器,所述虚拟载体为虚拟语音球时,虚拟语音球传递语音的消息交互1100,如图11所示,主要包括以下步骤:
步骤S1101:第一客户端向应用服务器发送APP登录请求。
步骤S1102:应用服务器向第一客户端返回虚拟语音球的信息及虚拟空间中虚拟角色的信息,其中虚拟语音球的信息包括虚拟语音球的状态、标识及实时位置数据。其中,语音球的状态包括初始状态、已录音状态及播放状态,虚拟角色的信息包括虚拟角色的实时位置数据。
步骤S1103:第一客户端根据虚拟语音球的实时位置数据及虚拟角色的实时位置数据生成虚拟现实图像。
步骤S1104:将生成的虚拟现实图像发送到头戴设备的显示器显示。
步骤S1105:获取手柄的实时位置数据。
步骤S1106:根据手柄的实时位置数据确定虚拟控制体的实时位置数据,根据虚拟控制体的实时位置数据确定虚拟射线的实时位置数据。
步骤S1107:根据虚拟语音球的实时位置数据与虚拟控制体的实时位置数据可以判断是否选取了虚拟语音球,或者根据虚拟语音球的实时位置数据与虚拟射线的实时位置数据可以判断是否选取了虚拟语音球。
步骤S1108：选取虚拟语音球后，用户点击手柄上的交互键，手柄向第一客户端发送第一交互消息。
步骤S1109:第一客户端接收到第一交互消息后,更新虚拟语音球及虚拟控制体在虚拟空间中的位置数据及运动数据,使得虚拟控制体抓取所述虚拟语音球,其中,所述虚拟控制体对应第一虚拟角色的虚拟手。
在抓取到虚拟语音球后，后面出现3种情况，其中步骤S1110-S1116为虚拟语音球的状态为第一状态，即初始状态时，为虚拟语音球关联语音数据。步骤S1117-S1121为当虚拟语音球的状态为第二状态，即已录音状态时，对语音球进行点赞互动。步骤S1122-S1128为当虚拟语音球的状态为第二状态，即已录音状态时，获取虚拟语音球关联的语音进行播放。下面详细介绍各步骤。
(1)步骤S1110-S1116
步骤S1110:当虚拟语音球的状态为第一状态,即初始状态时,用户按下手柄上的功能键,手柄向第一客户端发送第一功能消息。
步骤S1112：第一客户端响应于所述第一功能消息，第一客户端中的语音实时传输部件，如实时语音传输第三方库（Apollo Voice），实时抓取头戴设备的麦克风录制的语音并转化为PCM数据格式。
步骤S1113：用户松开手柄上的功能键，手柄向第一客户端发送第二功能消息。第一客户端停止抓取语音数据。
步骤S1114:第一客户端将获取的PCM语音数据及虚拟语音球的标识发送给CDN服务器,CDN服务器将所述PCM语音数据与虚拟语音球的标识关联保存。
步骤S1115:虚拟语音球关联语音数据后,虚拟语音球的状态发生变化,第一客户端通知应用服务器更改虚拟语音球的状态。
步骤S1116：应用服务器将虚拟语音球的状态设置为第二状态，并将虚拟语音球的状态发送给第二客户端，可选地，当第二客户端对应的虚拟角色与虚拟语音球的空间关系满足预定条件时，将虚拟语音球的状态发送给所述第二客户端。
(2)S1117-S1121
步骤S1117:当虚拟语音球的状态为第二状态时,第一客户端中手柄事件驱动模块每帧去轮询手柄的震动状态。
步骤S1118:根据手柄的震动状态判断是否为手柄的“摇一摇”手势。
步骤S1119:当确定手柄的“摇一摇”手势时,生成交互信息。
步骤S1120:第一客户端将交互消息发送给应用服务器。
步骤S1121：应用服务器更新虚拟语音球的交互信息，主要包括将语音球的点赞次数加1，并更新最近对语音球进行点赞的用户的标识。应用服务器将虚拟语音球的交互信息发送给第二客户端，可选地，当第二客户端对应的虚拟角色与虚拟语音球的空间关系满足预定条件时，将虚拟语音球的交互信息发送给所述第二客户端。第二客户端在选取该虚拟语音球时，在虚拟空间中展示该虚拟语音球的交互信息。
(3)S1122-S1125
步骤S1122:当选取虚拟语音球时,当虚拟语音球的状态为第二状态时,第一客户端会根据虚拟语音球的标识向CDN服务器请求虚拟语音球关联的语音数据。
步骤S1123:CDN服务器返回虚拟语音球关联的语音数据。
步骤S1124:当第一客户端中虚拟控制体抓取到虚拟语音球时,用户按下手柄上的功能键,手柄向第一客户端发送第一功能消息。
步骤S1125:第一客户端响应于第一功能消息,获取手柄的实时位置数据。
步骤S1126：第一客户端根据手柄的实时位置数据确定虚拟控制体的实时位置数据。第一客户端中的3D语音播放组件，例如，Audio Component，根据虚拟控制体的实时位置数据及虚拟语音球的实时位置数据将语音转换为3D语音并播放。头戴设备中具有与第一客户端关联的耳机，可以将转换后的3D语音传送给头戴设备中的耳机播放。
步骤S1127:当用户松开手柄上的功能键时,手柄向第一客户端发送第二功能消息。
步骤S1128：第一客户端响应于第二功能消息，停止播放3D语音。
本申请还提供了一种媒体内容发送装置1200,应用于第一客户端,如图12所示,所述装置包括:
展示模块1201，用以展示虚拟空间，所述虚拟空间中包括至少一个虚拟载体和所述第一客户端对应的虚拟控制体；
媒体内容发送模块1202，用以当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联，并通知所述应用服务器设置所述虚拟载体的状态为第二状态；
释放模块1203,用以所述虚拟控制体控制所述虚拟载体为释放状态。
在一些实例中,所述展示模块1201,还用以:
接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息,其中,所述至少一个虚拟载体的信息包括:各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据,所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息;
根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间。
在一些实例中,媒体内容发送模块1202,还用以:
获取所述第一客户端关联的控制器的实时位置数据;
接收应用服务器发送的所述虚拟载体的实时位置数据;
当根据所述虚拟载体的实时位置数据和所述控制器的实时位置数据确定选取所述虚拟载体时,响应于所述控制器发送的第一交互消息,更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据,所述虚拟控制体抓取到所述虚拟载体;当所述虚拟载体的状态为第一状态时,获取输入的媒体内容。
在一些实例中,所述释放模块1203,用以:
响应于所述第一客户端关联的控制器发送的第二交互消息,更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据;
根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体。
在一些实例中,所述虚拟空间还包含所述第一客户端对应的第一虚拟角色;所述展示模块1201,还用以:
当所述第一虚拟角色与所述至少一个虚拟载体中各虚拟载体的空间关系满足预定条件时,所述应用服务器向所述第一客户端发送所述至少一个虚拟载体的信息。
在一些实例中,所述媒体内容发送模块1202,还用以:
当根据所述虚拟载体的实时位置数据及所述控制器的实时位置数据确定所述虚拟控制体与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
在一些实例中，所述虚拟空间还包括与所述虚拟控制体关联的虚拟射线，所述虚拟射线是从所述虚拟控制体发出的；
所述媒体内容发送模块1202,还用以:
根据所述控制器的所述实时位置数据确定所述虚拟射线在所述虚拟空间中的实时位置数据;
其中,当根据所述虚拟射线的所述实时位置数据及所述虚拟控制体的所述实时位置数据确定所述虚拟射线与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
在一些实例中,所述媒体内容发送模块1202,还用以:
响应于来自所述第一客户端关联的控制器发送的第一功能消息,通过自身关联的数据采集装置开始接收外部输入的媒体数据;
响应于来自所述控制器发送的第二功能消息,停止接收媒体数据;
根据接收到的媒体数据生成媒体内容。
在一些实例中,所述释放模块1203,还用以:
响应于所述第二交互消息,根据所述控制器的实时位置数据,确定所述虚拟载体的运动轨迹和初始运动数据,并更新所述虚拟控制体的实时位置数据和运动数据;
根据所述运动轨迹和初始运动数据,更新所述虚拟载体的实时位置数据;
所述释放模块1203,还用以:
根据所述虚拟控制体的实时位置数据、运动数据以及所述虚拟载体的实时位置数据显示所述虚拟载体沿所述运动轨迹运动。
在一些实例中,所述装置还包括媒体内容获取模块1204,用以:
当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，如果所述虚拟载体的状态为第二状态，向所述数据内容服务器发送媒体内容请求消息，所述媒体内容请求消息中携带所述虚拟载体的标识，以使数据内容服务器查找所述虚拟载体的标识关联的媒体内容；
接收数据内容服务器响应于所述媒体内容请求消息而发送的所述媒体内容。
在一些实例中,所述媒体内容获取模块1204,还用以:
当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,如果所述虚拟载体的状态为第二状态,则根据所述虚拟载体的标识查找与所述标识相关联的媒体内容。
在一些实例中,所述装置还包括播放模块1205,用以:
当确定抓取所述虚拟载体时,响应于所述控制器发送的第三功能消息,播放所述媒体内容;响应于所述控制器发送的第四功能消息,停止播放所述媒体内容。
在一些实例中,所述媒体内容包括语音,所述播放模块1205,还用以:
获取所述第一客户端关联的头戴设备的实时位置数据;
根据所述头戴设备的实时位置数据确定虚拟空间中与所述第一客户端关联的第一虚拟角色的头部的实时位置数据;
获取所述第一客户端关联的控制器的实时位置数据,根据所述控制器的实时位置数据确定所述虚拟载体的实时位置数据;根据所述第一虚拟角色的头部的实时位置数据及所述虚拟载体的实时位置数据,确定所述虚拟载体相对于所述第一虚拟角色的头部的实时距离及实时方向;
根据所述实时距离及所述实时方向将所述语音转换为具有多维度空间音效的语音;
响应于所述控制器发送的第三功能消息,播放所述具有多维度空间音效的语音;响应于所述控制器发送的第四功能消息,停止播放所述具有多维度空间音效的语音。
在一些实例中,所述播放模块1205,还用以:
获取所述第一客户端关联的头戴设备的实时位置数据,
根据所述头戴设备的实时位置数据确定虚拟空间中所述第一虚拟角色的头部的实时位置数据;
当所述第一虚拟角色的头部的实时位置数据与所述虚拟载体的实时位置数据满足预设条件时,通知所述应用服务器设置所述虚拟载体的状态为第三状态;以使当所述应用服务器将所述虚拟载体的状态发送给所述第二客户端时,所述第二客户端当所述虚拟载体的状态为所述第三状态时,获取与所述虚拟载体关联的媒体内容并播放。
在一些实例中,所述装置还包括互动模块1206,用以:
监听所述控制器的预定手势操作;
当监听到所述控制器的所述预定手势操作时,生成互动信息;
将该互动信息发送给所述应用服务器,以使应用服务器根据该互动信息更新所述虚拟载体的互动信息,当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在所述虚拟空间中展示所述虚拟载体的互动信息。
在一些实例中,所述互动模块1206,还用以:
当检测到所述控制器的震动后,在播放第一预设数量的图像帧的过程中监听到所述控制器的震动次数满足第一预设条件,且在之后播放第二预设数量的图像帧的过程中监听到所述控制器的震动次数满足第二预设条件时,确定监听到所述控制器的所述预定手势操作。
在一些实例中,所述互动模块1206,还用以:
当从监听到所述控制器的震动次数满足第一预设条件到监听到所述控制器的震动次数满足第二预设条件之间的时间间隔超过预定时长时，确定监听到所述控制器的所述预定手势操作。
本申请还提供了一种媒体内容发送装置1300,应用于应用服务器,所述装置包括:
第一信息发送模块1301，用以向第一客户端发送至少一个虚拟载体的信息及所述第一客户端关联的虚拟控制体的信息，其中，所述至少一个虚拟载体的信息包括：各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据，所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息；以使所述第一客户端根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间，当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联；
消息接收模块1302,用以接收所述第一客户端发送的通知消息,根据该通知消息将所述虚拟载体的状态设置为第二状态;
第二信息发送模块1303，用以将所述虚拟载体的信息发送给第二客户端，以使所述第二客户端从所述数据内容服务器获取与所述虚拟载体的标识相关联的媒体内容。
在一些实例中,所述装置还包括:
互动信息接收模块1304,用以接收所述第一客户端发送的互动信息及所述虚拟载体的标识,其中所述第一客户端当确定抓取到所述虚拟载体时,且监听到与所述第一客户端关联的控制器的预定手势操作时,生成所述互动信息;
互动信息更新模块1305，用以根据所述互动信息更新所述虚拟载体的互动信息，并将所述虚拟载体的互动信息与所述虚拟载体的标识相关联；
互动信息发送模块1306,用以当在所述虚拟空间中第二客户端关联的虚拟对象与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在虚拟空间中展示所述虚拟载体的互动信息。
本申请实例还提供了一种计算机可读存储介质，存储有计算机可读指令，可以使至少一个处理器执行如上述应用于第一客户端的方法。
本申请实例还提供了一种计算机可读存储介质，存储有计算机可读指令，可以使至少一个处理器执行如上述应用于应用服务器的方法。
图14示出了媒体内容发送装置1200及媒体内容发送装置1300所在的计算设备的组成结构图。如图14所示,该计算设备包括一个或者多个处理器(CPU)1402、通信模块1404、存储器1406、用户接口1410,以及用于互联这些组件的通信总线1408。
处理器1402可通过通信模块1404接收和发送数据以实现网络通信和/或本地通信。
用户接口1410包括一个或多个输出设备1412，其包括一个或多个扬声器和/或一个或多个可视化显示器。用户接口1410也包括一个或多个输入设备1414，其包括诸如键盘、鼠标、声音命令输入单元或扩音器、触屏显示器、触敏输入板、姿势捕获摄像机或其他输入按钮或控件等。
存储器1406可以是高速随机存取存储器，诸如DRAM、SRAM、DDR RAM、或其他随机存取固态存储设备；或者非易失性存储器，诸如一个或多个磁盘存储设备、光盘存储设备、闪存设备，或其他非易失性固态存储设备。
存储器1406存储处理器1402可执行的指令集,包括:
操作系统1416，包括用于处理各种基本系统服务和用于执行硬件相关任务的程序；
应用1418,包括用于媒体内容发送的各种应用程序,这种应用程序能够实现上述各实例中的处理流程,比如可以包括媒体内容发送装置1200或媒体内容发送装置1300中的部分或全部单元或者模块。媒体内容发送装置1200或媒体内容发送装置1300中的各单元中的至少一个单元可以存储有机器可执行指令。处理器1402通过执行存储器1406中各单元中至少一个单元中的机器可执行指令,进而能够实现上述各单元或模块中的至少一个模块的功能。
需要说明的是,上述各流程和各结构图中不是所有的步骤和模块都是必须的,可以根据实际的需要忽略某些步骤或模块。各步骤的执行顺序不是固定的,可以根据需要进行调整。各模块的划分仅仅是为了便于描述采用的功能上的划分,实际实现时,一个模块可以分由多个模块实现,多个模块的功能也可以由同一个模块实现,这些模块可以位于同一个设备中,也可以位于不同的设备中。
各实施例中的硬件模块可以以硬件方式或硬件平台加软件的方式实现。上述软件包括机器可读指令,存储在非易失性存储介质中。因此,各实施例也可以体现为软件产品。
各例中,硬件可以由专门的硬件或执行机器可读指令的硬件实现。例如,硬件可以为专门设计的永久性电路或逻辑器件(如专用处理器,如FPGA或ASIC)用于完成特定的操作。硬件也可以包括由软件临时配置的可编程逻辑器件或电路(如包括通用处理器或其它可编程处理器) 用于执行特定操作。
另外，本申请的每个实例可以通过由数据处理设备如计算机执行的数据处理程序来实现。显然，数据处理程序构成了本申请。此外，通常存储在一个存储介质中的数据处理程序通过直接从存储介质读取程序、或者通过将程序安装或复制到数据处理设备的存储设备（如硬盘和/或内存）中执行。因此，这样的存储介质也构成了本申请，本申请还提供了一种非易失性存储介质，其中存储有数据处理程序，这种数据处理程序可用于执行本申请上述方法实例中的任何一种实例。
图14模块对应的机器可读指令可以使计算机上操作的操作系统等来完成这里描述的部分或者全部操作。非易失性计算机可读存储介质可以是插入计算机内的扩展板中所设置的存储器，或者写到与计算机相连接的扩展单元中设置的存储器。安装在扩展板或者扩展单元上的CPU等可以根据指令执行部分和全部实际操作。
以上所述仅为本发明的较佳实施例而已,并不用以限制本发明,凡在本发明的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本发明保护的范围之内。

Claims (40)

  1. 一种媒体内容发送方法,应用于第一客户端,所述方法包括:
    展示虚拟空间,所述虚拟空间中包括至少一个虚拟载体和所述第一客户端对应的虚拟控制体;
    当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,当所述虚拟载体的状态为第一状态时,获取输入的媒体内容,将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器,以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联,并通知所述应用服务器设置所述虚拟载体的状态为第二状态;
    所述虚拟控制体控制所述虚拟载体为释放状态。
  2. 根据权利要求1所述的方法,其中,所述展示虚拟空间包括:
    接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息,其中,所述至少一个虚拟载体的信息包括:各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据,所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息;
    根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间。
  3. 根据权利要求1所述的方法,其中,所述当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,当所述虚拟载体的状态为第一状态时,获取输入的媒体内容包括:
    获取所述第一客户端关联的控制器的实时位置数据;
    接收应用服务器发送的所述虚拟载体的实时位置数据;
    当根据所述虚拟载体的实时位置数据和所述控制器的实时位置数据确定选取所述虚拟载体时，响应于所述控制器发送的第一交互消息，更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据，所述虚拟控制体抓取到所述虚拟载体；
    当所述虚拟载体的状态为第一状态时,获取输入的媒体内容。
  4. 根据权利要求1所述的方法,其中,所述虚拟控制体控制所述虚拟载体为释放状态包括:
    响应于所述第一客户端关联的控制器发送的第二交互消息,更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据;
    根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体。
  5. 根据权利要求2所述的方法,其中,所述虚拟空间还包含所述第一客户端对应的第一虚拟角色;
    其中,所述接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息包括:当所述第一虚拟角色与所述至少一个虚拟载体中各虚拟载体的空间关系满足预定条件时,所述应用服务器向所述第一客户端发送所述至少一个虚拟载体的信息。
  6. 根据权利要求3所述的方法,其中,当根据所述虚拟载体的实时位置数据及所述控制器的实时位置数据确定所述虚拟控制体与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
  7. 根据权利要求3所述的方法,其中,所述虚拟空间还包括与所述虚拟控制体关联的虚拟射线,所述虚拟射线是从所述虚拟控制体发出的;
    所述方法进一步包括:
    根据所述控制器的所述实时位置数据确定所述虚拟射线在所述虚拟空间中的实时位置数据;
    其中,当根据所述虚拟射线的所述实时位置数据及所述虚拟控制体的所述实时位置数据确定所述虚拟射线与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
  8. 根据权利要求1所述的方法,其中,所述获取输入的媒体内容包括:
    响应于来自所述第一客户端关联的控制器发送的第一功能消息,通过自身关联的数据采集装置开始接收外部输入的媒体数据;
    响应于来自所述控制器发送的第二功能消息,停止接收媒体数据;
    根据接收到的媒体数据生成媒体内容。
  9. 根据权利要求4所述的方法，其中，所述响应于所述第一客户端关联的控制器发送的第二交互消息，更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据包括：
    响应于所述第二交互消息,根据所述控制器的实时位置数据,确定所述虚拟载体的运动轨迹和初始运动数据,并更新所述虚拟控制体的实时位置数据和运动数据;
    根据所述运动轨迹和初始运动数据,更新所述虚拟载体的实时位置数据;
    其中,所述根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体包括:
    根据所述虚拟控制体的实时位置数据、运动数据以及所述虚拟载体的实时位置数据显示所述虚拟载体沿所述运动轨迹运动。
  10. 根据权利要求1所述的方法,进一步包括:
    当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，如果所述虚拟载体的状态为第二状态，向所述数据内容服务器发送媒体内容请求消息，所述媒体内容请求消息中携带所述虚拟载体的标识，以使数据内容服务器查找所述虚拟载体的标识关联的媒体内容；
    接收数据内容服务器响应于所述媒体内容请求消息而发送的所述媒体内容。
  11. 根据权利要求1所述的方法,进一步包括:
    当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,如果所述虚拟载体的状态为第二状态,则根据所述虚拟载体的标识查找与所述标识相关联的媒体内容。
  12. 根据权利要求10或11所述的方法,进一步包括:
    当确定抓取所述虚拟载体时,响应于所述控制器发送的第三功能消息,播放所述媒体内容;响应于所述控制器发送的第四功能消息,停止播放所述媒体内容。
  13. 根据权利要求10或11所述的方法,其中,所述媒体内容包括语音,所述方法进一步包括:
    获取所述第一客户端关联的头戴设备的实时位置数据;
    根据所述头戴设备的实时位置数据确定虚拟空间中与所述第一客户端关联的第一虚拟角色的头部的实时位置数据;
    获取所述第一客户端关联的控制器的实时位置数据,根据所述控制器的实时位置数据确定所述虚拟载体的实时位置数据;
    根据所述第一虚拟角色的头部的实时位置数据及所述虚拟载体的实时位置数据,确定所述虚拟载体相对于所述第一虚拟角色的头部的实时距离及实时方向;
    根据所述实时距离及所述实时方向将所述语音转换为具有多维度空间音效的语音;
    响应于所述控制器发送的第三功能消息，播放所述具有多维度空间音效的语音；响应于所述控制器发送的第四功能消息，停止播放所述具有多维度空间音效的语音。
  14. 根据权利要求12或13所述的方法,进一步包括:
    获取所述第一客户端关联的头戴设备的实时位置数据,
    根据所述头戴设备的实时位置数据确定虚拟空间中所述第一虚拟角色的头部的实时位置数据;
    当所述第一虚拟角色的头部的实时位置数据与所述虚拟载体的实时位置数据满足预设条件时,通知所述应用服务器设置所述虚拟载体的状态为第三状态,通知所述应用服务器将所述虚拟载体的状态发送给一个或多个第二客户端,所述一个或多个第二客户端中各第二客户端对应的虚拟角色与所述第一虚拟角色的空间关系满足预定条件,所述各第二客户端响应于接收到的所述虚拟载体的状态,获取与所述虚拟载体关联的媒体内容并播放。
  15. 根据权利要求12-14任一项所述的方法,其中,所述方法进一步包括:
    监听所述控制器的预定手势操作;
    当监听到所述控制器的所述预定手势操作时,生成互动信息;
    将该互动信息发送给所述应用服务器,以使应用服务器根据该互动信息更新所述虚拟载体的互动信息,当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在所述虚拟空间中展示所述虚拟载体的互动信息。
  16. 根据权利要求15所述的方法,其中,
    当检测到所述控制器的震动后，在播放第一预设数量的图像帧的过程中监听到所述控制器的震动次数满足第一预设条件，且在之后播放第二预设数量的图像帧的过程中监听到所述控制器的震动次数满足第二预设条件时，确定监听到所述控制器的所述预定手势操作。
  17. 根据权利要求15所述的方法,进一步包括:
    当从监听到所述控制器的震动次数满足第一预设条件到监听到所述控制器的震动次数满足第二预设条件之间的时间间隔超过预定时长时,确定监听到所述控制器的所述预定手势操作。
  18. 一种媒体内容发送方法,应用于应用服务器,所述方法包括:
    向第一客户端发送至少一个虚拟载体的信息及所述第一客户端关联的虚拟控制体的信息，其中，所述至少一个虚拟载体的信息包括：各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据，所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息；以使所述第一客户端根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间，当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联；
    接收所述第一客户端发送的通知消息,根据该通知消息将所述虚拟载体的状态设置为第二状态;
    将所述虚拟载体的信息发送给第二客户端,以使所述第二客户端,从所述数据内容服务器获取与所述虚拟载体的标识相关联的媒体内容。
  19. 根据权利要求18所述的方法,进一步包括:
    接收所述第一客户端发送的互动信息及所述虚拟载体的标识,其中所述第一客户端当确定抓取到所述虚拟载体时,且监听到与所述第一客户端关联的控制器的预定手势操作时,生成所述互动信息;
    根据所述互动信息更新所述虚拟载体的互动信息，并将所述虚拟载体的互动信息与所述虚拟载体的标识相关联；
    当在所述虚拟空间中第二客户端关联的虚拟对象与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在虚拟空间中展示所述虚拟载体的互动信息。
  20. 一种媒体内容发送装置,所述装置包括:
    展示模块,展示虚拟空间,所述虚拟空间中包括至少一个虚拟载体和所述第一客户端对应的虚拟控制体;
    媒体内容发送模块，用以当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联，并通知所述应用服务器设置所述虚拟载体的状态为第二状态；
    释放模块,用以所述虚拟控制体控制所述虚拟载体为释放状态。
  21. 根据权利要求20所述的装置,其中,所述展示模块,还用以:
    接收应用服务器发送的所述至少一个虚拟载体的信息及所述虚拟控制体的信息,其中,所述至少一个虚拟载体的信息包括:各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据,所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息;
    根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间。
  22. 根据权利要求20所述的装置,其中,媒体内容发送模块,用以:
    获取所述第一客户端关联的控制器的实时位置数据;
    接收应用服务器发送的所述虚拟载体的实时位置数据;
    当根据所述虚拟载体的实时位置数据和所述控制器的实时位置数据确定选取所述虚拟载体时,响应于所述控制器发送的第一交互消息,更新所述虚拟控制体及所述虚拟载体在所述虚拟空间中的位置数据和/或运动数据,所述虚拟控制体抓取到所述虚拟载体;当所述虚拟载体的状态为第一状态时,获取输入的媒体内容。
  23. 根据权利要求20所述的装置,其中,所述释放模块,用以:
    响应于所述第一客户端关联的控制器发送的第二交互消息,更新所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据;
    根据所述虚拟载体及所述虚拟控制体在所述虚拟空间中的位置数据和运动数据显示所述虚拟控制体释放所述虚拟载体。
  24. 根据权利要求21所述的装置,其中,所述虚拟空间还包含所述第一客户端对应的第一虚拟角色;所述展示模块,还用以:
    当所述第一虚拟角色与所述至少一个虚拟载体中各虚拟载体的空间关系满足预定条件时,所述应用服务器向所述第一客户端发送所述至少一个虚拟载体的信息。
  25. 根据权利要求22所述的装置,其中,所述媒体内容发送模块,还用以:
    当根据所述虚拟载体的实时位置数据及所述控制器的实时位置数据确定所述虚拟控制体与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
  26. 根据权利要求22所述的装置,其中,所述虚拟空间还包括与所述虚拟控制体关联的虚拟射线,所述虚拟射线是从所述虚拟控制体发出的;
    所述媒体内容发送模块,还用以:
    根据所述控制器的所述实时位置数据确定所述虚拟射线在所述虚拟空间中的实时位置数据;
    其中,当根据所述虚拟射线的所述实时位置数据及所述虚拟控制体的所述实时位置数据确定所述虚拟射线与所述虚拟载体发生碰撞时,确定选取所述虚拟载体。
  27. 根据权利要求20所述的装置,其中,所述媒体内容发送模块,还用以:
    响应于来自所述第一客户端关联的控制器发送的第一功能消息,通过自身关联的数据采集装置开始接收外部输入的媒体数据;
    响应于来自所述控制器发送的第二功能消息,停止接收媒体数据;
    根据接收到的媒体数据生成媒体内容。
  28. 根据权利要求23所述的装置,其中,所述释放模块,还用以:
    响应于所述第二交互消息,根据所述控制器的实时位置数据,确定所述虚拟载体的运动轨迹和初始运动数据,并更新所述虚拟控制体的实时位置数据和运动数据;
    根据所述运动轨迹和初始运动数据,更新所述虚拟载体的实时位置数据;
    所述释放模块,还用以:
    根据所述虚拟控制体的实时位置数据、运动数据以及所述虚拟载体的实时位置数据显示所述虚拟载体沿所述运动轨迹运动。
  29. 根据权利要求20所述的装置,其中,所述装置还包括媒体内容获取模块,用以:
    当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，如果所述虚拟载体的状态为第二状态，向所述数据内容服务器发送媒体内容请求消息，所述媒体内容请求消息中携带所述虚拟载体的标识，以使数据内容服务器查找所述虚拟载体的标识关联的媒体内容；
    接收数据内容服务器响应于所述媒体内容请求消息而发送的所述媒体内容。
  30. 根据权利要求29所述的装置，其中，所述媒体内容获取模块，还用以：
    当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时,如果所述虚拟载体的状态为第二状态,则根据所述虚拟载体的标识查找与所述标识相关联的媒体内容。
  31. 根据权利要求29或30所述的装置,其中,所述装置还包括播放模块,用以:
    当确定抓取所述虚拟载体时,响应于所述控制器发送的第三功能消息,播放所述媒体内容;响应于所述控制器发送的第四功能消息,停止播放所述媒体内容。
  32. 根据权利要求29或30所述的装置,其中,所述媒体内容包括语音,所述播放模块,还用以:
    获取所述第一客户端关联的头戴设备的实时位置数据;
    根据所述头戴设备的实时位置数据确定虚拟空间中与所述第一客户端关联的第一虚拟角色的头部的实时位置数据;
    获取所述第一客户端关联的控制器的实时位置数据,根据所述控制器的实时位置数据确定所述虚拟载体的实时位置数据;根据所述第一虚拟角色的头部的实时位置数据及所述虚拟载体的实时位置数据,确定所述虚拟载体相对于所述第一虚拟角色的头部的实时距离及实时方向;
    根据所述实时距离及所述实时方向将所述语音转换为具有多维度空间音效的语音;
    响应于所述控制器发送的第三功能消息，播放所述具有多维度空间音效的语音；响应于所述控制器发送的第四功能消息，停止播放所述具有多维度空间音效的语音。
  33. 根据权利要求31或32所述的装置,其中,所述播放模块,还用以:
    获取所述第一客户端关联的头戴设备的实时位置数据,
    根据所述头戴设备的实时位置数据确定虚拟空间中所述第一虚拟角色的头部的实时位置数据;
    当所述第一虚拟角色的头部的实时位置数据与所述虚拟载体的实时位置数据满足预设条件时,通知所述应用服务器设置所述虚拟载体的状态为第三状态;以使当所述应用服务器将所述虚拟载体的状态发送给所述第二客户端时,所述第二客户端当所述虚拟载体的状态为所述第三状态时,获取与所述虚拟载体关联的媒体内容并播放。
  34. 根据权利要求31-33任一项所述的装置，其中，所述装置还包括互动模块，用以：
    监听所述控制器的预定手势操作;
    当监听到所述控制器的所述预定手势操作时,生成互动信息;
    将该互动信息发送给所述应用服务器,以使应用服务器根据该互动信息更新所述虚拟载体的互动信息,当在所述虚拟空间中第二客户端对应的第二虚拟角色与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在所述虚拟空间中展示所述虚拟载体的互动信息。
  35. 根据权利要求34所述的装置,其中,所述互动模块,还用以:
    当检测到所述控制器的震动后，在播放第一预设数量的图像帧的过程中监听到所述控制器的震动次数满足第一预设条件，且在之后播放第二预设数量的图像帧的过程中监听到所述控制器的震动次数满足第二预设条件时，确定监听到所述控制器的所述预定手势操作。
  36. 根据权利要求34所述的装置,所述互动模块,还用以:
    当从监听到所述控制器的震动次数满足第一预设条件到监听到所述控制器的震动次数满足第二预设条件之间的时间间隔超过预定时长时,确定监听到所述控制器的所述预定手势操作。
  37. 一种媒体内容发送装置,应用于应用服务器,所述装置包括:
    第一信息发送模块，用以向第一客户端发送至少一个虚拟载体的信息及所述第一客户端关联的虚拟控制体的信息，其中，所述至少一个虚拟载体的信息包括：各虚拟载体的标识、状态及在所述虚拟空间中的实时位置数据，所述虚拟控制体的信息包括所述虚拟控制体的初始位置信息；以使所述第一客户端根据所述至少一个虚拟载体的信息及所述虚拟控制体的信息展示所述虚拟空间，当所述虚拟控制体抓取所述至少一个虚拟载体中一虚拟载体时，当所述虚拟载体的状态为第一状态时，获取输入的媒体内容，将所述媒体内容及所述虚拟载体的标识发送给数据内容服务器，以使所述数据内容服务器将所述媒体内容与所述虚拟载体的标识相关联；
    消息接收模块,用以接收所述第一客户端发送的通知消息,根据该通知消息将所述虚拟载体的状态设置为第二状态;
    第二信息发送模块,用以将所述虚拟载体的信息发送给第二客户端,以使所述第二客户端,从所述数据内容服务器获取与所述虚拟载体的标识相关联的媒体内容。
  38. 根据权利要求37所述的装置,其中,所述装置还包括:
    互动信息接收模块，用以接收所述第一客户端发送的互动信息及所述虚拟载体的标识，其中所述第一客户端当确定抓取到所述虚拟载体时，且监听到与所述第一客户端关联的控制器的预定手势操作时，生成所述互动信息；
    互动信息更新模块,用以根据所述互动信息更新所述虚拟载体的互动信息,并将所述虚拟载体的互动信息与所述虚拟载体的标识相关联;
    互动信息发送模块,用以当在所述虚拟空间中第二客户端关联的虚拟对象与所述虚拟载体的空间关系满足预设条件时,将所述虚拟载体的互动信息发送给所述第二客户端,以使所述第二客户端,当确定选取所述虚拟载体时,在虚拟空间中展示所述虚拟载体的互动信息。
  39. A computer-readable storage medium storing computer-readable instructions which cause at least one processor to perform the method according to any one of claims 1 to 17.
  40. A computer-readable storage medium storing computer-readable instructions which cause at least one processor to perform the method according to any one of claims 18 to 19.
PCT/CN2018/074074 2018-01-25 2018-01-25 Media content sending method, apparatus and storage medium WO2019144330A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/074074 WO2019144330A1 (zh) 2018-01-25 2018-01-25 Media content sending method, apparatus and storage medium
CN201880003415.0A CN110431513B (zh) 2018-01-25 2018-01-25 Media content sending method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/074074 WO2019144330A1 (zh) 2018-01-25 2018-01-25 Media content sending method, apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2019144330A1 true WO2019144330A1 (zh) 2019-08-01

Family

ID=67395811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/074074 WO2019144330A1 (zh) 2018-01-25 2018-01-25 媒体内容发送方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN110431513B (zh)
WO (1) WO2019144330A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784850B (zh) * 2020-07-03 2024-02-02 深圳市瑞立视多媒体科技有限公司 基于虚幻引擎的物体抓取仿真方法及相关设备

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106658146A * 2016-12-28 2017-05-10 上海翌创网络科技股份有限公司 Virtual reality-based bullet screen method
CN107132917A * 2017-04-25 2017-09-05 腾讯科技(深圳)有限公司 Hand shape display method and apparatus for a virtual reality scene
WO2017210035A1 (en) * 2016-06-02 2017-12-07 Microsoft Technology Licensing, Llc Automatic audio attenuation on immersive display devices
CN107562201A * 2017-09-08 2018-01-09 网易(杭州)网络有限公司 Directional interaction method and apparatus, electronic device and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US10062208B2 (en) * 2015-04-09 2018-08-28 Cinemoi North America, LLC Systems and methods to provide interactive virtual environments
US10331312B2 (en) * 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
KR20170082028A (ko) * 2016-01-05 2017-07-13 주식회사 비블톡 림 모션 장치


Also Published As

Publication number Publication date
CN110431513B (zh) 2020-11-27
CN110431513A (zh) 2019-11-08


Legal Events

Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18902855; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18902855; Country of ref document: EP; Kind code of ref document: A1)