WO2022242380A1 - Method, apparatus, device, and storage medium for interaction - Google Patents


Info

Publication number
WO2022242380A1
WO2022242380A1 (PCT/CN2022/086898)
Authority
WO
WIPO (PCT)
Prior art keywords
interactive
virtual model
interactive object
message
rendering
Prior art date
Application number
PCT/CN2022/086898
Other languages
English (en)
French (fr)
Inventor
孙林
张子隆
苏丽伟
Original Assignee
Shanghai SenseTime Intelligent Technology Co., Ltd. (上海商汤智能科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai SenseTime Intelligent Technology Co., Ltd. (上海商汤智能科技有限公司)
Publication of WO2022242380A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to methods, apparatuses, devices, and storage media for interaction.
  • the embodiment of the present disclosure provides an interaction solution.
  • An interaction method applied to a terminal device is provided, where the terminal device includes a target application, and the method includes: in response to receiving an operation instruction for an interactive object in the target application, acquiring an interactive animation of the interactive object according to the operation instruction; and playing the interactive animation of the interactive object in a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
  • the embodiments of the present disclosure can add interactive virtual objects to the target application without changing the platform and framework of the target application, reducing the business cost and technical threshold required for virtual object integration.
  • The terminal device is installed with a software development kit (SDK). Acquiring the interactive animation of the interactive object according to the operation instruction includes: in response to receiving the operation instruction for the interactive object in the target application, the target application sends a first message to the SDK, so that the SDK acquires the interactive animation of the interactive object according to the first message, where the first message includes indication information of the virtual model of the interactive object and/or interaction information for controlling the posture of the interactive object; and the target application acquires the interactive animation of the interactive object from the SDK.
  • By installing, for an existing application program (APP) in the terminal device, an SDK that has an interactive interface and can interact with the existing APP, the existing APP can be given the function of interacting with virtual objects without changing the platform and framework of the existing APP.
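The APP-to-SDK handshake described above can be sketched as follows. This is a minimal illustration only: `FirstMessage`, `InteractionSDK`, and all field names are hypothetical stand-ins chosen for clarity, not an actual SDK API from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstMessage:
    """Message sent from the target APP to the SDK (field names are illustrative)."""
    model_indication: Optional[str] = None   # which virtual model to use
    interaction_info: Optional[dict] = None  # posture-control information

class InteractionSDK:
    """Minimal stand-in for the SDK reached through the interactive interface."""
    def __init__(self, models):
        self.models = models  # available virtual models, keyed by indication

    def get_interactive_animation(self, msg: FirstMessage) -> str:
        # Determine the target virtual model from the indication info,
        # then (conceptually) render it into an interactive animation.
        model = self.models.get(msg.model_indication, "default-model")
        return f"animation-of-{model}"

sdk = InteractionSDK({"m1": "cartoon-cat", "m2": "virtual-host"})
animation = sdk.get_interactive_animation(FirstMessage(model_indication="m2"))
print(animation)  # animation-of-virtual-host
```

The target application only needs to construct the first message and hand it across the interactive interface; where the rendering happens (terminal or server) is hidden behind the SDK call.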
  • The SDK includes a rendering engine and at least one virtual model. The SDK acquiring the interactive animation of the interactive object according to the first message includes: the SDK determines, according to the first message, a target virtual model from the at least one virtual model as the interactive object; and the SDK renders the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
  • By setting the rendering engine and the virtual model on the terminal device, the rendering of the virtual model is completed in the terminal device, thereby improving the rendering effect and quality of the interactive object.
  • The SDK acquiring the interactive animation of the interactive object according to the first message further includes: the SDK sends the first message to a server, where the server includes a rendering engine and at least one virtual model; and the SDK receives from the server a second message in response to the first message, the second message including the interactive animation of the interactive object, where the interactive animation is obtained by rendering a target virtual model with the rendering engine, and the target virtual model is determined from the at least one virtual model as the interactive object according to the first message.
  • When the operation instruction includes a startup instruction, the first message further includes instance request information. The SDK receiving from the server the second message in response to the first message includes: the SDK receives rendering instance information returned by the server, where the rendering instance information is used to describe a rendering instance; and the SDK acquires encoded audio and video frames from the server according to the rendering instance information, and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object. The encoded audio and video frames are obtained by encoding the audio and video frames generated by the server rendering the target virtual model with the rendering engine.
  • The rendering engine and the virtual model are set on the server, and the encoded audio and video frames are acquired from the server according to the rendering instance information and decoded to obtain the interactive animation of the interactive object, which reduces the performance requirements on the terminal device and the occupation of terminal device resources.
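The instance-request round trip can be sketched as below. This is an illustrative model only: the function names, the `instance_id` field, and the use of base64 text as a stand-in for a real audio/video codec are all assumptions, not details from the disclosure.

```python
import base64

# Hypothetical server-side state: rendering instances and their encoded frames.
_instances = {}

def request_rendering_instance(instance_request: dict) -> dict:
    """Server: create a rendering instance and return info describing it."""
    instance_id = f"inst-{len(_instances) + 1}"
    # Encoded audio/video frames produced by rendering the target virtual model
    # (base64 text here stands in for a real codec such as H.264/AAC).
    frames = [base64.b64encode(f"frame-{i}".encode()).decode() for i in range(3)]
    _instances[instance_id] = frames
    return {"instance_id": instance_id}

def fetch_and_decode(instance_info: dict) -> list:
    """Terminal: use the rendering instance info to pull and decode frames."""
    encoded = _instances[instance_info["instance_id"]]
    return [base64.b64decode(f).decode() for f in encoded]

info = request_rendering_instance({"model": "virtual-host"})
print(fetch_and_decode(info))  # ['frame-0', 'frame-1', 'frame-2']
```

The key point the sketch shows is that the terminal never runs the rendering engine: it only holds an opaque instance handle and decodes what the server pushes.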
  • the interactive animation is generated according to preset parameters of the target virtual model.
  • the interaction animation is generated according to control parameters of the target virtual model, and the control parameters are obtained according to the interaction instruction.
  • The method further includes: displaying, in a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
  • The prompt information generated according to the operation instruction can be displayed on the second layer while the target object interacts with the interactive object, which enriches the interactive functions of the target application and improves the interactive experience of the target object.
  • The method further includes: the target application acquires setting information of a preset layer from the SDK, where the preset layer includes the first layer and/or the second layer; and the target application sets and displays the preset layer on the page of the target application according to the setting information of the preset layer.
  • An interaction method applied to a server is proposed, where the server includes a rendering engine, and the method includes: receiving a first message from a terminal device, the first message including indication information of a virtual model of an interactive object and/or interaction information for controlling the posture of the interactive object, and instance request information; generating rendering instance information of a target virtual model according to the instance request information and sending the rendering instance information to the terminal device, where the rendering instance information is used to describe a rendering instance; rendering the target virtual model with the rendering engine to obtain audio and video frames; encoding the audio and video frames to obtain encoded audio and video frames; and, in response to a request from the terminal device for acquiring audio and video frames based on the rendering instance information, sending the encoded audio and video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
  • The server generates rendering instance information according to the instance request information from the terminal device and returns it to the terminal device, and pushes the encoded audio and video frames according to the terminal device's acquisition request based on the rendering instance information, so that the terminal device can obtain interactive animations quickly and efficiently, improving the interactive experience of the target object.
  • The server includes a rendering engine and at least one virtual model; the target virtual model is determined by the server, according to the first message, from the at least one virtual model as the interactive object.
  • The method further includes: in response to the first message being generated according to a startup instruction received by the terminal device, generating a response animation of the interactive object according to preset parameters of the target virtual model.
  • The method further includes: in response to the first message being generated according to an interaction instruction received by the terminal device, generating control parameters of the target virtual model according to the interaction instruction, and generating a response animation of the interactive object according to the control parameters.
  • An interaction apparatus applied to a terminal device is proposed, where the terminal device includes a target application, and the apparatus includes: an acquisition unit, configured to acquire, in response to receiving an operation instruction for an interactive object in the target application, an interactive animation of the interactive object according to the operation instruction; and a playback unit, configured to play the interactive animation of the interactive object in a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
  • The terminal device is installed with a software development kit (SDK). When acquiring the interactive animation of the interactive object according to the operation instruction, the acquisition unit is specifically configured to: send a first message to the SDK in response to receiving the operation instruction for the interactive object in the target application, so that the SDK acquires the interactive animation of the interactive object according to the first message, where the first message includes indication information of the virtual model of the interactive object and/or interaction information for controlling the posture of the interactive object; and acquire the interactive animation of the interactive object from the SDK.
  • The apparatus further includes a layer setting unit, configured to acquire setting information of the first layer from the SDK, where the setting information of the first layer is generated by the SDK according to the first message.
  • The SDK includes a rendering engine and at least one virtual model; the SDK determines, according to the first message, a target virtual model from the at least one virtual model as the interactive object.
  • the device further includes a first rendering unit, configured to: use the rendering engine to render the target virtual model to obtain an interactive animation of the interactive object.
  • The apparatus further includes a communication unit, configured to: send the first message to a server, where the server includes a rendering engine and at least one virtual model; and receive, from the server, a second message in response to the first message, the second message including the interactive animation of the interactive object, where the interactive animation is obtained by rendering the target virtual model with the rendering engine, and the target virtual model is determined from the at least one virtual model as the interactive object according to the first message.
  • When the operation instruction includes a startup instruction, the first message further includes instance request information, and the apparatus further includes an instance request unit, configured to: receive, through the SDK, rendering instance information returned by the server, where the rendering instance information is used to describe a rendering instance; and acquire, through the SDK, encoded audio and video frames from the server according to the rendering instance information, and decode the encoded audio and video frames to obtain the interactive animation of the interactive object, where the encoded audio and video frames are obtained by encoding the audio and video frames generated by the server rendering the target virtual model with the rendering engine.
  • the interactive animation is generated according to preset parameters of the target virtual model.
  • the interaction animation is generated according to a control parameter of the target virtual model, and the control parameter is obtained according to the interaction instruction.
  • The apparatus further includes a display unit, configured to display, in a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
  • An interaction apparatus applied to a server is proposed, where the server includes a rendering engine, and the apparatus includes: a receiving unit, configured to receive a first message from a terminal device, the first message including indication information of a virtual model of an interactive object and/or interaction information for controlling the posture of the interactive object, and instance request information; a generating unit, configured to generate rendering instance information of a target virtual model according to the instance request information and send the rendering instance information to the terminal device, where the rendering instance information is used to describe a rendering instance; a second rendering unit, configured to render the target virtual model with the rendering engine to obtain audio and video frames; an encoding unit, configured to encode the audio and video frames to obtain encoded audio and video frames; and a sending unit, configured to, in response to an audio and video frame acquisition request based on the rendering instance information from the terminal device, send the encoded audio and video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
  • the server further includes at least one virtual model; the target virtual model is determined by the server as the interaction object from the at least one virtual model according to the first message .
  • The apparatus further includes a first animation generation unit, configured to generate, in response to the first message being generated according to a startup instruction received by the terminal device, a response animation of the interactive object according to preset parameters of the target virtual model.
  • The apparatus further includes a second animation generation unit, configured to generate, in response to the first message being generated according to an interaction instruction received by the terminal device, control parameters of the target virtual model according to the interaction instruction, and generate a response animation of the interactive object according to the control parameters.
  • An electronic device is proposed, including a memory and a processor, where the memory is used to store computer instructions runnable on the processor, and the processor is used to execute the computer instructions to implement the interaction method proposed in any implementation of the present disclosure.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the interaction method proposed in any implementation manner of the present disclosure is implemented.
  • a computer program product including a computer program, and when the program is executed by a processor, the interaction method proposed in any implementation manner of the present disclosure is implemented.
  • Fig. 1 shows a flowchart of an interaction method proposed by at least one embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of displaying interactive objects in the interaction method proposed by at least one embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of a process of starting an interactive object in a target application in an interaction method proposed by at least one embodiment of the present disclosure
  • Fig. 4 shows a flowchart of another interaction method according to at least one embodiment of the present disclosure
  • Fig. 5 shows a schematic structural diagram of an interaction device according to at least one embodiment of the present disclosure
  • Fig. 6 shows a schematic structural diagram of another interaction device according to at least one embodiment of the present disclosure
  • Fig. 7 shows a schematic structural diagram of an electronic device according to at least one embodiment of the present disclosure.
  • At least one embodiment of the present disclosure provides an interaction method, and the interaction method may be executed by an electronic device such as a terminal device or a server.
  • The terminal device may be a fixed terminal or a mobile terminal, such as a mobile phone, a tablet computer, a game console, a desktop computer, an advertising kiosk, an all-in-one machine, a vehicle-mounted terminal, etc.
  • the server includes a local server or a cloud server.
  • the method can also be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the interactive object may be any avatar capable of interacting with the target object, such as a virtual character, a virtual animal, a virtual item, a cartoon image, and the like.
  • the presentation form of the avatar can be either 2D or 3D, which is not limited in the present disclosure.
  • the target object may be a user, a robot, or other smart devices.
  • Fig. 1 shows a flowchart of an interaction method according to at least one embodiment of the present disclosure
  • the interaction method may be applied to a terminal device
  • the terminal device includes a target application
  • The target application may be an application program (APP) installed in the terminal device, a web client, a mini program embedded in an application program, a system program of the terminal device, or the like.
  • the method includes step 101 - step 102 .
  • step 101 in response to receiving an operation instruction on an interactive object in the target application, an interaction animation of the interactive object is acquired according to the operation instruction.
  • the operation instruction may be an instruction input by the user through a target application (target APP) in the terminal device.
  • an operation instruction may be input by operating on the operation interface of the target APP, such as touching a button on the operation interface, or by inputting text.
  • The operation instruction may be a direct operation on the interactive object, such as specifying the appearance of the interactive object, or instructing the interactive object to perform a set action.
  • the operation instruction may also be an operation instruction for other functions in the target application. For example, by establishing an association between native functions and interactive objects in the target application, operations on the interactive objects can be triggered while operating other functions of the target application.
  • the interaction animation of the interactive object may be acquired according to the operation instruction.
  • The interactive animation of the interactive object may be acquired according to the type of the operation instruction, the indication information for the virtual model of the interactive object in the operation instruction, or the interaction information in the operation instruction for controlling the posture of the interactive object.
  • step 102 the interactive animation of the interactive object is played on the first layer of the page of the target application.
  • the first layer is a transparent layer added in the page of the target application.
  • a transparent layer may be added to the page of the target application to display the interactive animation of the interactive object when an instruction to start the interactive object in the target application is received.
  • The first layer is usually located above the display layer of the target application. Since the first layer is transparent, when it is located above the display layer of the target application and the interactive object is displayed on it, with the position, posture, expression, action, etc. of the interactive object coordinated with the target application, the effect of the interactive object being embedded in the target application and interacting with its content can be presented.
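Why a transparent first layer leaves the page visible around the avatar can be shown with a single-pixel compositing sketch. The "over" operator below is a standard alpha-compositing rule, not something the disclosure specifies; the layer values are invented for illustration.

```python
def composite(layers):
    """Composite a single pixel through a bottom-to-top layer stack.

    Each layer is (color, alpha); alpha 0.0 is fully transparent.
    Simple 'over' operator: top = top_color * a + below * (1 - a).
    """
    color = 0.0
    for layer_color, alpha in layers:
        color = layer_color * alpha + color * (1 - alpha)
    return color

page_pixel = (0.8, 1.0)            # native layer of the target application
empty_first_layer = (0.0, 0.0)     # transparent region: page shows through
avatar_pixel = (0.2, 1.0)          # where the interactive object is drawn

print(composite([page_pixel, empty_first_layer]))  # 0.8 (page visible)
print(composite([page_pixel, avatar_pixel]))       # 0.2 (avatar covers page)
```

Everywhere the first layer is not painted with the interactive object, its alpha is zero, so the native layer of the target application passes through unchanged; only the avatar's pixels cover the page.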
  • In the embodiments of the present disclosure, in response to receiving an operation instruction for an interactive object in the target application of the terminal device, the interactive animation of the interactive object is acquired according to the operation instruction and played on the first layer of the page of the target application, where the first layer is a transparent layer added to the page of the target application.
  • Fig. 2 shows a schematic diagram of displaying interactive objects in a target application.
  • The native layer 201 is the original layer in the page of the target application. The interactive animation of the interactive object is played on the first layer 202, which is a layer newly added according to the startup instruction of the interactive object; it is located above the native layer 201 and is transparent, so it can visually present the effect that the interactive object is embedded in the target application.
  • A second layer 203 may also be added to the page of the target application for displaying prompt information generated according to the operation instruction, where the prompt information includes one or more of images, text, special effects, and the like.
  • the second layer 203 may be set on the first layer 202, and the transparency of the second layer 203 is controllable. For example, the transparency of the second layer may be set higher than a set threshold to avoid occluding the interactive object.
  • the prompt information may be generated by the software development kit SDK according to the operation instruction. For example, when the operation instruction instructs the interactive object to output a voice, the prompt information may be the text content of the voice.
  • The prompt information generated according to the operation instruction can be displayed on the second layer while the target object interacts with the interactive object, which enriches the interactive functions of the target application and improves the interactive experience of the target object.
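The mapping from an operation instruction to second-layer prompt information can be sketched as follows. The instruction shape and field names (`type`, `text`, `kind`) are hypothetical; the disclosure only requires that the prompt (text, image, special effect, etc.) be generated from the instruction, for example showing the text of a voice the interactive object outputs.

```python
def build_prompt_info(operation_instruction: dict) -> dict:
    """Derive second-layer prompt information from an operation instruction."""
    if operation_instruction.get("type") == "speak":
        # When the interactive object outputs a voice, show its text content
        # as a subtitle on the second layer.
        return {"kind": "text", "content": operation_instruction["text"]}
    # Other instruction types might map to images or special effects.
    return {"kind": "none", "content": ""}

prompt = build_prompt_info({"type": "speak", "text": "Welcome!"})
print(prompt)  # {'kind': 'text', 'content': 'Welcome!'}
```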
  • the terminal device is installed with a software development kit SDK.
  • the target application sends a first message to the SDK, so that the SDK acquires an interactive animation of the interactive object according to the first message, And return the interactive animation of the interactive object to the target application.
  • the first message includes indication information of the operation instruction and/or interaction information indicated by the operation instruction.
  • An interactive interface may be set between the SDK and the target application, so as to transmit instructions and data through the interactive interface.
  • When the target application receives an operation instruction for the interactive object, it may send a first message to the SDK through the interactive interface. Upon receiving the first message, the SDK acquires the interactive animation of the interactive object according to the indication information of the operation instruction in the first message and/or the interaction information indicated by the operation instruction, and returns the interactive animation to the target application through the interactive interface.
  • The interactive animation of the interactive object is obtained by the SDK and sent to the target application through the interactive interface with the target application, so that the interactive animation is displayed on the first layer of the page.
  • In this way, the existing APP can be provided with the virtual object interaction function without changing the platform and framework of the existing APP.
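The bidirectional interactive interface can be modeled as a thin channel: instructions flow from the APP to the SDK, and the interaction animation flows back over the same interface. The class and handler below are illustrative stand-ins, not an actual interface from the disclosure.

```python
class InteractiveInterface:
    """Tiny stand-in for the instruction/data channel between the target APP
    and the SDK (all names here are illustrative)."""
    def __init__(self, sdk_handler):
        self._handler = sdk_handler

    def send(self, message: dict) -> dict:
        # Instructions flow APP -> SDK; the interactive animation flows back.
        return self._handler(message)

def sdk_handler(message: dict) -> dict:
    # SDK side: build the interactive animation from the first message.
    return {"animation": f"animation-for-{message['instruction']}"}

interface = InteractiveInterface(sdk_handler)
reply = interface.send({"instruction": "wave"})
print(reply)  # {'animation': 'animation-for-wave'}
```

Because the APP only ever talks to this interface, the SDK can be integrated without touching the APP's own platform or framework.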
  • The setting information of the preset layer can be acquired from the SDK, where the preset layer includes the first layer and/or the second layer; according to the setting information of the preset layer, the preset layer is set and displayed on the page of the target application.
  • The setting information of the preset layer can be obtained from a setting instruction input by the target object in the target application, or from code information pre-written in the SDK. According to the type of service triggered by the operation instruction of the target object, setting and displaying of the corresponding preset layer can be requested, such as setting and displaying the first layer or the second layer, or displaying the first layer and the second layer at the same time.
  • the setting information of the preset layer may include the transparency of the preset layer, the positional relationship with the native layer, and the like.
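One possible shape for this setting information is sketched below. The disclosure only says it may include transparency and the positional relationship with the native layer; the dictionary keys, values, and helper function here are assumptions made for illustration.

```python
# Illustrative preset-layer setting information: transparency plus the layer
# each one sits above (positional relationship with the native layer).
LAYER_SETTINGS = {
    "first_layer": {"transparency": 1.0, "above": "native_layer"},
    "second_layer": {"transparency": 0.5, "above": "first_layer"},
}

def apply_layer_settings(requested: list, settings: dict) -> list:
    """Return the layers to create, bottom to top, for the requested set."""
    order = ["first_layer", "second_layer"]
    return [(name, settings[name]) for name in order if name in requested]

stack = apply_layer_settings(["first_layer", "second_layer"], LAYER_SETTINGS)
print([name for name, _ in stack])  # ['first_layer', 'second_layer']
```

A request that triggers only the first layer would simply pass `["first_layer"]`, matching the case where the transparent animation layer is shown without a prompt layer.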
  • For example, setting and displaying of the first layer may be requested.
  • the first layer can be set to be transparent according to the setting information and located on the native layer.
  • Setting and displaying of the second layer may also be requested.
  • The second layer may be set to be translucent according to the setting information and located on the first layer, so as to display prompt information including the text corresponding to the voice output by the interactive object. In this way, the target object can better receive the information output by the interactive object.
  • The operation instruction input in the target application may also request setting and displaying the first layer and the second layer at the same time; such an operation instruction may be, for example, an instruction indicating to activate the interactive object and display the text corresponding to the interactive voice, etc.
  • The manners of setting and displaying the first layer and the second layer are as described above, and will not be repeated here.
  • the virtual model of the interactive object and the rendering engine may be set in the terminal device, and integrated into the target application as an SDK, so as to complete the creation of the virtual model of the interactive object in the terminal device Render to get the interactive animation of the interactive object.
  • the SDK includes a rendering engine and at least one virtual model of the interactive object.
  • the SDK can include a virtual model.
  • When the SDK includes only one virtual model, upon receiving the startup instruction to start the interactive object, the virtual model is rendered, and what is finally displayed on the first layer is the interactive animation generated based on this virtual model.
  • the SDK may also include a plurality of virtual models, such as virtual models of real people, animals, and cartoons.
  • When the SDK includes a plurality of virtual models, specifically, the target virtual model is determined from the plurality of virtual models according to the indication information of the virtual model of the interactive object included in the first message; the target virtual model is rendered, and the interactive animation generated according to the target virtual model is finally displayed on the first layer.
  • each virtual model in the SDK has a unique corresponding number.
  • The operation instruction input by the user through the target application indicates the number of the interactive object selected to be started, that is, the indication information of the operation instruction included in the first message indicates the number of the interactive object; the target virtual model may then be determined from the at least one virtual model according to the indicated number of the interactive object.
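The number-based lookup can be sketched as a plain table. The model numbers, names, and message shape below are invented for illustration; the disclosure only states that each virtual model has a unique number carried in the indication information.

```python
# Each virtual model in the SDK has a unique number; the indication info in
# the first message carries the number of the model the user selected.
VIRTUAL_MODELS = {1: "realistic-person", 2: "animal", 3: "cartoon"}

def select_target_model(first_message: dict) -> str:
    """Resolve the target virtual model from the first message's indication."""
    number = first_message["indication"]["model_number"]
    return VIRTUAL_MODELS[number]

print(select_target_model({"indication": {"model_number": 3}}))  # cartoon
```

Accessory indications (clothing, hairstyles) could be resolved the same way, keyed alongside the model number, to change the displayed image of the interactive object.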
  • the SDK also includes a variety of accessories such as clothing and hairstyles, and the user can input the instruction information for the accessories in the target application, so as to realize the transformation of the image of the displayed interactive object.
  • by setting the rendering engine and the virtual model at the terminal device, rendering of the virtual model is completed in the terminal device, which improves the rendering effect and quality of the interactive object.
  • the virtual model of the interactive object and the rendering engine may be set on the server side, and the rendering of the virtual model of the interactive object is completed on the server side to obtain the interactive animation of the interactive object.
  • the server includes a rendering engine and at least one virtual model of the interactive object. After receiving the first message, the server may determine the target virtual model from the at least one virtual model according to the first message, and render the target virtual model with the rendering engine to obtain the interactive animation of the interactive object. The server then returns a second message including the interactive animation to the SDK.
  • the method for determining the target virtual model at the server side is similar to that of the terminal device, and will not be repeated here.
  • the first message further includes instance request information, which is used to request to acquire the rendering instance of the interactive object.
  • the method further includes: receiving, through the SDK, rendering instance information returned by the server, where the rendering instance information is used to describe the rendering instance; the SDK acquires encoded audio and video frames from the server according to the rendering instance information, and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
  • the encoded audio and video frames are obtained by encoding the audio and video frames generated by the rendering engine.
  • the server includes at least three modules, namely a rendering instance management service module 301 , a rendering service module (rendering engine) 302 and an audio and video service module 303 .
  • in response to receiving an instruction to start the interactive object in the target application, that is, an instruction to start displaying the interactive object in the target application, the SDK 304 sends a first message to the server, and the first message includes instance request information.
  • the rendering instance management service module 301 receives the instance request information, determines the virtual model information of the interactive object according to the indication information of the startup instruction in the first message, that is, obtains the instance rendering service state used to indicate the virtual model, and synchronizes the instance rendering service state to the rendering service module 302.
  • the rendering service module 302 renders the virtual model of the interactive object to obtain a plurality of audio and video frames.
  • the rendering service module 302 transmits the generated audio and video frames to the audio and video service module 303 on the one hand, and returns the rendering instance information to the SDK 304 on the other hand.
  • the audio and video service module 303 encodes the audio and video frames to obtain encoded audio and video frames.
  • the SDK 304 obtains the encoded audio and video frames from the audio and video service module 303 according to the rendering instance information, and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
  • in response to receiving a closing instruction for the interactive object in the target application, the SDK 304 requests the rendering instance management service module 301 to release the rendering resources, and the rendering instance management service module 301 reclaims the rendering resources for later use.
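The start-and-close flow among the three server modules and the SDK can be sketched as follows. This is a minimal, hypothetical model of the message flow rather than the actual service code; the module interfaces, the `enc(...)` stand-in for real audio/video encoding, and the frame contents are all assumptions.

```python
class AvService:
    """Module 303: encodes audio/video frames and serves them to the SDK."""
    def __init__(self):
        self.streams = {}

    def publish(self, instance_id, frames):
        # stand-in for real audio/video encoding
        self.streams[instance_id] = [f"enc({f})" for f in frames]

    def fetch(self, instance_id):
        return self.streams[instance_id]

class RenderService:
    """Module 302: renders the virtual model into audio/video frames."""
    def __init__(self, av_service):
        self.av_service = av_service

    def start(self, instance_id, model_number):
        frames = [f"frame-{model_number}-{i}" for i in range(3)]  # stand-in frames
        self.av_service.publish(instance_id, frames)

class RenderInstanceManager:
    """Module 301: manages rendering instances and their resources."""
    def __init__(self, render_service):
        self.render_service = render_service
        self.instances = {}

    def handle_first_message(self, instance_request):
        # determine the virtual model from the startup instruction's
        # indication information, then sync state to the rendering service
        model_number = instance_request["model_number"]
        instance_id = f"instance-{len(self.instances) + 1}"
        self.instances[instance_id] = model_number
        self.render_service.start(instance_id, model_number)
        return {"instance_id": instance_id}   # rendering instance info

    def release(self, instance_id):
        # reclaim rendering resources on a close instruction
        self.instances.pop(instance_id, None)

def sdk_decode(encoded_frames):
    # the SDK decodes the encoded frames into the interactive animation
    return [e[4:-1] for e in encoded_frames]

av = AvService()
manager = RenderInstanceManager(RenderService(av))
info = manager.handle_first_message({"model_number": 7})
animation = sdk_decode(av.fetch(info["instance_id"]))
manager.release(info["instance_id"])
```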
  • the rendering engine and the virtual model are set at the server, and the encoded audio and video frames are acquired from the server by using the rendering instance information and decoded to obtain the interactive animation of the interactive object, which lowers the performance requirements on the terminal device and reduces the occupation of terminal device resources.
  • when the operation instruction includes a startup instruction, rendering is performed according to preset parameters of the target virtual model, so that the interactive object presents a preset posture. For example, after the interactive object is started, the interactive object presents a smiling and waving posture.
  • the operation instruction includes an interaction instruction, and the interaction instruction is used to control the posture of the interactive object.
  • the control parameters of the target virtual model can be generated according to the interaction instruction, so as to drive the interactive object to perform the corresponding action. For example, when the interaction instruction instructs the interactive object to speak specified content, the interactive object is driven according to the mouth-shape control parameters for speaking the specified content while the voice plays that content, so that the interactive object makes the corresponding mouth shapes.
  • the response manner of the interactive object to various interactive instructions may be preset, which is not limited in the present disclosure.
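The two cases above (a startup instruction driving a preset posture, and an interaction instruction driving generated control parameters such as mouth shapes) can be sketched as one dispatch function. The instruction shapes, the `viseme:` naming, and the preset-parameter format are hypothetical, chosen only for illustration.

```python
def get_control_parameters(model: dict, instruction: dict) -> dict:
    """Map an operation instruction to the parameters used for rendering.

    A startup instruction uses the model's preset parameters (e.g. a
    smile-and-wave posture); an interaction instruction is turned into
    control parameters that drive the corresponding action, such as mouth
    shapes synchronized with spoken content.
    """
    if instruction["type"] == "startup":
        return model["preset_parameters"]
    if instruction["type"] == "interact":
        if "speak" in instruction:
            # one mouth-shape parameter per unit of the specified content,
            # applied while the voice plays that content
            return {"mouth_shapes": [f"viseme:{ch}" for ch in instruction["speak"]]}
        return {"gesture": instruction["gesture"]}
    raise ValueError(f"unknown instruction type: {instruction['type']}")

model = {"preset_parameters": {"pose": "smile-and-wave"}}
startup_params = get_control_parameters(model, {"type": "startup"})
lip_params = get_control_parameters(model, {"type": "interact", "speak": "hi"})
```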
  • FIG. 4 shows a flowchart of another interaction method according to at least one embodiment of the present disclosure. The method is applied to a server, and the server includes a rendering engine. As shown in FIG. 4, the method includes steps 401-405.
  • In step 401, a first message from a terminal device is received. The first message includes indication information for the virtual model of the interactive object and/or interaction information for controlling the posture of the interactive object, as well as instance request information, where the instance request information is used to request the rendering instance of the interactive object.
  • In step 402, rendering instance information of the target virtual model is generated according to the instance request information and sent to the terminal device, where the rendering instance information is used to describe the rendering instance.
  • In step 403, the rendering engine is used to render the target virtual model to obtain audio and video frames.
  • In step 404, the audio and video frames are encoded to obtain encoded audio and video frames.
  • In step 405, an audio and video frame acquisition request based on the rendering instance information is received from the terminal device, and the encoded audio and video frames are sent to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation.
  • the server generates rendering instance information according to the instance request information from the terminal device and returns it to the terminal device, and pushes the encoded audio and video frames to the terminal device according to the audio and video frame acquisition request based on the rendering instance information, so that the terminal device can quickly and efficiently obtain the interactive animation, improving the interaction experience of the target object.
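Steps 401-405 can be sketched as a small request/response handler. This is a hypothetical outline: string frames stand in for rendered output, and UTF-8 byte encoding stands in for real audio/video encoding.

```python
class InteractionServer:
    """Sketch of the server-side flow of steps 401-405."""

    def __init__(self):
        self.sessions = {}   # rendering instance id -> encoded frames

    def on_first_message(self, msg: dict) -> dict:
        # steps 401-402: receive the first message, generate rendering
        # instance information, and return it to the terminal device
        instance_id = msg["instance_request"]["session"]
        frames = self.render(msg.get("indication", {}))                # step 403
        self.sessions[instance_id] = [self.encode(f) for f in frames]  # step 404
        return {"instance_id": instance_id}

    def render(self, indication: dict) -> list:
        model_number = indication.get("model_number", 0)
        return [f"av-{model_number}-{i}" for i in range(2)]

    @staticmethod
    def encode(frame: str) -> bytes:
        return frame.encode("utf-8")   # stand-in for audio/video encoding

    def on_fetch(self, request: dict) -> list:
        # step 405: push the encoded frames for the requested instance
        return self.sessions[request["instance_id"]]

server = InteractionServer()
info = server.on_first_message(
    {"instance_request": {"session": "s1"}, "indication": {"model_number": 3}}
)
encoded = server.on_fetch({"instance_id": info["instance_id"]})
animation = [f.decode("utf-8") for f in encoded]   # terminal-side decoding
```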
  • the server includes at least one virtual model of the interactive object; the target virtual model is determined from the at least one virtual model according to the first message.
  • the method further includes: in response to the first message being generated according to the start instruction received by the terminal device, generating a response animation of the interactive object according to preset parameters of the target virtual model.
  • the method further includes: in response to the first message being generated according to the interaction instruction received by the terminal device, generating control parameters of the target virtual model according to the interaction instruction, and generating a response animation of the interactive object according to the control parameters.
  • At least one embodiment of the present disclosure further provides an interaction device applied to a terminal device, where the terminal device includes a target application.
  • the device includes: an acquisition unit 501 configured to, in response to receiving an operation instruction for an interactive object in the target application, acquire an interactive animation of the interactive object according to the operation instruction; and a playback unit 502 configured to play the interactive animation of the interactive object on a first layer of the page of the target application, where the first layer is a transparent layer added to the page of the target application.
  • the terminal device is installed with a software development kit (SDK), and when acquiring the interactive animation of the interactive object in response to receiving an operation instruction for the interactive object in the target application, the acquisition unit 501 is specifically configured to: in response to receiving the operation instruction for the interactive object in the target application, send a first message to the SDK, so that the SDK acquires the interactive animation of the interactive object according to the first message, where the first message includes indication information for the virtual model of the interactive object and/or interaction information for controlling the posture of the interactive object; and acquire the interactive animation of the interactive object from the SDK.
  • the device further includes a layer setting unit configured to acquire setting information of the first layer from the SDK, where the setting information of the first layer is generated by the SDK according to the first message.
  • the SDK includes a rendering engine and at least one virtual model; the SDK determines, according to the first message, a target virtual model from the at least one virtual model as the interactive object; the device further includes a first rendering unit configured to render the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
  • the device further includes a communication unit configured to: send the first message to a server, the server including a rendering engine and at least one virtual model; and receive a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object, where the interactive animation is obtained by rendering the target virtual model with the rendering engine, and the target virtual model is determined as the interactive object from the at least one virtual model according to the first message.
  • when the operation instruction includes a startup instruction, the first message further includes instance request information; the SDK receives a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object; the device further includes an instance request unit configured to: receive, through the SDK, rendering instance information returned by the server, where the rendering instance information is used to describe the rendering instance; the SDK acquires encoded audio and video frames from the server according to the rendering instance information and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object, where the encoded audio and video frames are obtained by encoding the audio and video frames generated by the server rendering the target virtual model with the rendering engine.
  • the interactive animation is generated according to preset parameters of the target virtual model.
  • the interaction animation is generated according to control parameters of the target virtual model, and the control parameters are obtained according to the interaction instruction.
  • the device further includes a display unit configured to display, on a second layer of the page of the target application, the prompt information generated according to the operation instruction, where the second layer is a layer with controllable transparency added to the page of the target application.
  • At least one embodiment of the present disclosure further provides an interaction device applied to a server, where the server includes a rendering engine.
  • the apparatus includes: a receiving unit 601 configured to receive a first message from a terminal device, where the first message includes indication information for a virtual model of an interactive object and/or interaction information for controlling the posture of the interactive object, as well as instance request information; a generating unit 602 configured to generate rendering instance information of the target virtual model according to the instance request information and send the rendering instance information to the terminal device, where the rendering instance information is used to describe the rendering instance; a second rendering unit 603 configured to render the target virtual model with the rendering engine to obtain audio and video frames; an encoding unit 604 configured to encode the audio and video frames to obtain encoded audio and video frames; and a sending unit 605 configured to, in response to an audio and video frame acquisition request from the terminal device based on the rendering instance information, send the encoded audio and video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
  • the server further includes at least one virtual model; the target virtual model is determined by the server from the at least one virtual model as the interactive object according to the first message.
  • the apparatus further includes a first animation generation unit configured to, in response to the first message being generated according to the startup instruction received by the terminal device, generate a response animation of the interactive object according to preset parameters of the target virtual model.
  • the apparatus further includes a second animation generation unit configured to, in response to the first message being generated according to the interaction instruction received by the terminal device, generate control parameters of the target virtual model according to the interaction instruction, and generate a response animation of the interactive object according to the control parameters.
  • At least one embodiment of the present disclosure also provides an electronic device. As shown in FIG. 7, the device includes a memory and a processor, where the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the interaction method described in any embodiment of the present disclosure when executing the computer instructions.
  • At least one embodiment of the present specification further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the interaction method described in any embodiment of the present disclosure is implemented.
  • At least one embodiment of the present specification further provides a computer program product, including a computer program, when the program is executed by a processor, the interaction method described in any embodiment of the present disclosure is implemented.
  • one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of this specification may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
  • each embodiment in this specification is described in a progressive manner; for the same and similar parts of the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.
  • Embodiments of the subject matter and functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware including the structures disclosed in this specification and their structural equivalents, or in a combination of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus.
  • the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information and transmit it to a suitable receiver apparatus for execution by the data processing apparatus.
  • a computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
  • Computers suitable for the execution of a computer program include, for example, general and/or special purpose microprocessors, or any other type of central processing unit.
  • a central processing unit will receive instructions and data from a read only memory and/or a random access memory.
  • the essential components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to, one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks, to receive data from them, transmit data to them, or both.
  • a computer is not required to have such a device.
  • a computer may be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
  • the processor and memory can be supplemented by, or incorporated in, special purpose logic circuitry.


Abstract

A method, apparatus, device, and storage medium for interaction. The method is applied to a terminal device that includes a target application, and includes: in response to receiving, in the target application, an operation instruction for an interactive object, acquiring an interactive animation of the interactive object according to the operation instruction (101); and playing the interactive animation of the interactive object on a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application (102).

Description

Method, apparatus, device, and storage medium for interaction
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202110545352.5, filed on May 19, 2021, the entire disclosure of which is incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for interaction.
Background
Adding an interactive virtual object to an application on a terminal device requires that factors such as the rendering engine and the animation engine be taken into account from the moment the implementation scheme of the application project is first selected. Moreover, adding a virtual object to an application greatly increases the development difficulty and framework complexity of the application.
Summary
Embodiments of the present disclosure provide an interaction scheme.
According to an aspect of the present disclosure, an interaction method is provided, applied to a terminal device, the terminal device including a target application. The method includes: in response to receiving, in the target application, an operation instruction for an interactive object, acquiring an interactive animation of the interactive object according to the operation instruction; and playing the interactive animation of the interactive object on a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
The embodiments of the present disclosure make it possible to add an interactive virtual object to the target application without changing the platform and framework of the target application, reducing the business cost and technical threshold required for integrating the virtual object.
In combination with any implementation provided by the present disclosure, the terminal device is installed with a software development kit (SDK), and the acquiring, in response to receiving in the target application an operation instruction for an interactive object, an interactive animation of the interactive object according to the operation instruction includes: in response to receiving, in the target application, an operation instruction for the interactive object, the target application sends a first message to the SDK, so that the SDK acquires the interactive animation of the interactive object according to the first message, wherein the first message includes indication information of a virtual model of the interactive object and/or interaction information for controlling a posture of the interactive object; and the target application acquires the interactive animation of the interactive object from the SDK.
For an existing application (APP) on the terminal device, by installing an SDK that has an interaction interface with the existing APP and can interact with it, the existing APP can be given the virtual-object interaction capability without changing its platform and framework.
In combination with any implementation provided by the present disclosure, the SDK includes a rendering engine and at least one virtual model; the SDK acquiring the interactive animation of the interactive object according to the first message includes: the SDK determining, according to the first message, a target virtual model from the at least one virtual model as the interactive object; and the SDK rendering the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
In the embodiments of the present disclosure, by placing the rendering engine and the virtual model on the terminal device so that rendering of the virtual model is completed on the terminal device, the rendering effect and quality of the interactive object are improved.
In combination with any implementation provided by the present disclosure, for the SDK to acquire the interactive animation of the interactive object according to the first message, the method further includes: the SDK sends the first message to a server, the server including a rendering engine and at least one virtual model; the SDK receives a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object, the interactive animation being obtained by rendering a target virtual model with the rendering engine, and the target virtual model being determined as the interactive object from the at least one virtual model according to the first message.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes a startup instruction, the first message further includes instance request information; the SDK receiving the second message with which the server responds to the first message, the second message including the interactive animation of the interactive object, includes: the SDK receives rendering instance information returned by the server, the rendering instance information being used to describe a rendering instance; the SDK acquires encoded audio and video frames from the server according to the rendering instance information, and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object, wherein the encoded audio and video frames are obtained by encoding the audio and video frames generated by the server rendering the target virtual model with the rendering engine.
In the embodiments of the present disclosure, the rendering engine and the virtual model are placed on the server, and the encoded audio and video frames are acquired from the server by using the rendering instance information and decoded to obtain the interactive animation of the interactive object, which lowers the performance requirements on the terminal device and reduces the occupation of terminal device resources.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes a startup instruction, the interactive animation is generated according to preset parameters of the target virtual model.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes an interaction instruction, the interactive animation is generated according to control parameters of the target virtual model, and the control parameters are obtained according to the interaction instruction.
In combination with any implementation provided by the present disclosure, the method further includes: displaying, on a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
In the embodiments of the present disclosure, by adding a second layer to the page of the target application, prompt information generated according to the operation instruction can be displayed on the second layer while interacting with the interactive object, which enriches the interaction functions of the target application and improves the interaction experience of the target object.
In combination with any implementation provided by the present disclosure, the method further includes: the target application acquires setting information of a preset layer from the SDK, the preset layer including the first layer and/or the second layer; and the target application sets and displays the preset layer on the page of the target application according to the setting information of the preset layer.
According to an aspect of the present disclosure, an interaction method is provided, applied to a server, the server including a rendering engine. The method includes: receiving a first message from a terminal device, the first message including indication information of a virtual model of an interactive object and/or interaction information for controlling a posture of the interactive object, as well as instance request information; generating rendering instance information of a target virtual model according to the instance request information, and sending the rendering instance information to the terminal device, wherein the rendering instance information is used to describe a rendering instance; rendering the target virtual model with the rendering engine to obtain audio and video frames; encoding the audio and video frames to obtain encoded audio and video frames; and in response to an audio and video frame acquisition request from the terminal device based on the rendering instance information, sending the encoded audio and video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
In the embodiments of the present disclosure, the server generates rendering instance information according to the instance request information from the terminal device and returns it to the terminal device, and pushes the encoded audio and video frames to the terminal device according to the audio and video frame acquisition request based on the rendering instance information, so that the terminal device can obtain the interactive animation quickly and efficiently, improving the interaction experience of the target object.
In combination with any implementation provided by the present disclosure, the server includes a rendering engine and at least one virtual model; the target virtual model is determined by the server as the interactive object from the at least one virtual model according to the first message.
In combination with any implementation provided by the present disclosure, the method further includes: in response to the first message being generated according to a startup instruction received by the terminal device, generating a response animation of the interactive object according to preset parameters of the target virtual model.
In combination with any implementation provided by the present disclosure, the method further includes: in response to the first message being generated according to an interaction instruction received by the terminal device, generating control parameters of the target virtual model according to the interaction instruction, and generating a response animation of the interactive object according to the control parameters.
According to an aspect of the present disclosure, an interaction apparatus is provided, applied to a terminal device, the terminal device including a target application. The apparatus includes: an acquisition unit configured to, in response to receiving in the target application an operation instruction for an interactive object, acquire an interactive animation of the interactive object according to the operation instruction; and a playback unit configured to play the interactive animation of the interactive object on a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
In combination with any implementation provided by the present disclosure, the terminal device is installed with a software development kit (SDK), and when acquiring the interactive animation of the interactive object according to the operation instruction in response to receiving in the target application the operation instruction for the interactive object, the acquisition unit is specifically configured to: in response to receiving, in the target application, the operation instruction for the interactive object, send a first message to the SDK, so that the SDK acquires the interactive animation of the interactive object according to the first message, wherein the first message includes indication information of a virtual model of the interactive object and/or interaction information for controlling a posture of the interactive object; and acquire the interactive animation of the interactive object from the SDK.
In combination with any implementation provided by the present disclosure, the apparatus further includes a layer setting unit configured to acquire setting information of the first layer from the SDK, the setting information of the first layer being generated by the SDK according to the first message.
In combination with any implementation provided by the present disclosure, the SDK includes a rendering engine and at least one virtual model; the SDK determines, according to the first message, a target virtual model from the at least one virtual model as the interactive object; and the apparatus further includes a first rendering unit configured to render the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
In combination with any implementation provided by the present disclosure, the apparatus further includes a communication unit configured to: send the first message to a server, the server including a rendering engine and at least one virtual model; and receive a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object, the interactive animation being obtained by rendering a target virtual model with the rendering engine, and the target virtual model being determined as the interactive object from the at least one virtual model according to the first message.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes a startup instruction, the first message further includes instance request information; the SDK receives a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object; and the apparatus further includes an instance request unit configured to: receive, through the SDK, rendering instance information returned by the server, the rendering instance information being used to describe a rendering instance; the SDK acquires encoded audio and video frames from the server according to the rendering instance information and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object, wherein the encoded audio and video frames are obtained by encoding the audio and video frames generated by the server rendering the target virtual model with the rendering engine.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes a startup instruction, the interactive animation is generated according to preset parameters of the target virtual model.
In combination with any implementation provided by the present disclosure, in a case where the operation instruction includes an interaction instruction, the interactive animation is generated according to control parameters of the target virtual model, and the control parameters are obtained according to the interaction instruction.
In combination with any implementation provided by the present disclosure, the apparatus further includes a display unit configured to display, on a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
According to an aspect of the present disclosure, an interaction apparatus is provided, applied to a server, the server including a rendering engine. The apparatus includes: a receiving unit configured to receive a first message from a terminal device, the first message including indication information of a virtual model of an interactive object and/or interaction information for controlling a posture of the interactive object, as well as instance request information; a generating unit configured to generate rendering instance information of a target virtual model according to the instance request information, and send the rendering instance information to the terminal device, wherein the rendering instance information is used to describe a rendering instance; a second rendering unit configured to render the target virtual model with the rendering engine to obtain audio and video frames; an encoding unit configured to encode the audio and video frames to obtain encoded audio and video frames; and a sending unit configured to, in response to an audio and video frame acquisition request from the terminal device based on the rendering instance information, send the encoded audio and video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio and video frames to obtain the interactive animation of the interactive object.
In combination with any implementation provided by the present disclosure, the server further includes at least one virtual model; the target virtual model is determined by the server as the interactive object from the at least one virtual model according to the first message.
In combination with any implementation provided by the present disclosure, the apparatus further includes a first animation generation unit configured to, in response to the first message being generated according to a startup instruction received by the terminal device, generate a response animation of the interactive object according to preset parameters of the target virtual model.
In combination with any implementation provided by the present disclosure, the apparatus further includes a second animation generation unit configured to, in response to the first message being generated according to an interaction instruction received by the terminal device, generate control parameters of the target virtual model according to the interaction instruction, and generate a response animation of the interactive object according to the control parameters.
According to an aspect of the present disclosure, an electronic device is provided. The device includes a memory and a processor; the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the interaction method provided by any implementation of the present disclosure when executing the computer instructions.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the interaction method provided by any implementation of the present disclosure is implemented.
According to an aspect of the present disclosure, a computer program product is provided, including a computer program; when the program is executed by a processor, the interaction method provided by any implementation of the present disclosure is implemented.
Brief description of the drawings
In order to more clearly illustrate one or more embodiments of this specification or the technical solutions in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely some embodiments recorded in one or more embodiments of this specification, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of an interaction method according to at least one embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of displaying an interactive object in an interaction method according to at least one embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of the process of starting an interactive object in a target application in an interaction method according to at least one embodiment of the present disclosure;
FIG. 4 shows a flowchart of another interaction method according to at least one embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of an interaction apparatus according to at least one embodiment of the present disclosure;
FIG. 6 shows a schematic structural diagram of another interaction apparatus according to at least one embodiment of the present disclosure;
FIG. 7 shows a schematic structural diagram of an electronic device according to at least one embodiment of the present disclosure.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
At least one embodiment of the present disclosure provides an interaction method, which may be executed by an electronic device such as a terminal device or a server. The terminal device may be a fixed terminal or a mobile terminal, such as a mobile phone, a tablet computer, a game console, a desktop computer, an advertising machine, an all-in-one machine, or a vehicle-mounted terminal, and the server includes a local server, a cloud server, or the like. The method may also be implemented by a processor invoking computer-readable instructions stored in a memory.
In the embodiments of the present disclosure, the interactive object may be any virtual image capable of interacting with a target object; it may be a virtual character, or a virtual animal, a virtual article, a cartoon image, or the like. The virtual image may be presented in either 2D or 3D form, which is not limited in the present disclosure. The target object may be a user, a robot, or another smart device.
FIG. 1 shows a flowchart of an interaction method according to at least one embodiment of the present disclosure. The method may be applied to a terminal device that includes a target application, where the target application may be an application (APP) installed on the terminal device, a Web client, a mini-program embedded in an application, a system program of the terminal device, or the like. As shown in FIG. 1, the method includes steps 101-102.
In step 101, in response to receiving, in the target application, an operation instruction for an interactive object, an interactive animation of the interactive object is acquired according to the operation instruction.
The operation instruction may be an instruction input by a user through the target application (target APP) on the terminal device. For example, the instruction may be input by operating on the operation interface of the target APP, such as touching a button on the operation interface, or by entering text. The operation instruction may be a direct operation on the interactive object, for example indicating the image setting of the interactive object, or instructing the interactive object to perform a set action. The operation instruction may also be an operation instruction for other functions in the target application. For example, by establishing an association between a native function of the target application and the interactive object, an operation on the interactive object can be triggered at the same time as another function of the target application is operated.
Upon receiving the operation instruction, the interactive animation of the interactive object may be acquired according to the operation instruction. For example, the interactive animation may be acquired according to the type of the operation instruction, the indication information in the operation instruction for the virtual model of the interactive object, or the interaction information in the operation instruction for controlling the posture of the interactive object.
In step 102, the interactive animation of the interactive object is played on a first layer of the page of the target application, where the first layer is a transparent layer added to the page of the target application.
For example, upon receiving a startup instruction to start the interactive object in the target application, a transparent layer may be added to the page of the target application to display the interactive animation of the interactive object. The first layer is usually located above the display layers of the target application. Since the first layer is transparent, when it is located above the display layers of the target application, displaying the interactive object on the first layer and coordinating the position, posture, expression, and actions of the interactive object with the target application can present the effect of the interactive object being embedded in the target application and interacting with its content.
In the embodiments of the present disclosure, in response to receiving, in the target application of the terminal device, an operation instruction for an interactive object, the interactive animation of the interactive object is acquired according to the operation instruction and played on a first layer of the page of the target application, where the first layer is a transparent layer added to the page of the target application. In this way, an interactive virtual object can be added to the target application without changing the platform and framework of the target application, reducing the business cost and technical threshold required for integrating the virtual object.
FIG. 2 shows a schematic diagram of displaying an interactive object in a target application. As shown in FIG. 2, the native layer 201 is the original layer in the page of the target application, and the interactive animation of the interactive object is played on the first layer 202, a layer newly added according to the startup instruction of the interactive object. This layer is located above the native layer 201 and is transparent, so it can visually present the effect of the interactive object being embedded in the target application.
In some embodiments, a second layer 203 may also be added to the page of the target application to display prompt information generated according to the operation instruction, where the prompt information includes one or more of images, text, special effects, and the like. The second layer 203 may be arranged above the first layer 202, and the transparency of the second layer 203 is controllable. For example, the transparency of the second layer may be set above a set threshold to avoid occluding the interactive object. The prompt information may be generated by the software development kit (SDK) according to the operation instruction; for example, when the operation instruction instructs the interactive object to output speech, the prompt information may be the text content of that speech. Those skilled in the art should understand that the positional relationships among the first layer, the second layer, and the native layer, as well as the transparency of each layer, can be specifically set according to the actual situation, which is not limited in the present disclosure.
In the embodiments of the present disclosure, by adding a second layer to the page of the target application, prompt information generated according to the operation instruction can be displayed on the second layer while interacting with the interactive object, which enriches the interaction functions of the target application and improves the interaction experience of the target object.
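The layer arrangement described above (a native layer below, a fully transparent first layer for the animation, and a prompt layer whose transparency is kept above a threshold so it does not occlude the interactive object) can be sketched as follows. The field names and the numeric transparency scale (1.0 meaning fully transparent) are assumptions made purely for illustration.

```python
NATIVE, FIRST, SECOND = 0, 1, 2   # stacking order: native < first < second

def make_layers(show_prompt=False, prompt_transparency=0.7, threshold=0.5):
    """Build the layer stack of the target application page.

    The first layer is fully transparent, so the interactive object drawn
    on it appears embedded in the page; when a prompt is shown, the second
    layer's transparency is clamped to at least `threshold` to avoid
    occluding the interactive object.
    """
    layers = [
        {"z": NATIVE, "transparency": 0.0},   # original page of the target app
        {"z": FIRST, "transparency": 1.0},    # transparent animation layer
    ]
    if show_prompt:
        layers.append({"z": SECOND, "transparency": max(prompt_transparency, threshold)})
    return sorted(layers, key=lambda layer: layer["z"])

stack = make_layers(show_prompt=True)
```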
In some embodiments, the terminal device is installed with a software development kit (SDK). In response to receiving, in the target application, an operation instruction for the interactive object, the target application sends a first message to the SDK, so that the SDK acquires the interactive animation of the interactive object according to the first message and returns the interactive animation to the target application. The first message includes indication information of the operation instruction and/or interaction information indicated by the operation instruction.
An interaction interface may be provided between the SDK and the target application for transmitting instructions and data.
When an operation instruction for the interactive object is received in the target application, the first message may be sent to the SDK through the interaction interface. Upon receiving the first message, the SDK acquires the interactive animation of the interactive object according to the indication information of the operation instruction and/or the interaction information indicated by the operation instruction in the first message, and returns the interactive animation to the target application through the interaction interface.
In the embodiments of the present disclosure, the interactive animation of the interactive object is acquired by the SDK and sent to the target application through the interaction interface between them, so as to be displayed on the first layer of the page of the target application. For an existing APP on the terminal device, by installing an SDK that has an interaction interface with the existing APP and can interact with it, the existing APP can be given the virtual-object interaction capability without changing its platform and framework.
In some embodiments, setting information of a preset layer may be acquired from the SDK, where the preset layer includes the first layer and/or the second layer; the preset layer is then set and displayed on the page of the target application according to the setting information of the preset layer.
The setting information of the preset layer may be obtained from a setting instruction input by the target object in the target application, or from code information written into the SDK in advance. Depending on the type of business triggered by the operation instruction of the target object, setting and displaying the corresponding preset layer may be requested, for example setting and displaying the first layer or the second layer, or displaying both the first layer and the second layer. The setting information of the preset layer may include the transparency of the preset layer, its positional relationship with the native layer, and so on.
In one example, when a startup instruction to start the interactive object is received in the target application, setting and displaying the first layer may be requested. According to the setting information, the first layer may be set to be transparent and located above the native layer; by displaying the interactive object on this first layer, the effect of embedding the interactive object in the target application can be presented.
In one example, when the interactive object has been started and an interaction instruction is received in the target application, for example an instruction to display the text corresponding to the speech output by the interactive object, setting and displaying the second layer may be requested. According to the setting information, the second layer may be set to be semi-transparent and located above the first layer, so as to display prompt information containing the text corresponding to the speech output by the interactive object. This helps the target object better receive the information output by the interactive object.
In one example, an operation instruction input in the target application may also request setting and displaying both the first layer and the second layer at the same time; such an operation instruction may be, for example, an instruction to start the interactive object and display the text corresponding to the interactive speech. The setting and display of the first and second layers may be as described above, and will not be repeated here.
Those skilled in the art should understand that the above setting information of the preset layer is merely an example, and the present disclosure does not limit the specific contents indicated by the setting information.
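The target-application side of the interaction interface described above can be sketched as follows. The SDK here is a stub, and the method names and message fields are illustrative assumptions rather than the actual interface.

```python
class StubSdk:
    """Stand-in for the installed SDK behind the interaction interface."""
    def get_interactive_animation(self, first_message: dict) -> str:
        return f"animation-for-{first_message['indication']}"

class TargetApp:
    """Forwards operation instructions to the SDK over the interaction
    interface and plays the returned animation on the first layer."""
    def __init__(self, sdk):
        self.sdk = sdk
        self.first_layer = []   # animations played on the transparent layer

    def on_operation_instruction(self, instruction: dict):
        first_message = {
            "indication": instruction.get("indication"),     # model indication
            "interaction": instruction.get("interaction"),   # posture control
        }
        animation = self.sdk.get_interactive_animation(first_message)
        self.first_layer.append(animation)

app = TargetApp(StubSdk())
app.on_operation_instruction({"indication": "model-1"})
```

Because the target application only talks to the interface, the SDK behind it can render locally or delegate to a server without the application changing.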
In some embodiments, the virtual model of the interactive object and the rendering engine may be set in the terminal device and integrated into the target application as an SDK, so that rendering of the virtual model of the interactive object is completed in the terminal device to obtain the interactive animation of the interactive object.
In one example, the SDK includes a rendering engine and at least one virtual model of the interactive object. The SDK may include a single virtual model; in that case, when the startup instruction for the interactive object is received, the virtual model is rendered, and what is finally displayed on the first layer is the interactive animation generated from that virtual model. The SDK may also include a plurality of virtual models, such as virtual models of real people, animals, and cartoon images. In the case of multiple virtual models, the target virtual model may be determined from them according to the first message, specifically according to the indication information of the virtual model of the interactive object included in the first message; the target virtual model is then rendered, and the interactive animation generated from the target virtual model is finally displayed on the first layer.
In one example, each virtual model in the SDK has a unique corresponding number. The operation instruction input by the user through the target application indicates the number of the interactive object selected to be started; that is, the indication information of the operation instruction included in the first message indicates the number of the interactive object, so the target virtual model can be determined from the at least one virtual model based on the indicated number.
In one example, the SDK also includes a variety of accessories such as clothing and hairstyles, and the user can input indication information for the accessories in the target application to change the image of the displayed interactive object.
In the embodiments of the present disclosure, by placing the rendering engine and the virtual model on the terminal device so that rendering of the virtual model is completed there, the rendering effect and quality of the interactive object are improved.
In some embodiments, the virtual model of the interactive object and the rendering engine may be set on the server side, where the rendering of the virtual model of the interactive object is completed to obtain the interactive animation of the interactive object.
In one example, the server includes a rendering engine and at least one virtual model of the interactive object. After receiving the first message, the server may determine the target virtual model from the at least one virtual model according to the first message, and render the target virtual model with the rendering engine to obtain the interactive animation of the interactive object. The server then returns a second message containing the interactive animation to the SDK. The method of determining the target virtual model on the server side is similar to that on the terminal device, and will not be repeated here.
When the operation instruction includes a startup instruction, the first message further includes instance request information for requesting the rendering instance of the interactive object. After the first message is sent to the server, the method further includes: receiving, through the SDK, rendering instance information returned by the server, where the rendering instance information is used to describe the rendering instance; the SDK acquires encoded audio and video frames from the server according to the rendering instance information, and decodes the encoded audio and video frames to obtain the interactive animation of the interactive object. The encoded audio and video frames are obtained by encoding the audio and video frames generated by the rendering engine.
See FIG. 3 for a schematic diagram of the process of starting the interactive object in the target application. As shown in FIG. 3, the server includes at least three modules, namely a rendering instance management service module 301, a rendering service module (rendering engine) 302, and an audio and video service module 303.
In response to receiving, in the target application, a startup instruction for the interactive object, that is, an instruction to start displaying the interactive object in the target application, the SDK 304 sends a first message containing instance request information to the server. The rendering instance management service module 301 receives the instance request information, determines the virtual model information of the interactive object according to the indication information of the startup instruction in the first message, that is, obtains the instance rendering service state used to indicate the virtual model, and synchronizes the instance rendering service state to the rendering service module 302. The rendering service module 302 renders the virtual model of the interactive object to obtain a plurality of audio and video frames. The rendering service module 302 transmits the generated audio and video frames to the audio and video service module 303, and returns the rendering instance information to the SDK 304. The audio and video service module 303 encodes the audio and video frames to obtain encoded audio and video frames. The SDK 304 acquires the encoded audio and video frames from the audio and video service module 303 according to the rendering instance information, and decodes them to obtain the interactive animation of the interactive object.
In response to receiving, in the target application, a closing instruction for the interactive object, the SDK 304 requests the rendering instance management service module 301 to release the rendering resources, and the rendering instance management service module 301 reclaims the rendering resources for later use.
In the embodiments of the present disclosure, the rendering engine and the virtual model are placed on the server, and the encoded audio and video frames are acquired from the server by using the rendering instance information and decoded to obtain the interactive animation of the interactive object, which lowers the performance requirements on the terminal device and reduces the occupation of terminal device resources.
In some embodiments, when the operation instruction includes a startup instruction, rendering is performed according to preset parameters of the target virtual model so that the interactive object presents a preset posture. For example, after the interactive object is started, the interactive object presents a smiling and waving posture.
In some embodiments, the operation instruction includes an interaction instruction used to control the posture of the interactive object. In this case, control parameters of the target virtual model can be generated according to the interaction instruction to drive the interactive object to perform the corresponding action. For example, when the interaction instruction instructs the interactive object to speak specified content, the interactive object is driven according to the mouth-shape control parameters for speaking the specified content while the voice plays that content, so that the interactive object makes the corresponding mouth shapes. In the embodiments of the present disclosure, the ways in which the interactive object responds to various interaction instructions may be preset, which is not limited in the present disclosure.
Fig. 4 shows a flowchart of another interaction method according to at least one embodiment of the present disclosure. The method is applied to a server, the server including a rendering engine. As shown in Fig. 4, the method includes steps 401 to 405.
In step 401, a first message is received from a terminal device.
The first message includes indication information for the virtual model of the interactive object and/or interactive information for controlling the pose of the interactive object, as well as instance request information, the instance request information being used to request a rendering instance of the interactive object.
In step 402, rendering instance information of the target virtual model is generated according to the instance request information and sent to the terminal device, where the rendering instance information describes the rendering instance.
In step 403, the target virtual model is rendered with the rendering engine to obtain audio/video frames.
In step 404, the audio/video frames are encoded to obtain encoded audio/video frames.
In step 405, an audio/video-frame fetch request based on the rendering instance information is received from the terminal device, and the encoded audio/video frames are sent to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio/video frames to obtain the interactive animation.
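Steps 401 to 405 can be condensed into a minimal server-side sketch, assuming an in-memory store of encoded frames keyed by render-instance id. The message fields, handler names, and stubbed rendering are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch of server steps 401-405: create render-instance info,
# render and encode frames, then answer the terminal's fetch request.
# Rendering and encoding are stubbed; names are assumptions.

import uuid

ENCODED_FRAMES = {}  # render-instance id -> encoded A/V frames

def handle_first_message(first_message: dict) -> str:
    """Steps 401-404: receive the first message, generate rendering
    instance information, render the target virtual model, and encode
    the resulting audio/video frames."""
    instance_id = str(uuid.uuid4())                  # step 402
    frames = [f"frame-{i}" for i in range(3)]        # step 403 (stubbed)
    ENCODED_FRAMES[instance_id] = [f.encode() for f in frames]  # step 404
    return instance_id  # rendering instance info returned to the terminal

def handle_fetch_request(instance_id: str) -> list:
    """Step 405: answer the terminal's fetch request based on the
    rendering instance information by returning the encoded frames."""
    return ENCODED_FRAMES[instance_id]
```

In practice the encoded frames would be pushed as a media stream rather than returned in one response, but the request/response contract follows these two handlers.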
In the embodiments of the present disclosure, the server generates rendering instance information according to the instance request information from the terminal device and returns it to the terminal device, and streams the encoded audio/video frames to the terminal device according to the terminal device's fetch request based on the rendering instance information, so that the terminal device can obtain the interactive animation quickly and efficiently, improving the target object's interactive experience.
In some embodiments, the server includes at least one virtual model of the interactive object, and the target virtual model is determined from the at least one virtual model according to the first message.
In some embodiments, the method further includes: in response to the first message being generated according to a start instruction received by the terminal device, generating a response animation of the interactive object according to preset parameters of the target virtual model.
In some embodiments, the method further includes: in response to the first message being generated according to an interactive instruction received by the terminal device, generating control parameters of the target virtual model according to the interactive instruction, and generating a response animation of the interactive object according to the control parameters.
At least one embodiment of the present disclosure further provides an interaction apparatus applied to a terminal device, the terminal device including a target application. As shown in Fig. 5, the apparatus includes: an obtaining unit 501 configured to, in response to an operation instruction for an interactive object being received in the target application, obtain an interactive animation of the interactive object according to the operation instruction; and a playing unit 502 configured to play the interactive animation of the interactive object in a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
In some embodiments, the terminal device has a software development kit (SDK) installed. When obtaining, in response to the operation instruction for the interactive object being received in the target application, the interactive animation of the interactive object according to the operation instruction, the obtaining unit 501 is specifically configured to: in response to the operation instruction for the interactive object being received in the target application, send a first message to the SDK so that the SDK obtains the interactive animation of the interactive object according to the first message, where the first message includes indication information for the virtual model of the interactive object and/or interactive information for controlling the pose of the interactive object; and obtain the interactive animation of the interactive object from the SDK.
In some embodiments, the apparatus further includes a layer setting unit configured to obtain setting information of the first layer from the SDK, the setting information of the first layer being generated by the SDK according to the first message.
In some embodiments, the SDK includes a rendering engine and at least one virtual model; the SDK determines a target virtual model from the at least one virtual model as the interactive object according to the first message; and the apparatus further includes a first rendering unit configured to render the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
In some embodiments, the apparatus further includes a communication unit configured to: send the first message to a server, the server including a rendering engine and at least one virtual model; and receive a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object, the interactive animation being obtained by rendering a target virtual model with the rendering engine, the target virtual model being determined from the at least one virtual model as the interactive object according to the first message.
In some embodiments, where the operation instruction includes a start instruction, the first message further includes instance request information; the SDK receives a second message with which the server responds to the first message, the second message including the interactive animation of the interactive object; and the apparatus further includes an instance request unit configured to: receive, through the SDK, rendering instance information returned by the server, the rendering instance information describing the rendering instance; and cause the SDK to obtain encoded audio/video frames from the server according to the rendering instance information and decode the encoded audio/video frames to obtain the interactive animation of the interactive object, where the encoded audio/video frames are obtained by encoding the audio/video frames generated by the server rendering the target virtual model with the rendering engine.
In some embodiments, where the operation instruction includes a start instruction, the interactive animation is generated according to preset parameters of the target virtual model.
In some embodiments, where the operation instruction includes an interactive instruction, the interactive animation is generated according to control parameters of the target virtual model, the control parameters being obtained according to the interactive instruction.
In some embodiments, the apparatus further includes a display unit configured to display, in a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
At least one embodiment of the present disclosure further provides an interaction apparatus applied to a server, the server including a rendering engine. As shown in Fig. 6, the apparatus includes: a receiving unit 601 configured to receive a first message from a terminal device, the first message including indication information for a virtual model of an interactive object and/or interactive information for controlling the pose of the interactive object, as well as instance request information; a generating unit 602 configured to generate rendering instance information of the target virtual model according to the instance request information and send the rendering instance information to the terminal device, where the rendering instance information describes the rendering instance; a second rendering unit 603 configured to render the target virtual model with the rendering engine to obtain audio/video frames; an encoding unit 604 configured to encode the audio/video frames to obtain encoded audio/video frames; and a sending unit 605 configured to, in response to an audio/video-frame fetch request from the terminal device based on the rendering instance information, send the encoded audio/video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio/video frames to obtain the interactive animation of the interactive object.
In some embodiments, the server further includes at least one virtual model; the target virtual model is determined by the server from the at least one virtual model as the interactive object according to the first message.
In some embodiments, the apparatus further includes a first animation generating unit configured to, in response to the first message being generated according to a start instruction received by the terminal device, generate a response animation of the interactive object according to preset parameters of the target virtual model.
In some embodiments, the apparatus further includes a second animation generating unit configured to, in response to the first message being generated according to an interactive instruction received by the terminal device, generate control parameters of the target virtual model according to the interactive instruction, and generate a response animation of the interactive object according to the control parameters.
At least one embodiment of the present disclosure further provides an electronic device. As shown in Fig. 7, the device includes a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the interaction method of any embodiment of the present disclosure when executing the computer instructions.
At least one embodiment of this specification further provides a computer-readable storage medium on which a computer program is stored, the program implementing the interaction method of any embodiment of the present disclosure when executed by a processor.
At least one embodiment of this specification further provides a computer program product including a computer program, the program implementing the interaction method of any embodiment of the present disclosure when executed by a processor.
Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the data processing device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the acts or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, a data processing apparatus. Alternatively or additionally, the program instructions can be encoded on an artificially generated propagated signal, for example a machine-generated electrical, optical, or electromagnetic signal, generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by special-purpose logic circuitry, for example an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and the apparatus can also be implemented as special-purpose logic circuitry.
Computers suitable for executing a computer program include, for example, general-purpose and/or special-purpose microprocessors, or any other kind of central processing unit. Generally, a central processing unit receives instructions and data from a read-only memory and/or a random-access memory. The essential elements of a computer include a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also includes, or is operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, for example magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, for example a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limiting the scope of any invention or of what may be claimed, but rather as descriptions of features of specific embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be removed from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
The above describes merely preferred embodiments of one or more embodiments of this specification and is not intended to limit them; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall fall within their scope of protection.

Claims (21)

  1. An interaction method, applied to a terminal device, the terminal device comprising a target application, the method comprising:
    in response to an operation instruction for an interactive object being received in the target application, obtaining an interactive animation of the interactive object according to the operation instruction;
    playing the interactive animation of the interactive object in a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
  2. The method according to claim 1, wherein the terminal device has a software development kit (SDK) installed, and the obtaining, in response to the operation instruction for the interactive object being received in the target application, the interactive animation of the interactive object according to the operation instruction comprises:
    in response to the operation instruction for the interactive object being received in the target application, sending, by the target application, a first message to the SDK, so that the SDK obtains the interactive animation of the interactive object according to the first message, wherein the first message comprises at least one of indication information for a virtual model of the interactive object and interactive information for controlling a pose of the interactive object;
    obtaining, by the target application, the interactive animation of the interactive object from the SDK.
  3. The method according to claim 2, wherein the SDK comprises a rendering engine and at least one virtual model, and the SDK obtaining the interactive animation of the interactive object according to the first message comprises:
    determining, by the SDK according to the first message, a target virtual model from the at least one virtual model as the interactive object; and
    rendering, by the SDK, the target virtual model with the rendering engine to obtain the interactive animation of the interactive object.
  4. The method according to claim 2, wherein the SDK obtaining the interactive animation of the interactive object according to the first message comprises:
    sending, by the SDK, the first message to a server, the server comprising a rendering engine and at least one virtual model;
    receiving, by the SDK, a second message with which the server responds to the first message, the second message comprising the interactive animation of the interactive object, the interactive animation being obtained by rendering a target virtual model with the rendering engine, the target virtual model being determined from the at least one virtual model as the interactive object according to the first message.
  5. The method according to claim 4, wherein, where the operation instruction comprises a start instruction, the first message further comprises instance request information, and the SDK receiving the second message with which the server responds to the first message, the second message comprising the interactive animation of the interactive object, comprises:
    receiving, by the SDK, rendering instance information returned by the server, the rendering instance information describing a rendering instance;
    obtaining, by the SDK, encoded audio/video frames from the server according to the rendering instance information; and
    decoding, by the SDK, the encoded audio/video frames to obtain the interactive animation of the interactive object, wherein the encoded audio/video frames are obtained by encoding audio/video frames generated by the server rendering the target virtual model with the rendering engine.
  6. The method according to any one of claims 2 to 5, wherein, where the operation instruction comprises a start instruction, the interactive animation is generated according to preset parameters of the target virtual model.
  7. The method according to any one of claims 3 to 5, wherein, where the operation instruction comprises an interactive instruction, the interactive animation is generated according to control parameters of the target virtual model, the control parameters being obtained according to the interactive instruction.
  8. The method according to any one of claims 2 to 7, further comprising:
    displaying, in a second layer of the page of the target application, prompt information generated according to the operation instruction, the second layer being a layer with controllable transparency added to the page of the target application.
  9. The method according to claim 8, further comprising:
    obtaining, by the target application, setting information of a preset layer from the SDK, the preset layer comprising at least one of the first layer and the second layer;
    setting and displaying, by the target application, the preset layer in the page of the target application according to the setting information of the preset layer.
  10. An interaction method, applied to a server, the server comprising a rendering engine, the method comprising:
    receiving a first message from a terminal device, the first message comprising at least one of indication information for a virtual model of an interactive object and interactive information for controlling a pose of the interactive object, as well as instance request information;
    generating rendering instance information of a target virtual model according to the instance request information,
    sending the rendering instance information to the terminal device, wherein the rendering instance information describes a rendering instance;
    rendering the target virtual model with the rendering engine to obtain audio/video frames;
    encoding the audio/video frames to obtain encoded audio/video frames;
    in response to an audio/video-frame fetch request from the terminal device based on the rendering instance information, sending the encoded audio/video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio/video frames to obtain an interactive animation of the interactive object.
  11. The method according to claim 10, wherein the server further comprises at least one virtual model, and the target virtual model is determined by the server from the at least one virtual model as the interactive object according to the first message.
  12. The method according to claim 10 or 11, further comprising:
    in response to the first message being generated according to a start instruction received by the terminal device, generating a response animation of the interactive object according to preset parameters of the target virtual model.
  13. The method according to claim 10 or 11, further comprising:
    in response to the first message being generated according to an interactive instruction received by the terminal device, generating control parameters of the target virtual model according to the interactive instruction, and
    generating a response animation of the interactive object according to the control parameters.
  14. An interaction apparatus, applied to a terminal device, the terminal device comprising a target application, the apparatus comprising:
    an obtaining unit configured to, in response to an operation instruction for an interactive object being received in the target application, obtain an interactive animation of the interactive object according to the operation instruction;
    a playing unit configured to play the interactive animation of the interactive object in a first layer of a page of the target application, the first layer being a transparent layer added to the page of the target application.
  15. An interaction apparatus, applied to a server, the server comprising a rendering engine, the apparatus comprising:
    a receiving unit configured to receive a first message from a terminal device, the first message comprising at least one of indication information for a virtual model of an interactive object and interactive information for controlling a pose of the interactive object, as well as instance request information;
    a generating unit configured to generate rendering instance information of a target virtual model according to the instance request information and send the rendering instance information to the terminal device, wherein the rendering instance information describes a rendering instance;
    a second rendering unit configured to render the target virtual model with the rendering engine to obtain audio/video frames;
    an encoding unit configured to encode the audio/video frames to obtain encoded audio/video frames;
    a sending unit configured to, in response to an audio/video-frame fetch request from the terminal device based on the rendering instance information, send the encoded audio/video frames to the terminal device according to the rendering instance information, so that the terminal device decodes the encoded audio/video frames to obtain an interactive animation of the interactive object.
  16. An electronic device, comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the method according to any one of claims 1 to 9 when executing the computer instructions.
  17. An electronic device, comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the method according to any one of claims 10 to 13 when executing the computer instructions.
  18. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 9.
  19. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 10 to 13.
  20. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
  21. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 10 to 13.
PCT/CN2022/086898 2021-05-19 2022-04-14 Method, apparatus, device, and storage medium for interaction WO2022242380A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110545352.5 2021-05-19
CN202110545352.5A CN113138765A (zh) 2021-05-19 2021-05-19 Interaction method, apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022242380A1 true WO2022242380A1 (zh) 2022-11-24

Family

ID=76817311


Country Status (2)

CN (1): CN113138765A (zh)
WO (1): WO2022242380A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113138765A (zh) * 2021-05-19 2021-07-20 北京市商汤科技开发有限公司 交互方法、装置、设备以及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2916224A1 (en) * 2014-03-07 2015-09-09 Crytek GmbH Virtual store management based on render services
CN107294838A (zh) * 2017-05-24 2017-10-24 腾讯科技(深圳)有限公司 Animation generation method, apparatus, system, and terminal for a social application
CN108491147A (zh) * 2018-04-16 2018-09-04 青岛海信移动通信技术股份有限公司 Human-computer interaction method based on a virtual character, and mobile terminal
CN109766150A (zh) * 2017-11-06 2019-05-17 广州市动景计算机科技有限公司 Method, apparatus, and terminal device for implementing interactive animation
CN110572717A (zh) * 2019-09-30 2019-12-13 北京金山安全软件有限公司 Video editing method and apparatus
CN111488090A (zh) * 2020-04-13 2020-08-04 北京市商汤科技开发有限公司 Interaction method, apparatus, interaction system, electronic device, and storage medium
CN113138765A (zh) * 2021-05-19 2021-07-20 北京市商汤科技开发有限公司 Interaction method, apparatus, device, and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116233044A (zh) * 2022-12-14 2023-06-06 深圳市爱彼利科技有限公司 Information interaction method, apparatus, device, and computer-readable storage medium
CN116233044B (zh) * 2022-12-14 2024-03-12 深圳市爱彼利科技有限公司 Information interaction method, apparatus, device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113138765A (zh) 2021-07-20

