CN113867538A - Interaction method, interaction device, computer equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN113867538A
CN113867538A (application number CN202111212391.XA)
Authority
CN
China
Prior art keywords
video stream
clients
user
rendering parameter
service
Prior art date
Legal status
Pending
Application number
CN202111212391.XA
Other languages
Chinese (zh)
Inventor
穆少垒
田升
刘云峰
Current Assignee
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd filed Critical Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202111212391.XA priority Critical patent/CN113867538A/en
Publication of CN113867538A publication Critical patent/CN113867538A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 — Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 — Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/262 — Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides an interaction method, an interaction apparatus, computer equipment and a computer-readable storage medium. A first rendering parameter of a target service is acquired; a display card (graphics card) is called to render the first rendering parameter and generate an interactive video stream, whose video content comprises a first operation dynamic of a virtual object on the service resource information of the target service; the interactive video stream is sent to a plurality of clients; when user touch information fed back by a first client among the plurality of clients is received, a second rendering parameter corresponding to the user touch information is acquired; the display card is called to render the second rendering parameter and generate a response video stream, whose video content comprises a second operation dynamic of the virtual object on the service resource information; and the response video stream is sent to the plurality of clients. In this way, the second operation dynamic in which the user of the first client adjusts the service resource information is displayed to the users of the other clients, realizing interaction between the virtual object and multiple users.

Description

Interaction method, interaction device, computer equipment and computer-readable storage medium
Technical Field
The present invention relates to the field of human-computer interaction technologies, and in particular, to an interaction method, an interaction apparatus, a computer device, and a computer-readable storage medium.
Background
In recent years, with the continuous development and application of information technology, virtual-object presentation scenarios such as digital human teachers and digital human customer service have become increasingly common in order to meet user demands.
However, in existing interaction technologies between users and virtual objects, only a single user can perform one-to-one business interaction with the virtual object, which cannot meet the requirements of services in which multiple users participate.
Disclosure of Invention
In view of the foregoing problems, the present invention provides an interaction method, an interaction apparatus, a computer device, and a computer-readable storage medium. The specific scheme is as follows:
according to a first aspect of the embodiments of the present invention, there is provided an interaction method applied to a rendering engine, where the rendering engine establishes communication connections with multiple clients, and the multiple clients participate in a target service, the method including:
acquiring a first rendering parameter of the target service;
calling a preset display card to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used for prompting the user to which a client belongs to execute a touch operation on the service resource information;
sending the interactive video stream to the plurality of clients;
when user touch information fed back by a first client in the multiple clients is received, acquiring a second rendering parameter corresponding to the user touch information;
calling the display card to render the second rendering parameter, and generating a response video stream; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information;
and sending the response video stream to the plurality of clients.
Optionally, the obtaining the first rendering parameter of the target service in the foregoing method includes:
receiving a business service request sent by a preset central control system;
and acquiring a first rendering parameter of the target service in the service request.
Optionally, in the method, the obtaining a second rendering parameter corresponding to the user touch message includes:
acquiring user operation information contained in the user touch message;
and traversing a preset configuration file according to the user operation information to acquire a second model rendering parameter corresponding to the user touch message.
In the foregoing method, optionally, after sending the response video to the plurality of clients, the method further includes:
when a touch message fed back by a second client in the plurality of clients is received, acquiring a third rendering parameter corresponding to the touch message; the touch message is information generated by the second client detecting the touch operation of the user to the service resource information;
calling the display card to render the third rendering parameter, and generating an operation dynamic video stream; the video content of the operation dynamic video stream comprises a third operation dynamic of the virtual object on the service resource information;
and sending the operation dynamic video stream to the plurality of clients.
In the foregoing method, optionally, after sending the response video to the plurality of clients, the method further includes:
and when a service completion instruction sent by a preset central control system is received, controlling the rendering engine to interrupt the communication connection with the plurality of clients.
According to a second aspect of the embodiments of the present invention, there is provided an interaction apparatus applied to a rendering engine, where the rendering engine establishes communication connections with a plurality of clients, and the plurality of clients participate in a target service, the apparatus including:
a first obtaining unit, configured to obtain a first rendering parameter of the target service;
the first generation unit is used for calling a preset display card to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used for prompting the user to which a client belongs to execute a touch operation on the service resource information;
a first transmission unit, configured to send the interactive video stream to the plurality of clients;
the second obtaining unit is used for obtaining a second rendering parameter corresponding to user touch information when the user touch information fed back by a first client in the plurality of clients is received;
the second generation unit is used for calling the display card to render the second rendering parameters and generate response video streams; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information;
a second transmission unit, configured to send the response video stream to the multiple clients.
The above apparatus, optionally, the first obtaining unit includes:
the receiving subunit is used for receiving a service request sent by a preset central control system;
and the first obtaining subunit is configured to obtain a first rendering parameter of the target service in the service request.
The above apparatus, optionally, the second obtaining unit includes:
the second acquiring subunit is used for acquiring user operation information contained in the user touch message;
and the execution subunit is used for traversing a preset configuration file according to the user operation information to acquire a second model rendering parameter corresponding to the user touch message.
The above apparatus, optionally, further comprises:
a third obtaining unit, configured to obtain a third rendering parameter corresponding to the touch information when receiving the touch information fed back by a second client in the multiple clients; the touch message is information generated by the second client detecting the touch operation of the user to the service resource information;
the third generation unit is used for calling the display card to render the third rendering parameter and generate an operation dynamic video stream; the video content of the operation dynamic video stream comprises a third operation dynamic of the virtual object on the service resource information;
a third transmission unit, configured to send the operation dynamic video stream to the multiple clients.
The above apparatus, optionally, further comprises:
and the control unit is used for controlling the rendering engine to interrupt the communication connection with the plurality of clients when receiving a service completion instruction sent by a preset central control system.
According to a third aspect of embodiments of the present invention, there is provided a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and wherein the computer program, when executed by the processor, causes the processor to perform the steps of the interaction method as described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the interaction method as described above.
The invention provides an interaction method, an interaction apparatus, computer equipment and a computer-readable storage medium, wherein a first rendering parameter of a target service can be acquired; a preset display card is called to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used for prompting the user to which a client belongs to execute a touch operation on the service resource information; the interactive video stream is sent to a plurality of clients; when user touch information fed back by a first client among the plurality of clients is received, a second rendering parameter corresponding to the user touch information is acquired; the display card is called to render the second rendering parameter and generate a response video stream; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information; and the response video stream is sent to the plurality of clients. Each client can display the response video stream on its interactive interface, so that the second operation dynamic in which the user of the first client adjusts the service resource information is displayed to the users of the other clients, realizing interaction between the virtual object and multiple users.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a schematic diagram of an application environment according to an embodiment of the present invention;
fig. 2 is a flowchart of an interaction method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method of another interaction method according to an embodiment of the present invention;
fig. 4 is a diagram illustrating an architecture of an interactive system according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method of another interaction method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
fig. 8 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present invention. The interaction method provided by the embodiment of the invention can be applied to the interaction system 100 shown in fig. 1. The interactive system 100 comprises a plurality of terminal devices 101 and a server 102, the server 102 being communicatively connected to the terminal devices 101. The server 102 may be a conventional server or a cloud server, and is not limited herein.
Each terminal device 101 may be various electronic devices that have a display screen, a data processing module, a camera, an audio input/output function, and the like, and support data input, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, self-service terminals, wearable electronic devices, and the like. Specifically, the data input may be inputting voice based on a voice module provided on the electronic device, inputting characters based on a character input module, and the like.
Each terminal device 101 may have a client application installed on it (e.g., an APP, a WeChat applet, etc.), through which the user interacts with the server. A user may register a user account with the server 102 through the client application and communicate with the server 102 based on that user account. For example, the user logs in to the user account in the client application and may input control touch information, text information, or voice information through it; after receiving the information input by the user, the client application may send it to the server 102, so that the server 102 can receive, process, and store the information, and the server 102 may also return corresponding output information to the terminal device 101 according to the received information.
In some embodiments, the apparatus for processing the data to be recognized may also be disposed on the terminal device 101, so that the terminal device 101 can interact with the user without relying on the server 102 to establish communication, and in this case, the interactive system 100 may only include the terminal device 101.
Referring to fig. 2, which shows a flowchart of an interaction method provided in an embodiment of the present invention, the interaction method may be applied to a server in which a rendering engine (UE) is deployed; the rendering engine in the server establishes communication connections with multiple clients, and the multiple clients participate in a target service. The interaction method specifically includes the following steps:
s201: and acquiring a first rendering parameter of the target service.
In the present embodiment, the target service may be any of various services requiring the cooperation of multiple persons, for example, a bank transaction service or a marriage-certificate service.
The first rendering parameters may include first scene parameters of the target service and first model parameters of the virtual object, the first model parameters including audio information of the virtual object, mouth motion parameters, and limb motion parameters.
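The composition described above can be sketched as a simple data structure. This is a minimal illustration only; all class and field names are hypothetical and do not come from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ModelParameters:
    # First model parameters driving the virtual object (digital human) itself.
    audio: bytes = b""                                   # speech audio of the virtual object
    mouth_motion: list = field(default_factory=list)     # lip-sync (mouth action) key frames
    limb_motion: list = field(default_factory=list)      # body-animation key frames

@dataclass
class RenderingParameters:
    scene: dict                # first scene parameters of the target service
    model: ModelParameters     # first model parameters of the virtual object

# A first rendering parameter for a hypothetical bank-transaction target service.
params = RenderingParameters(
    scene={"service": "bank_transaction", "background": "counter"},
    model=ModelParameters(audio=b"\x00\x01", mouth_motion=[0.1, 0.4], limb_motion=["wave"]),
)
```

The split mirrors the paragraph above: scene parameters describe the target service's environment, while model parameters bundle the audio, mouth-motion, and limb-motion data that the display card renders into the virtual object's behavior.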
S202: Calling a preset display card to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used for prompting the user to which a client belongs to execute a touch operation on the service resource information.
In this embodiment, the first operation dynamics of the virtual object on the service resource information may be a process in which the virtual object introduces the service resource information to a user to which the client belongs and directs the user to perform a touch operation on the service resource information. In some embodiments, the service resource information may be protocol information or job material of the target service, and the client prompted by the first operation dynamics may be each client, or may be a client currently having an operation right.
The virtual object can be a digital person such as a virtual customer service and a virtual teacher.
Optionally, the touch action may be single-point touch, sliding, or multi-point touch.
S203: and sending the interactive video stream to the plurality of clients.
In this embodiment, the interactive video stream is sent to each client, so that each client can display the interactive video stream in an interactive interface of the client, where the interactive interface may be a streaming media interface.
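The fan-out in S203 — one rendered stream pushed to every connected client — can be sketched as follows. This is an illustrative simplification (in-memory queues stand in for real streaming connections, and all names are hypothetical):

```python
from collections import defaultdict

class RenderingEngine:
    """Minimal sketch of S203: one rendering engine, many connected clients."""

    def __init__(self):
        self.client_queues = defaultdict(list)   # client_id -> received video chunks

    def connect(self, client_id):
        self.client_queues[client_id]            # register an empty receive queue

    def broadcast(self, video_chunk):
        # The same interactive video stream is sent to every connected client,
        # so every interactive interface displays the same virtual-object dynamics.
        for queue in self.client_queues.values():
            queue.append(video_chunk)

engine = RenderingEngine()
for cid in ("APP1", "APP2", "APP3"):
    engine.connect(cid)
engine.broadcast(b"interactive-frame-0")
```

Because every client receives an identical stream, the clients stay synchronized without exchanging state among themselves — only the rendering engine holds the authoritative picture.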
S204: and when user touch information fed back by a first client in the plurality of clients is received, acquiring a second rendering parameter corresponding to the user touch information.
In this embodiment, the touch information may be information generated by a user belonging to the first client performing a touch operation on an interactive interface of the first client, the touch information may include operation information of the user and/or user input information, and the user input information may be text information or voice information.
The second rendering parameters corresponding to the touch messages triggered by different touch operations may be different.
Optionally, the first client may be any client in the multiple clients, or may be a client currently having an operation right in the multiple clients.
S205: calling the display card to render the second rendering parameter, and generating a response video stream; and the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information.
In this embodiment, the display card may be a display card in the server, and the response video stream is rendered by the display card, and the response video stream may be used as a response result of the touch message.
The second operation dynamics of the virtual object on the service resource information may be that the virtual object adjusts the service resource information, broadcasts the adjusted service resource information, modifies the content, and the like, and the virtual object adjusts the service resource information may be one or more of adding content, modifying content, deleting content, and the like on the service resource information.
S206: and sending the response video stream to the plurality of clients.
In this embodiment, the response video stream is sent to each client, so that each client can display the response video stream on the interactive interface of the client, thereby displaying the second operation dynamics of the user of the first client for adjusting the service resource information to the users to which the other clients belong, and implementing the interaction between the virtual object and multiple users.
In an embodiment provided by the present invention, based on the foregoing implementation process, specifically, the obtaining a first rendering parameter of the target service includes:
receiving a business service request sent by a preset central control system;
and acquiring a first rendering parameter of the target service in the service request.
In this embodiment, each client may apply for participating in a target service from the central control system, and when receiving a participation request for the target service sent by each client, the central control system allocates the same rendering engine UE in an idle state to each client, and sends a service request to the rendering engine UE, and the rendering engine UE may obtain a first rendering parameter in the service request, so as to generate an interactive video stream according to the first rendering parameter.
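The allocation rule described above — every client joining the same target service gets the same idle rendering engine — can be sketched like this. The class and method names are illustrative, not taken from the patent:

```python
class CentralControl:
    """Sketch of the central control system's engine allocation in this embodiment."""

    def __init__(self, engines):
        self.idle = list(engines)    # pool of rendering engines in an idle state
        self.by_service = {}         # target-service id -> allocated engine

    def join(self, service_id):
        # First participant takes an idle engine; later participants in the
        # same target service are given the same engine.
        if service_id not in self.by_service:
            self.by_service[service_id] = self.idle.pop(0)
        return self.by_service[service_id]

    def complete(self, service_id):
        # On a service completion instruction, the engine returns to the idle pool.
        self.idle.append(self.by_service.pop(service_id))

cc = CentralControl(["UE1", "UE2"])
e1 = cc.join("marriage-cert-42")   # first client opens the target service
e2 = cc.join("marriage-cert-42")   # second client joins the same target service
```

Keying the allocation on the target service rather than on the client is what guarantees that all participants interact with the same virtual object.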
In an embodiment provided by the present invention, based on the foregoing implementation process, specifically, the obtaining a second rendering parameter corresponding to the user touch message includes:
and acquiring user operation information contained in the user touch message.
The user operation information may include a touch operation performed by a user and a control triggered by the touch operation.
And traversing a preset configuration file according to the user operation information to acquire a second model rendering parameter corresponding to the user touch message.
In this embodiment, the configuration file records correspondence between different user operation information and different model rendering events, and the model rendering event corresponding to the operation information, that is, the model rendering event corresponding to the touch message, may be determined by traversing the configuration file through the user operation information.
By applying the method provided by the embodiment of the invention, the touch information can be quickly translated to obtain the model rendering event corresponding to the touch information.
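The configuration-file traversal can be sketched as a lookup over a list of entries, each mapping user operation information (the triggered control plus the touch type) to a model rendering event. The configuration contents and names below are hypothetical:

```python
# Hypothetical configuration file: user operation information -> model rendering event.
CONFIG = [
    {"control": "sign_button",   "touch": "tap",   "event": "render_signature"},
    {"control": "clause_field",  "touch": "tap",   "event": "render_clause_edit"},
    {"control": "document_view", "touch": "slide", "event": "render_page_turn"},
]

def lookup_rendering_event(operation):
    """Traverse the preset configuration file and return the model rendering
    event corresponding to the user touch message (None if nothing matches)."""
    for entry in CONFIG:
        if entry["control"] == operation["control"] and entry["touch"] == operation["touch"]:
            return entry["event"]
    return None

event = lookup_rendering_event({"control": "sign_button", "touch": "tap"})
```

A linear traversal is shown because that is what the text describes; for a large configuration the same mapping could just as well be loaded into a hash table keyed on (control, touch).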
In some embodiments, referring to fig. 3, a flowchart of another method of an interaction method provided in an embodiment of the present invention includes the following steps:
s301: and acquiring a first rendering parameter of the target service.
S302: Calling a preset display card to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used for prompting the user to which a client belongs to execute a touch operation on the service resource information.
S303: and sending the interactive video stream to the plurality of clients.
S304: and when user touch information fed back by a first client in the plurality of clients is received, acquiring a second rendering parameter corresponding to the user touch information.
S305: calling the display card to render the second rendering parameter, and generating a response video stream; and the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information.
S306: and sending the response video stream to the plurality of clients.
In this embodiment, the implementation processes of S301 to S306 are the same as those of S201 to S206 in the embodiment of fig. 2, and are not described here again.
S307: when a touch message fed back by a second client in the plurality of clients is received, acquiring a third rendering parameter corresponding to the touch message; the touch message is information generated by the second client detecting the touch operation of the user to the service resource information.
In this embodiment, the second client may be any of the clients other than the first client.
Before the touch message fed back by the second client is received, a second interactive video stream may be sent to each client to prompt the user to which the second client belongs to execute a touch operation on the service resource information.
S308: calling the display card to render the third rendering parameter, and generating an operation dynamic video stream; the video content of the operation dynamic video stream comprises a third operation dynamic of the virtual object to the service resource information.
The third operation dynamics of the virtual object on the service resource information may be that the virtual object adjusts the service resource information, broadcasts the adjusted service resource information, modifies the content, and the like, and the virtual object adjusts the service resource information may be one or more of adding content, modifying content, deleting content, and the like on the service resource information.
S309: and sending the operation dynamic video stream to the plurality of clients.
In this embodiment, the operation dynamic video stream is sent to each client, so that each client can display it on its interactive interface; the third operation dynamic, in which the user of the second client adjusts the service resource information, is thereby displayed to the users of the other clients, realizing interaction between the virtual object and multiple users.
In an embodiment provided by the present invention, based on the foregoing implementation process, specifically, after sending the response video to the plurality of clients, the method further includes:
and when a service completion instruction sent by a preset central control system is received, controlling the rendering engine to interrupt the communication connection with the plurality of clients.
In this embodiment, after determining that the target service is completed, the central control system may send a service completion instruction to the rendering engine, so that the rendering engine disconnects the communication connections with the plurality of clients and returns to an idle state.
In some embodiments, as shown in fig. 4, which is an exemplary architecture diagram of an interactive system provided in an embodiment of the present invention, a first client APP1 may send a transaction request for a target service to the central control system of the server. The central control system selects an idle rendering engine from the candidate rendering engines and allocates this idle rendering engine UE1 to the first client APP1; when a participation request for the target service sent by a second client APP2 is received, a rendering engine is likewise allocated to APP2, and it is the same rendering engine as the one allocated to APP1. The two clients APP1 and APP2 thus receive video streams from the same rendering engine, that is, they operate the same digital human (virtual object); the rendering engine UE1 can receive messages from APP1 and APP2 at the same time, respond to them synchronously, and cooperatively complete the operation flow. UE1 may forward the user input messages (touch messages and interactive messages) of APP1 and APP2 to the central control system. The central control system sends the user audio in a user input message to a preset question-answer processing module (ASR-bot-TTS) to obtain a reply audio, and sends the reply audio to a lip-sync ("mouth-type") inference server, which returns the mouth action data corresponding to the reply audio. The central control system then sends the reply audio and the mouth action data to the rendering engine UE1; UE1 renders them to obtain an interactive video stream or a response video stream of the virtual object and sends the stream to each client.
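The voice path through this architecture — user audio in, response video stream out — can be sketched with stub functions standing in for the question-answer module, the lip-sync inference server, and the rendering engine. Every function here is an illustrative placeholder, not an API from the patent or any real system:

```python
def question_answer(user_audio):
    # Stand-in for the ASR-bot-TTS question-answer processing module:
    # user speech in, reply speech out.
    return b"reply:" + user_audio

def infer_mouth_motion(reply_audio):
    # Stand-in for the lip-sync ("mouth-type") inference server:
    # reply audio in, mouth action data out (fake key-frame data here).
    return [len(reply_audio) % 10]

def render_response(reply_audio, mouth_motion):
    # Stand-in for the rendering engine producing a response video stream.
    return {"audio": reply_audio, "mouth": mouth_motion}

def handle_user_input(user_audio):
    """One pass through the pipeline the central control system orchestrates."""
    reply = question_answer(user_audio)
    motion = infer_mouth_motion(reply)
    return render_response(reply, motion)

stream = handle_user_input(b"hello")
```

In the described system these stages run on separate components connected through the central control system; the sketch collapses them into one process purely to show the order of the hand-offs.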
The interaction method provided by the embodiment of the invention can be applied to various scenarios. For example, in a banking scenario, a first user and a second user transact a business through the bank's digital human and need to sign a contract at the same time. In this scenario, fig. 5 shows a method flow chart of another interaction method provided by the embodiment of the present invention. The first user requests the target service from the bank's digital human, and the central control system in the server allocates an idle rendering engine UE1 to the first user. According to the business requirements, the central control system waits for the second user to join the target service; the second user applies to the digital human to join and to complete the target service cooperatively with the first user, and the central control system allocates to the second user the same rendering engine UE1 as the first user. Under the guidance of the digital human's speech, the first user signs and modifies contract terms, and the first user's signing and contract-modification operations are dynamically synchronized to the second user through UE1. Likewise, the second user signs and modifies contract terms under the guidance of the digital human, and the second user's operations are dynamically synchronized to the first user through UE1. In this way, the first user and the second user cooperatively complete the target service through the central control system.
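The two-way synchronization in this banking scenario can be sketched as a shared engine that re-renders after every operation and broadcasts the resulting frame to all connected clients. The class `SharedEngine`, the frame strings, and the operation names below are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch: whichever user performs an operation, the shared engine
# re-renders and EVERY connected client receives the same frame, so each
# user sees the other user's signing and contract edits.

class SharedEngine:
    def __init__(self):
        self.clients = {}           # client id -> list of frames received
        self.state = []             # operations applied so far

    def connect(self, client_id):
        self.clients[client_id] = []

    def apply_operation(self, client_id, op):
        self.state.append((client_id, op))
        frame = f"frame after {op} by {client_id}"
        for frames in self.clients.values():   # broadcast to all clients
            frames.append(frame)

ue1 = SharedEngine()
ue1.connect("user1")
ue1.connect("user2")
ue1.apply_operation("user1", "sign")
ue1.apply_operation("user2", "edit-clause")
# both users received both frames, in the same order
assert ue1.clients["user1"] == ue1.clients["user2"]
```

The key design point is that synchronization is a by-product of the shared render: there is one authoritative scene state, so the clients can never diverge.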
Corresponding to the method described in fig. 2, an embodiment of the present invention further provides an interaction apparatus for implementing the method in fig. 2. The interaction apparatus provided in the embodiment of the present invention may be applied to a computer device; its schematic structural diagram is shown in fig. 6, and it specifically includes:
a first obtaining unit 601, configured to obtain a first rendering parameter of the target service;
a first generating unit 602, configured to invoke a preset graphics card to render the first rendering parameter, so as to generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used to prompt the user to which each client belongs to perform a touch operation on the service resource information;
a first transmission unit 603, configured to send the interactive video stream to the multiple clients;
a second obtaining unit 604, configured to obtain, when receiving user touch information fed back by a first client in the multiple clients, a second rendering parameter corresponding to the user touch information;
a second generating unit 605, configured to invoke the graphics card to render the second rendering parameter, and generate a response video stream; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information;
a second transmitting unit 606, configured to send the response video stream to the multiple clients.
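The pipeline implemented by units 601 to 606 (obtain first rendering parameter, render, broadcast; on user touch, obtain second rendering parameter, render, broadcast) can be sketched as below. The `render` placeholder stands in for the graphics-card call, and all names are illustrative assumptions.

```python
# Sketch of the rendering-engine pipeline implemented by units 601-606.

def render(params):
    """Placeholder for invoking the graphics card on a rendering parameter."""
    return f"video-stream({params})"

class InteractionDevice:
    def __init__(self, clients):
        self.clients = clients
        self.sent = []                      # (stream, recipients) log

    def broadcast(self, stream):            # first / second transmission unit
        self.sent.append((stream, tuple(self.clients)))

    def start(self, first_params):          # units 601-603
        self.broadcast(render(first_params))

    def on_touch(self, touch_info):         # units 604-606
        second_params = f"params-for-{touch_info}"
        self.broadcast(render(second_params))

dev = InteractionDevice(["APP1", "APP2"])
dev.start("intro-animation")
dev.on_touch("tap-sign-button")
assert len(dev.sent) == 2                               # two video streams
assert all(r == ("APP1", "APP2") for _, r in dev.sent)  # sent to all clients
```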
Optionally, in the above apparatus, the first obtaining unit includes:
the receiving subunit is used for receiving a service request sent by a preset central control system;
and the first obtaining subunit is configured to obtain a first rendering parameter of the target service in the service request.
In an embodiment provided by the present invention, based on the above scheme, optionally, the second obtaining unit 604 includes:
the second acquiring subunit is used for acquiring user operation information contained in the user touch message;
and the execution subunit is configured to traverse a preset configuration file according to the user operation information, so as to obtain the second rendering parameter corresponding to the user touch message.
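One plausible reading of "traversing a preset configuration file" is looking the user operation up in a mapping loaded from configuration to find the matching rendering parameter. The keys and parameter values below are invented for illustration only.

```python
# Hypothetical configuration mapping user operations to rendering parameters.
CONFIG = {
    "tap-sign-field": {"animation": "point-at-signature", "duration_s": 2},
    "edit-clause":    {"animation": "nod-and-explain",    "duration_s": 4},
}

def second_rendering_parameter(user_operation, config=CONFIG):
    """Traverse the configuration entries for the one matching the operation."""
    for operation, params in config.items():
        if operation == user_operation:
            return params
    return None                     # no entry matches this operation

assert second_rendering_parameter("tap-sign-field")["duration_s"] == 2
assert second_rendering_parameter("unknown-op") is None
```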
In an embodiment provided by the present invention, based on the above scheme, optionally, the interaction apparatus further includes:
a third obtaining unit, configured to obtain a third rendering parameter corresponding to a touch message when the touch message fed back by a second client in the multiple clients is received; the touch message is information generated when the second client detects the user's touch operation on the service resource information;
the third generation unit is used for calling the graphics card to render the third rendering parameter and generate an operation dynamic video stream; the video content of the operation dynamic video stream comprises a third operation dynamic of the virtual object on the service resource information;
a third transmission unit, configured to send the operation dynamic video stream to the multiple clients.
In an embodiment provided by the present invention, based on the above scheme, optionally, the interaction apparatus further includes:
and the control unit is used for controlling the rendering engine to interrupt the communication connection with the plurality of clients when receiving a service completion instruction sent by a preset central control system.
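The control unit's behaviour — on a service-completion instruction from the central control system, the rendering engine drops its client connections and becomes idle again — can be sketched as follows. The class, instruction string, and method names are hypothetical.

```python
# Hedged sketch of the control unit: a service-completion instruction
# interrupts the communication connections and frees the engine.

class Engine:
    def __init__(self, clients):
        self.clients = list(clients)

    @property
    def idle(self):
        return not self.clients

    def on_instruction(self, instruction):
        if instruction == "SERVICE_COMPLETE":
            self.clients.clear()     # interrupt connections to all clients

e = Engine(["APP1", "APP2"])
assert not e.idle
e.on_instruction("SERVICE_COMPLETE")
assert e.idle                        # engine can now serve a new service
```

Returning the engine to the idle pool is what lets the central control system reallocate it to the next target service.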
The specific principle and the implementation process of each unit and module in the interaction apparatus disclosed in the above embodiment of the present invention are the same as those of the interaction method disclosed in the above embodiment of the present invention, and reference may be made to corresponding parts in the interaction method provided in the above embodiment of the present invention, which are not described herein again.
Fig. 7 is a block diagram of a computer device 700 according to an embodiment of the present invention. The computer device 700 may be a personal computer, a tablet computer, a server, an industrial computer, or the like capable of running an application. The computer device 700 of the present invention may include one or more of the following components: a processor 701, a memory 702, and one or more applications, wherein the one or more applications may be stored in the memory 702 and configured to be executed by the one or more processors 701, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 701 may include one or more processing cores. The processor 701 connects the various components of the computer device 700 using various interfaces and lines, and performs the functions of the computer device 700 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 702 and invoking the data stored in the memory 702. Alternatively, the processor 701 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 701 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communications. It is to be understood that the modem may also not be integrated into the processor 701 and may instead be implemented by a separate communication chip.
The memory 702 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 702 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 702 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The data storage area may store data created by the computer device 700 during use (e.g., phone books, audio-visual data, chat log data), and the like.
Fig. 8 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 may include a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 801 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 801 may, for example, be compressed in a suitable form.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in a plurality of software and/or hardware when implementing the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The interaction method provided by the present invention is described in detail above. Specific examples have been used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An interaction method applied to a rendering engine, wherein the rendering engine establishes communication connections with a plurality of clients, and the plurality of clients participate in a target service, and the method comprises the following steps:
acquiring a first rendering parameter of the target service;
calling a preset graphics card to render the first rendering parameter, and generating an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used to prompt the user to which each client belongs to perform a touch operation on the service resource information;
sending the interactive video stream to the plurality of clients;
when user touch information fed back by a first client in the multiple clients is received, acquiring a second rendering parameter corresponding to the user touch information;
calling the graphics card to render the second rendering parameter, and generating a response video stream; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information;
and sending the response video stream to the plurality of clients.
2. The method of claim 1, wherein the obtaining the first rendering parameter of the target service comprises:
receiving a service request sent by a preset central control system;
and acquiring a first rendering parameter of the target service in the service request.
3. The method of claim 1, wherein the obtaining of the second rendering parameter corresponding to the user touch message comprises:
acquiring user operation information contained in the user touch message;
and traversing a preset configuration file according to the user operation information to obtain the second rendering parameter corresponding to the user touch message.
4. The method of claim 1, wherein after sending the response video stream to the plurality of clients, the method further comprises:
when a touch message fed back by a second client in the plurality of clients is received, acquiring a third rendering parameter corresponding to the touch message; the touch message is information generated when the second client detects the user's touch operation on the service resource information;
calling the graphics card to render the third rendering parameter, and generating an operation dynamic video stream; the video content of the operation dynamic video stream comprises a third operation dynamic of the virtual object on the service resource information;
and sending the operation dynamic video stream to the plurality of clients.
5. The method of claim 1, wherein after sending the response video stream to the plurality of clients, the method further comprises:
and when a service completion instruction sent by a preset central control system is received, controlling the rendering engine to interrupt the communication connection with the plurality of clients.
6. An interaction apparatus applied to a rendering engine, wherein the rendering engine establishes communication connections with a plurality of clients, and the plurality of clients participate in a target service, the apparatus comprising:
a first obtaining unit, configured to obtain a first rendering parameter of the target service;
the first generation unit is used for calling a preset graphics card to render the first rendering parameter and generate an interactive video stream; the video content of the interactive video stream comprises a first operation dynamic of the virtual object on the service resource information of the target service; the first operation dynamic is used to prompt the user to which each client belongs to perform a touch operation on the service resource information;
a first transmission unit, configured to send the interactive video stream to the plurality of clients;
the second obtaining unit is used for obtaining a second rendering parameter corresponding to user touch information when the user touch information fed back by a first client in the plurality of clients is received;
the second generation unit is used for calling the graphics card to render the second rendering parameter and generate a response video stream; the video content of the response video stream comprises a second operation dynamic of the virtual object on the service resource information;
a second transmission unit, configured to send the response video stream to the multiple clients.
7. The apparatus of claim 6, wherein the first obtaining unit comprises:
the receiving subunit is used for receiving a service request sent by a preset central control system;
and the first obtaining subunit is configured to obtain a first rendering parameter of the target service in the service request.
8. The apparatus of claim 6, wherein the second obtaining unit comprises:
the second acquiring subunit is used for acquiring user operation information contained in the user touch message;
and the execution subunit is used for traversing a preset configuration file according to the user operation information to obtain the second rendering parameter corresponding to the user touch message.
9. A computer device comprising a memory and a processor, the memory having stored thereon a computer program, characterized in that the computer program, when executed by the processor, causes the processor to carry out the steps of the interaction method according to any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the interaction method according to any one of claims 1 to 5.
CN202111212391.XA 2021-10-18 2021-10-18 Interaction method, interaction device, computer equipment and computer-readable storage medium Pending CN113867538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111212391.XA CN113867538A (en) 2021-10-18 2021-10-18 Interaction method, interaction device, computer equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113867538A 2021-12-31

Family

ID=79000196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111212391.XA Pending CN113867538A (en) 2021-10-18 2021-10-18 Interaction method, interaction device, computer equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113867538A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878820A (en) * 2016-12-09 2017-06-20 北京小米移动软件有限公司 Living broadcast interactive method and device
CN111541908A (en) * 2020-02-27 2020-08-14 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
CN111729293A (en) * 2020-08-28 2020-10-02 腾讯科技(深圳)有限公司 Data processing method, device and storage medium
CN113365130A (en) * 2020-03-03 2021-09-07 广州虎牙科技有限公司 Live broadcast display method, live broadcast video acquisition method and related devices

Similar Documents

Publication Publication Date Title
CN110298906B (en) Method and device for generating information
KR102428368B1 (en) Initializing a conversation with an automated agent via selectable graphical element
US12026421B2 (en) Screen sharing method, apparatus, and device, and storage medium
KR102199434B1 (en) System and method for sharing message of messenger application
CN113766253A (en) Live broadcast method, device, equipment and storage medium based on virtual anchor
US11890540B2 (en) User interface processing method and device
US11303596B2 (en) Method and a device for processing information
CN112118215A (en) Convenient real-time conversation based on topic determination
CN114025186A (en) Virtual voice interaction method and device in live broadcast room and computer equipment
CN113377312A (en) Same-screen interaction method and device, computer equipment and computer readable storage medium
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
CN108574878B (en) Data interaction method and device
CN111880756A (en) Online classroom screen projection method and device, electronic equipment and storage medium
CN113849117A (en) Interaction method, interaction device, computer equipment and computer-readable storage medium
US20160292564A1 (en) Cross-Channel Content Translation Engine
CN113867538A (en) Interaction method, interaction device, computer equipment and computer-readable storage medium
TW202336581A (en) Electronic comic distribution system, electronic comic distribution program, and application program
CN116192789A (en) Cloud document processing method and device and electronic equipment
WO2021090750A1 (en) Information processing device, information processing method, and program
CN114884914A (en) Application program same-screen communication method and system
CN110989910A (en) Interaction method, system, device, electronic equipment and storage medium
CN114153362A (en) Information processing method and device
CN113849069A (en) Image replacing method and device, storage medium and electronic equipment
CN113419650A (en) Data moving method and device, storage medium and electronic equipment
CN113641439A (en) Text recognition and display method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination