CN111935489A - Network live broadcast method, information display method and device, live broadcast server and terminal equipment


Info

Publication number
CN111935489A
Authority
CN
China
Prior art keywords: live, target object, images, target, live broadcast
Prior art date
Legal status
Granted
Application number
CN201910395218.4A
Other languages
Chinese (zh)
Other versions
CN111935489B (en)
Inventor
郑萌萌
程杭
徐珊
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910395218.4A
Publication of CN111935489A
Application granted
Publication of CN111935489B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 21/2187 Live feed
    • H04N 21/2542 Management at additional data server, e.g. shopping server, rights management server, for selling goods, e.g. TV shopping
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47815 Electronic shopping
    • H04N 21/8153 Monomedia components involving graphical data comprising still images, e.g. texture, background image
    • H04N 21/816 Monomedia components involving special video data, e.g. 3D video
    • Y02D 30/70 Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a network live broadcast method, an information display method and apparatus, a live broadcast server, and a terminal device, relating to the field of network technologies. In the embodiments, at least two target images of a target object in live data are acquired for the target object, at least two associated objects corresponding to the target object are determined, and the at least two associated objects are rendered into the corresponding target images to obtain at least two effect images, which are displayed simultaneously on a live interface of a client. The multiple simultaneously displayed effect images further improve the live broadcast effect.

Description

Network live broadcast method, information display method and device, live broadcast server and terminal equipment
Technical Field
Embodiments of the present application relate to the field of network technologies, and in particular, to a network live broadcast method, an information display method and apparatus, a live broadcast server, and a terminal device.
Background
Currently, when a live user introduces commodities during a network live broadcast, the live user usually explains the differences among different commodities so that viewers can understand the commodities more comprehensively and deeply.
For example, when introducing apparel commodities during a live broadcast, a live user usually compares the style, color, size, and so on of similar garments. To make the introduction more vivid, the live user frequently tries on different garments so that viewers can perceive the contrast through the fittings. However, the intervals between fittings are too long, which weakens the viewers' visual impression, and frequent fitting greatly increases the live user's workload, degrading the live broadcast effect.
Disclosure of Invention
Embodiments of the present application provide a network live broadcast method, an information display method and apparatus, a live broadcast server, and a terminal device.
In a first aspect, an embodiment of the present application provides a network live broadcast method, including:
acquiring, for a target object in live data, at least two target images of the target object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects into the corresponding target images to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of a client.
In a second aspect, an embodiment of the present application provides a network live broadcast method, including:
collecting live data, where the live data includes a target object;
and sending the live data to a server, so that the server acquires, for the target object in the live data, at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images to obtain at least two effect images that are displayed simultaneously on a live interface of a client.
In a third aspect, an embodiment of the present application provides an information display method, including:
providing a live broadcast interface, where the live broadcast interface is used for displaying live data;
acquiring at least two effect images corresponding to a target object in the live data, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are acquired by a server for the target object;
and displaying the at least two effect images simultaneously on the live broadcast interface.
In a fourth aspect, an embodiment of the present application provides a network live broadcast method, including:
determining a target object based on a target object selection instruction sent by a client, and acquiring at least two target images of the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user of the client on any preset object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects into the corresponding target images to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
In a fifth aspect, an embodiment of the present application provides an information display method, including:
providing a live broadcast interface, where the live broadcast interface is used for displaying live data;
sending a target object selection instruction to a server, so that the server determines a target object based on the target object selection instruction and acquires at least two target images corresponding to the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user on any preset object;
acquiring at least two effect images corresponding to the target object, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are acquired by the server for the target object;
and displaying the at least two effect images simultaneously on the live broadcast interface.
In a sixth aspect, an embodiment of the present application provides a network live broadcast apparatus, including:
a first acquisition module, configured to acquire, for a target object in live data, at least two target images of the target object;
a first determining module, configured to determine at least two associated objects corresponding to the target object;
and a first display module, configured to render the at least two associated objects into the corresponding target images to obtain at least two effect images, and display the effect images simultaneously on a live interface of a client.
In a seventh aspect, an embodiment of the present application provides a network live broadcast apparatus, including:
a live data collection module, configured to collect live data, where the live data includes a target object;
and a live data sending module, configured to send the live data to a server, so that the server acquires, for the target object in the live data, at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images to obtain at least two effect images that are displayed simultaneously on a live interface of a client.
In an eighth aspect, an embodiment of the present application provides an information display apparatus, including:
a first live interface providing module, configured to provide a live broadcast interface, where the live broadcast interface is used for displaying live data;
a second acquisition module, configured to acquire at least two effect images corresponding to a target object in the live data, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are acquired by a server for the target object;
and a second display module, configured to display the at least two effect images simultaneously on the live broadcast interface.
In a ninth aspect, an embodiment of the present application provides a network live broadcast apparatus, including:
a third acquisition module, configured to determine a target object based on a target object selection instruction sent by a client and acquire at least two target images of the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user of the client on any preset object;
a second determining module, configured to determine at least two associated objects corresponding to the target object;
and a third display module, configured to render the at least two associated objects into the corresponding target images to obtain at least two effect images, and display the effect images simultaneously on a live interface of the client.
In a tenth aspect, an embodiment of the present application provides an information display apparatus, including:
a second live interface providing module, configured to provide a live broadcast interface, where the live broadcast interface is used for displaying live data;
a selection instruction sending module, configured to send a target object selection instruction to a server, so that the server determines a target object based on the target object selection instruction and acquires at least two target images corresponding to the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user on any preset object;
a fourth acquisition module, configured to acquire at least two effect images corresponding to the target object, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are acquired by the server for the target object;
and a fourth display module, configured to display the at least two effect images simultaneously on the live broadcast interface.
In an eleventh aspect, an embodiment of the present application provides a live broadcast server, including a processing component and a storage component, where the storage component stores one or more computer instructions to be invoked and executed by the processing component;
the processing component is configured to:
acquire, for a target object in live data, at least two target images of the target object;
determine at least two associated objects corresponding to the target object;
and render the at least two associated objects into the corresponding target images to obtain at least two effect images, and display the effect images simultaneously on a live interface of a client.
In a twelfth aspect, an embodiment of the present application provides a live broadcast server, including a processing component and a storage component, where the storage component stores one or more computer instructions to be invoked and executed by the processing component;
the processing component is configured to:
determine a target object based on a target object selection instruction sent by a client and acquire at least two target images of the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user of the client on any preset object;
determine at least two associated objects corresponding to the target object;
and render the at least two associated objects into the corresponding target images to obtain at least two effect images, and display the effect images simultaneously on a live interface of the client.
In a thirteenth aspect, an embodiment of the present application provides a terminal device, including a processing component and a storage component, where the storage component stores one or more computer program instructions to be invoked and executed by the processing component;
the processing component is configured to:
collect live data, where the live data includes a target object;
and send the live data to a server, so that the server acquires, for the target object in the live data, at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images to obtain at least two effect images that are displayed simultaneously on a live interface of a client.
In a fourteenth aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component, where the storage component stores one or more computer program instructions to be invoked and executed by the processing component;
the processing component is configured to:
provide a live broadcast interface through the display component, where the live broadcast interface is used for displaying live data;
acquire at least two effect images corresponding to a target object in the live data, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are acquired by a server for the target object;
and display the at least two effect images simultaneously on the live broadcast interface.
In a fifteenth aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component, where the storage component stores one or more computer program instructions to be invoked and executed by the processing component;
the processing component is configured to:
provide a live broadcast interface through the display component, where the live broadcast interface is used for displaying live data;
send a target object selection instruction to a server, so that the server determines a target object based on the target object selection instruction and acquires at least two target images corresponding to the target object, where the target object selection instruction is generated based on a selection operation performed by a viewing user on any preset object;
acquire at least two effect images corresponding to the target object, where the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are acquired by the server for the target object;
and display the at least two effect images simultaneously on the live broadcast interface.
Embodiments of the present application provide a network live broadcast method, an information display method, a live broadcast server, and a terminal device. During a network live broadcast, at least two target images of a target object in the live data are acquired and at least two associated objects corresponding to the target object are determined, so that the at least two associated objects are rendered into the corresponding target images to obtain at least two effect images displayed simultaneously on a live interface of a client. Because the at least two effect images are obtained by rendering different associated objects into at least two target images of the target object, comparison can be performed more intuitively based on the multiple simultaneously displayed effect images, yielding a more intuitive experience and improving the viewing experience of viewers.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart illustrating an embodiment of a network live broadcast method provided in accordance with the present application;
Fig. 2 illustrates a schematic diagram of an effect image presentation method provided in accordance with the present application;
Fig. 3 is a schematic flowchart illustrating another embodiment of a network live broadcast method provided in accordance with the present application;
Fig. 4 is a schematic flowchart illustrating still another embodiment of a network live broadcast method provided in accordance with the present application;
Fig. 5 is a schematic flowchart illustrating an embodiment of an information display method provided in accordance with the present application;
Fig. 6 is a schematic flowchart illustrating yet another embodiment of a network live broadcast method provided in accordance with the present application;
Fig. 7 is a schematic flowchart illustrating another embodiment of an information display method provided in accordance with the present application;
Fig. 8 is a schematic structural diagram illustrating an embodiment of a network live broadcast apparatus provided in accordance with the present application;
Fig. 9 is a schematic structural diagram illustrating another embodiment of a network live broadcast apparatus provided in accordance with the present application;
Fig. 10 is a schematic structural diagram illustrating still another embodiment of a network live broadcast apparatus provided in accordance with the present application;
Fig. 11 is a schematic structural diagram illustrating an embodiment of an information display apparatus provided in accordance with the present application;
Fig. 12 is a schematic structural diagram illustrating yet another embodiment of a network live broadcast apparatus provided in accordance with the present application;
Fig. 13 is a schematic structural diagram illustrating another embodiment of an information display apparatus provided in accordance with the present application;
Fig. 14 is a schematic structural diagram illustrating an embodiment of a live broadcast server provided in accordance with the present application;
Fig. 15 is a schematic structural diagram illustrating another embodiment of a live broadcast server provided in accordance with the present application;
Fig. 16 is a schematic structural diagram illustrating an embodiment of a terminal device provided in accordance with the present application;
Fig. 17 is a schematic structural diagram illustrating another embodiment of a terminal device provided in accordance with the present application;
Fig. 18 is a schematic structural diagram illustrating still another embodiment of a terminal device provided in accordance with the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims, and drawings of this application, a number of operations appear in a particular order; however, it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel. Operation numbers such as 101 and 102 are merely used to distinguish different operations, and the numbers themselves do not imply any execution order. In addition, these flows may include more or fewer operations, which may be executed sequentially or in parallel. Note that terms such as "first" and "second" herein are used to distinguish different messages, devices, modules, and so on; they neither imply a sequence nor require that "first" and "second" be of different types.
As described in the Background, when introducing apparel commodities during a live broadcast, a live user needs to change clothes frequently in order to display and explain them, so that viewers can perceive the contrast through the fittings. However, this live broadcast mode gives viewers a weak visual impression, and frequent fitting greatly increases the live user's workload, affecting the live broadcast effect.
To further improve the network live broadcast effect, the inventors arrived at the present technical solution through a series of studies. During a live broadcast, at least two target images of a target object in the live data are acquired and at least two associated objects corresponding to the target object are determined, so that at least two effect images derived from the target images can be displayed simultaneously on the live interface of the client. Because the at least two effect images are obtained by rendering different associated objects into the target images, comparison can be performed more intuitively based on the simultaneously displayed effect images, improving the viewing experience.
The embodiments of the present application are applicable to, but not limited to, network live broadcast scenarios.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow diagram of an embodiment of the network live broadcast method provided in the embodiments of the present application. The technical solution of this embodiment may be executed by a server, and the method may include the following steps:
101: acquiring, for a target object in live data, at least two target images of the target object.
During a network live broadcast, the live end collects live data of the live scene through its camera. The live data may generally include live video data, video special-effect settings of the anchor end, sensing data collected by the sensing component of the live end, sound data, and the like.
As an optional implementation manner, for a target object in live data, the acquiring at least two target images of the target object may include:
identifying a target object in the live data;
at least two target images of the target object are acquired.
The server can identify the target object in the live data through techniques such as image recognition, and acquire at least two target images of the target object. A target image may be a two-dimensional plane image of the target object obtained by recognizing the target object in the live video data, or a three-dimensional stereoscopic image generated from sensing data of the target object collected by the sensing component of the live end. The number of target images may be determined according to the live user's broadcasting needs, and may be modified as those needs change during the broadcast; no specific limitation is made here. The sensing component may be an AR (Augmented Reality) projection device arranged at the live scene, a wearable device worn by the live user, a stereoscopic image capture device, or the like, which is not specifically limited here.
In practical applications, the target object may be a live user, a model prop, or another live prop, which is not specifically limited here. To obtain a more intuitive effect, the size proportions, postures, angles, and so on of the at least two target images of the target object may be completely identical or may differ, and can be set according to actual live broadcast requirements.
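As an illustration only (the patent specifies no code), the following Python sketch shows how a server might cut at least two target images out of a live frame once a detector has located the target object. The TargetImage type, the bounding-box input, and the fixed "front" pose are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class TargetImage:
    pixels: np.ndarray  # cropped region of the live frame containing the target object
    pose: str           # e.g. "front" or "side"; copies may share or vary the pose

def extract_target_images(frame: np.ndarray,
                          bbox: Tuple[int, int, int, int],
                          count: int = 2) -> List[TargetImage]:
    """Crop the recognized target object out of a live frame and duplicate the
    crop `count` times, one copy per associated object to be rendered later."""
    x, y, w, h = bbox
    crop = frame[y:y + h, x:x + w]
    # Identical copies correspond to the "completely identical" case; each copy
    # could instead be rescaled or re-posed per the live broadcast requirement.
    return [TargetImage(pixels=crop.copy(), pose="front") for _ in range(count)]
```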
102: determining at least two associated objects corresponding to the target object.
An associated object may be an article to be explained by the live user, such as a garment, an ornament, a makeup kit, a backpack, or a special material, which is not specifically limited here. Each target image may correspond to one associated object or to multiple associated objects.
For example, if a live user wants to demonstrate the fitting effect of two kinds of clothing, the at least two associated objects may be those two kinds of clothing, and two target images may be acquired, each configured with one kind of clothing.
If the live user wants to demonstrate the effect of matching one ornament with different clothes, the at least two associated objects may be two kinds of clothing plus the ornament. Two target images may be acquired, each configured with one kind of clothing and, at the same time, with the ornament. Of course, three target images may also be acquired in order to compare the effect of wearing and not wearing the ornament with the same clothing: two of the three target images are configured with the same garment, one of them with the ornament and the other without; the third target image is configured with the second kind of clothing together with the ornament.
Therefore, the association between the at least two associated objects corresponding to the target object and the target images needs to be set according to the live user's broadcast requirements. In practical applications, a target image may also carry no associated object: for example, in a three-target-image scenario, one target image may be configured with no associated object while the other two are each configured with one; alternatively, each target image may be configured with at least one associated object. No specific limitation is made here.
103: rendering the at least two associated objects into the corresponding target images to obtain at least two effect images, and displaying them simultaneously on a live interface of the client.
The at least two associated objects are rendered into their corresponding target images to obtain at least two effect images, which are displayed simultaneously on the live interface of the client. At least one associated object can be overlaid on the corresponding target image according to the lighting, spatial distance, position, and other conditions of the current environment, so that the target image, once rendered with the virtual associated object, presents a realistic fitting scene, yielding at least two effect images. The at least two effect images can be displayed simultaneously on the live interface through the client's terminal device for viewers to watch.
An associated object may cover the whole target image or only part of it. For example, if the target image is a real portrait and the associated object is a piece of clothing, the clothing is overlaid on the body of the portrait through image rendering, covering only part of the portrait's body.
In practical applications, the live interface may present only the at least two effect images obtained by rendering the associated objects into the target images, or it may additionally present a background image. The background image may be the environment of the current live scene, or a background set in advance by the live user based on the usage environment of the associated object: if the associated object is a swimsuit, the background may be a swimming pool or a seaside; if it is professional attire, an office scene; if it is sportswear, a sports scene. Rendering the background image on the live interface integrates the effect images with the background, producing a display effect that combines the virtual and the real. Adding a background further enhances the experience: viewers can feel the effect of the associated object in its intended usage scene directly from the displayed live picture, without having to imagine it.
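The partial overlay described above can be approximated by per-pixel alpha blending. Below is a minimal numpy sketch under the assumption that the associated object has already been rendered to pixels with a coverage mask; it is not the patent's rendering pipeline, which also accounts for lighting and spatial distance.

```python
import numpy as np

def overlay_associated_object(target: np.ndarray, garment: np.ndarray,
                              alpha: np.ndarray, top_left: tuple) -> np.ndarray:
    """Alpha-blend a rendered associated object (e.g. a garment) onto part of a
    target image, so it covers only the region it actually occupies."""
    out = target.astype(np.float32).copy()
    y, x = top_left
    h, w = garment.shape[:2]
    a = alpha[..., None].astype(np.float32)  # per-pixel coverage in [0, 1]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = a * garment + (1.0 - a) * region
    return out.astype(target.dtype)
```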
In the embodiments of the present application, with the commodities to be explained serving as associated objects, at least two target images of the target object are acquired and the at least two associated objects are rendered into the corresponding target images to obtain at least two effect images. Rendering achieves an AR display effect; in particular, in a clothing try-on scene, a virtual fitting effect is achieved, so the live user no longer needs to spend large amounts of time frequently trying on different garments, and viewers can compare the multiple simultaneously displayed effect images more intuitively, obtaining a more intuitive contrast and a better viewing experience.
As an optional implementation, rendering the at least two associated objects into the corresponding target images to obtain at least two effect images displayed simultaneously on the live interface of the client may include:
determining the respective preset display positions of the at least two target images on the live interface of the client;
and rendering the at least two associated objects into the corresponding target images to obtain at least two effect images, and displaying the at least two effect images simultaneously at the corresponding preset display positions on the live interface of the client.
In practical applications, since at least two effect images need to be displayed simultaneously on the live interface, once the number of target images is determined, the preset position of each target image on the live interface is determined in advance based on that number. The target images may be laid out on the live interface in different display forms, such as horizontal, vertical, or symmetric arrangements, including spacing settings, and can be adapted to the live interface according to the number of target images; no specific limitation is made here.
As shown in Fig. 2, during the live broadcast, the target object is a live user and the target images are three-dimensional stereoscopic images of the live user. In Fig. 2, two target images of the live user are acquired, and two kinds of clothing, as associated objects, are rendered into the corresponding target images to obtain two effect images, which are displayed simultaneously on the live interface in a bilaterally symmetric layout.
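A minimal sketch of how such preset display positions might be computed for the layouts mentioned above; the function name and the centre-point convention are assumptions, not part of the patent.

```python
def preset_positions(n: int, width: int, height: int, layout: str = "horizontal"):
    """Centre points of n evenly spaced display slots on the live interface.
    "horizontal" reproduces the bilaterally symmetric layout of Fig. 2."""
    if layout == "horizontal":
        step = width // (n + 1)
        return [(step * (i + 1), height // 2) for i in range(n)]
    # "vertical": stack the slots top to bottom instead
    step = height // (n + 1)
    return [(width // 2, step * (i + 1)) for i in range(n)]
```

For example, preset_positions(2, 1920, 1080) yields centre points at x = 640 and x = 1280, symmetric about the middle of a 1920-pixel-wide interface.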
Further, in an implementation, after the at least two effect images obtained by rendering the at least two associated objects into the corresponding target images are displayed simultaneously at the corresponding preset display positions on the live interface of the client, the method may further include:
receiving a movement instruction sent by the live end and generated based on a move operation performed by the live user on any effect image;
and moving the display position of that effect image based on the movement instruction.
If the live user is not satisfied with a preset display position, or a better live effect could be obtained by adjusting the display position of an effect image, the live user can optionally move any effect image displayed on the live interface, generating a movement instruction that is sent to the server. Based on the movement instruction, the server moves the display position of that effect image on the live interface: for example, the two bilaterally symmetric effect images can swap positions, be rearranged into a vertically symmetric layout, or have the spacing between them widened or narrowed, all adjustable through the live user's move operations.
In addition, the live user can perform a zoom operation on any effect image. For example, while explaining an associated object the live user may need to show its details, and can enlarge it so that viewers see those details more clearly and intuitively; or, to make an effect image fit the background picture better, the at least two effect images can be shrunk or enlarged to present the best visual effect and give viewers a better viewing experience. For a three-dimensional effect image, the live user can also perform a rotation operation to adjust its azimuth angle.
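The move, zoom, and rotate handling could be modelled as a small instruction message applied to server-side display state, roughly as below; the MoveInstruction fields and the positions dictionary are hypothetical, not a format defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class MoveInstruction:       # hypothetical message sent by the live end
    image_id: int
    dx: int = 0
    dy: int = 0
    scale: float = 1.0       # >1 enlarges the effect image, <1 shrinks it
    rotate_deg: float = 0.0  # azimuth adjustment for a 3D effect image

def apply_instruction(positions: dict, instr: MoveInstruction) -> None:
    """Server-side handling of a move/zoom/rotate instruction: update the
    stored display position; scale and rotation go to the rendering layer."""
    x, y = positions[instr.image_id]
    positions[instr.image_id] = (x + instr.dx, y + instr.dy)
    # instr.scale and instr.rotate_deg would be forwarded to rendering here

# e.g. swapping two bilaterally symmetric images is just two move instructions
```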
In practical application, the process of rendering the at least two associated objects to respective target images may be implemented by a server or a client, and is not limited specifically herein.
As an optional implementation manner, the rendering the at least two associated objects to the target images respectively to obtain at least two effect images, and the displaying simultaneously on the live interface of the client may include:
rendering and displaying the at least two associated objects in the corresponding target images respectively to obtain at least two effect images;
and simultaneously displaying the at least two effect images in a live interface of the client.
After determining the associated object corresponding to each target image, the server directly performs the rendering step, rendering the at least two associated objects into their respective target images to obtain at least two effect images. The server then sends the effect images to both the live end and the client, and the live end and the client display the at least two effect images simultaneously on their respective live interfaces.
Of course, as another optional implementation manner, the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on the live interface of the client at the same time may include:
simultaneously displaying the at least two target images in a live interface of the client;
and sending the at least two associated objects to the client, so that the client renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displays the at least two effect images.
Selecting associated objects may take the live user some time, so the server may first send the at least two target images of the target object to the live end and the client for display. After the server sends the determined associated objects to the client, the client performs the rendering operation, rendering the at least two associated objects into their respective target images to obtain the effect images. Similarly, the live end can synchronously receive the at least two associated objects sent by the server, render them into the respective target images to obtain at least two effect images, and display them simultaneously on its own live interface.
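One way to picture this client-side rendering variant is as a two-message exchange: target images first, associated objects later. The dictionary message shapes and function names below are invented for illustration and are not a protocol defined by the patent.

```python
# Hypothetical message flow for client-side rendering: target images are
# pushed as soon as they exist; associated objects follow once selected.
def server_push(send):
    send({"type": "target_images", "images": ["img_a", "img_b"]})
    # ... the live user picks garments, possibly seconds later ...
    send({"type": "associated_objects", "objects": ["garment_1", "garment_2"]})

def client_handle(msg, state, render):
    if msg["type"] == "target_images":
        state["targets"] = msg["images"]          # show bare targets first
    elif msg["type"] == "associated_objects":
        state["effects"] = [render(t, o) for t, o in
                            zip(state["targets"], msg["objects"])]
```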
In the foregoing implementations, while the effect images are displayed, viewers may not see the live scene picture, or may see only part of it. To further improve the viewing experience, as another implementable embodiment, rendering the at least two associated objects into the corresponding target images to obtain at least two effect images displayed simultaneously on the live interface of the client may include:
at least two effect images obtained by rendering the at least two associated objects to the corresponding target images are sent to a live broadcast end;
and controlling the live broadcast end to project the at least two effect images to a projection screen in a live broadcast lens acquisition range at the same time so that the client can play live broadcast data showing the at least two effect images at a live broadcast interface at the same time.
In practical applications, an original image of the target object may first be collected using AR projection technology, and the at least two target images obtained from that original image, for example by copying or mirroring it. The server renders the determined at least two associated objects into the respective target images to obtain at least two effect images and sends them to the live end. Through the AR projection device, the live end can project the effect images onto the projection screen within the capture range of the live camera, so that the live data collected by the live end shows the at least two effect images simultaneously.
Of course, the at least two target images may instead be sent to the live end first, and the live end projects them onto the screen using the AR projection device. After the server sends the at least two associated objects to the live end, the live end renders them into the respective target images through the AR projection device to obtain at least two effect images, so that the live data collected by the live end likewise shows the at least two effect images simultaneously.
The above embodiments provide multiple display modes for the effect images on the client's live interface: they can be displayed independently of the live data on the live interface, or played within the client's live interface by collecting, through AR projection, live data that already shows the at least two effect images. With these display modes, the anchor can flexibly choose a suitable effect-image display mode according to actual needs and conditions, further reducing the live user's workload during the broadcast.
Fig. 3 is a schematic flow diagram of another embodiment of the network live broadcast method provided in the embodiments of the present application. The technical solution of this embodiment may be executed by a server, and the method may include the following steps:
301: collecting first predetermined information input by a live user during the live broadcast.
302: switching to a preset live mode based on the first predetermined information.
During a normal live broadcast, the live end sends the collected live data to the server, the server forwards the live data to the client in real time, and the client outputs the live data through the live interface for viewers to watch.
When the live user needs to introduce a commodity comparison effect, the first predetermined information can be input during the broadcast to trigger the server to switch to the preset live mode.
In the preset live mode, as shown in Fig. 2, the client's live interface may stop playing the live data and switch to an effect-image comparison interface, so that at least two effect images are displayed on the live interface simultaneously. Alternatively, so as not to interfere with the user's normal viewing of the live data, the live data may keep playing on the live interface while a preset effect-image display window is output, and the at least two effect images are displayed simultaneously inside that window.
In an optional implementation, switching to the preset live mode based on the first predetermined information may include:
identifying a first control instruction corresponding to the first predetermined information;
and switching to the preset live mode in response to the first control instruction.
The first predetermined information may be gesture information, voice information, or information generated by an input device of the live end based on a user operation, such as an operation on a touch screen or a button.
Optionally, the acquiring the first predetermined information input by the live user in the live broadcasting process may include:
collecting first preset voice information input by a live user in a live broadcast process;
the identifying of the first control instruction corresponding to the first predetermined information may include:
and carrying out voice recognition on the first preset voice information to obtain the first control instruction.
Optionally, the acquiring the first predetermined information input by the live user in the live broadcasting process may include:
acquiring first sensing data generated by a sensing assembly based on a first preset gesture output by a live user in a live broadcasting process;
the identifying of the first control instruction corresponding to the first predetermined information may include:
and obtaining the first control instruction based on the first sensing data.
In practice, the live user can preset the first control instruction corresponding to the first predetermined information, for example by associating words such as "try on" or "comparison effect" with the first control instruction. When the server recognizes any of these words in the live voice, generation of the first control instruction is triggered, and the server switches to the preset live mode based on it.
Similarly, preset gesture information can be associated with the first control instruction, so that when the first sensing data corresponding to the first preset gesture is collected, the first control instruction is generated and the server switches to the preset live mode based on it.
It can be understood that after the commodity comparison explanation is finished, in order not to affect viewers' watching of the live data, second predetermined information input during the broadcast may further be sent to the server, so that the server exits the current preset live mode based on it.
The second predetermined information is similar to the first: it may be voice information, gesture information, or information generated by the live-end input device based on a user operation. For example, when the user says words such as "end" or "exit" during the broadcast, or outputs a second preset gesture, recognition of the second predetermined information triggers the server to exit the current preset live mode and return to the normal live mode.
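A toy version of this keyword-based mode switching might map recognized speech to enter/exit instructions as follows. The trigger words come from the examples in the text, while the function and instruction names are assumptions, and the speech recognizer itself is out of scope.

```python
# Illustrative keyword-to-command table; the recognizer producing the
# transcript is assumed to exist elsewhere in the pipeline.
ENTER_WORDS = {"try on", "comparison effect"}
EXIT_WORDS = {"end", "exit"}

def control_instruction(transcript: str):
    """Map recognized live-stream speech to a mode-switch instruction."""
    text = transcript.lower()
    if any(w in text for w in ENTER_WORDS):
        return "ENTER_PRESET_LIVE_MODE"   # first control instruction
    if any(w in text for w in EXIT_WORDS):
        return "EXIT_PRESET_LIVE_MODE"    # triggered by second predetermined info
    return None
```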
303: in the preset live mode, acquiring, for a target object in the live data, at least two target images of the target object.
As an optional implementation manner, in the preset live mode, for a target object in live data, acquiring at least two target images of the target object may include:
identifying a target object in the live broadcast data in the preset live broadcast mode;
generating the at least two target images based on the target object.
In practical applications, the target object is usually the live user. To generate a three-dimensional target image of the live user and obtain a more vivid display effect, generating the at least two target images based on the target object may include:
acquiring three-dimensional spatial data of the target object;
at least two target images of the target object are generated based on the three-dimensional spatial data.
Depth information of the live user is collected by a three-dimensional imaging device or an infrared device, and three-dimensional spatial data of the target object is obtained from the depth information. For example, three-dimensional body-shape data of the live user can be derived from the collected spatial data, a three-dimensional body model of the live user generated from it, and at least two target images of the live user generated based on that model.
To keep the target images synchronized with the target object's motion, three-dimensional posture data of the target object may also be collected in real time, and the live user's changes in form captured based on it. Accordingly, the method may further include:
acquiring three-dimensional attitude data of the target object;
and synchronously controlling the three-dimensional postures of the at least two target images based on the three-dimensional posture data.
In practical applications, the three-dimensional posture data of the target object can be obtained by detecting changes in the live user's skeletal key points through skeletal tracking, thereby predicting the user's posture and form, and by predicting the motion posture of the user's head in combination with eyeball tracking and similar techniques.
Synchronously controlling the three-dimensional postures of the at least two target images based on the posture data allows the associated objects to be rendered synchronously with the target object's posture changes, and different dynamic effects can be rendered for different postures. For example, if the associated object is a skirt and the live user turns or spins, the skirt can be rendered with dynamic effects such as swirling along with the turn. No specific limitation is made here.
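As a rough sketch of this posture-synchronization loop (not the patent's implementation), skeletal key points drive every target image each frame, and each associated object is re-rendered against the updated pose; track_skeleton and render_garment are stand-ins for the tracking and rendering layers described above.

```python
# Toy per-frame pose synchronization: every target image mirrors the live
# user's skeleton, and the associated object is re-rendered on the new pose.
def sync_targets(frame, targets, garments, track_skeleton, render_garment):
    keypoints = track_skeleton(frame)   # 3D joints of the live user
    effects = []
    for target, garment in zip(targets, garments):
        target.pose = keypoints                          # mirror the posture
        effects.append(render_garment(target, garment))  # e.g. skirt swirl
    return effects
```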
304: Determining at least two associated objects corresponding to the target object.
In an optional embodiment, the determining at least two associated objects corresponding to the target object may include:
and determining at least one associated object corresponding to each target image of the target objects.
In practice, the associated objects of the target object may be determined by the live user during the broadcast, or the configuration relationship between target images and associated objects may be preset and stored on the server.
As an optional implementation manner, the determining at least two associated objects corresponding to the target object may include:
receiving at least two object identifications sent by a live broadcast end; wherein the at least two object identifications are generated based on a trigger operation of a live user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
In practice, the live user stores object information of the associated objects on the server in advance. The object information may include the size, color, style, and object identifier of each associated object, and, where three-dimensional rendering is required, its three-dimensional stereo data and the like.
The associated objects stored on the server can be displayed as a list in the live interface of the live end. By triggering any associated object in the list, the live user generates that object's identifier; a target image can be selected in advance before the associated object is chosen, thereby establishing the association between the target image and the associated object.
Similarly, a viewing user at the client may also select associated objects of the target object according to his or her own needs, in which case the determining of the at least two associated objects corresponding to the target object may include:
receiving at least two object identifications sent by a client; wherein the at least two object identifications are generated based on a triggering operation of a viewing user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
The associated objects stored on the server may likewise be displayed as a list in the live interface of the client. The viewing user generates the object identifier of any associated object in the list through a trigger operation on it, and may select a target image in advance before choosing the associated object, thereby establishing the association between the target image and the associated object.
For example, the live user or the viewing user may speak a voice command such as "select the Nth associated object" according to the order of the list; the server recognizes the command, resolves the object identifier of the Nth associated object, and thus obtains the corresponding associated object. Associated objects may also be selected through gestures, touch-screen operations, key presses, and the like, which is not specifically limited here.
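The identifier-based selection described above might be resolved on the server along the following lines; the catalog contents, field names, and pairing-by-order rule are hypothetical.

```python
# Sketch of resolving received object identifiers into associated objects and
# binding them to pre-selected target images.
CATALOG = {
    "obj-001": {"name": "red dress", "size": "M", "style": "casual"},
    "obj-002": {"name": "blue suit", "size": "M", "style": "business"},
}

def resolve_associated_objects(object_ids):
    """Look up each identifier sent by the live end or client."""
    missing = [oid for oid in object_ids if oid not in CATALOG]
    if missing:
        raise KeyError(f"unknown object identifiers: {missing}")
    return [CATALOG[oid] for oid in object_ids]

def bind_to_target_images(object_ids, target_image_ids):
    """Associate each target image with one object, in selection order."""
    return dict(zip(target_image_ids, resolve_associated_objects(object_ids)))

binding = bind_to_target_images(["obj-001", "obj-002"], [0, 1])
assert binding[1]["style"] == "business"
```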
As an achievable implementation, the determining at least two associated objects corresponding to the target object may include:
acquiring at least one user characteristic of the client;
and obtaining at least two associated objects corresponding to the target object based on the at least one user characteristic matching.
In practice, the at least one user characteristic may include the viewing user's historical viewing records, shopping records, preferences, collection records, or shopping-cart records. Matching these characteristics against the features of the associated objects prestored on the server yields at least two associated objects with a high matching degree. For example, the features (one or more of type, color, material, size, cost-effectiveness, and so on) of clothes the user has purchased or added to the shopping cart can be matched against the features of preset clothes, producing at least two preset garments similar or identical to them as associated objects. Alternatively, user preference can be derived from user tags, for example by classifying dressing styles into business, casual, sports, and other types; if the viewing user prefers casual clothing, at least two casual preset garments are matched as associated objects.
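A toy version of this feature matching could look as follows; the feature keys, scoring rule, and catalog format are illustrative assumptions rather than the patent's actual matching algorithm.

```python
# Toy feature-matching sketch: rank prestored garments by shared features with
# the viewing user and keep the top matches as associated objects.
def match_associated_objects(user_features, presets, top_n=2):
    keys = ("type", "color", "material", "size")

    def score(preset):
        # One point per feature the preset shares with the user's history.
        return sum(1 for k in keys if preset.get(k) == user_features.get(k))

    ranked = sorted(presets, key=score, reverse=True)
    return ranked[:max(top_n, 2)]        # the method needs at least two

user = {"type": "casual", "color": "blue", "size": "M"}
presets = [
    {"type": "casual", "color": "blue", "size": "M"},
    {"type": "business", "color": "grey", "size": "M"},
    {"type": "casual", "color": "green", "size": "M"},
]
best = match_associated_objects(user, presets)
assert best[0]["color"] == "blue"
```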
305: Rendering the at least two associated objects onto the corresponding target images to obtain at least two effect images, and displaying them simultaneously on a live interface of the client.
In practice, while the anchor user is explaining an effect image, a viewing user at the client may choose, according to his or her viewing needs, whether the effect images are displayed in the live interface. Thus, as an optional implementation, rendering the at least two associated objects onto the corresponding target images to obtain at least two effect images may include:
receiving an effect image display instruction sent by the client; the effect image display instruction is generated based on a triggering operation performed by a viewing user of the client on the effect image display control;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
After the server is triggered into the preset live mode, an effect image display control can be shown in the live interface of the client. When the viewing user turns the control on, an effect image display instruction is generated; based on this instruction, the server renders the at least two associated objects onto the corresponding target images to obtain at least two effect images, which are displayed simultaneously on the live interface of the client.
If the viewing user has already seen the corresponding effect images earlier in the broadcast and does not need them displayed again, closing the control lets the client keep playing the live data in the live interface.
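The control-driven toggle described above could be handled on the server roughly as follows; the payload format and the placeholder render step are hypothetical.

```python
# Sketch of the display-control toggle on the server: render and push effect
# images when the control is opened, fall back to plain live playback when it
# is closed.
def render_effect_images(target_image_ids, object_ids):
    # Placeholder render: pair each associated object with its target image.
    return [f"effect(target={t}, object={o})"
            for t, o in zip(target_image_ids, object_ids)]

def handle_display_instruction(show, target_image_ids, object_ids):
    """Build the payload pushed to the client's live interface."""
    if show:
        return {"mode": "effects",
                "images": render_effect_images(target_image_ids, object_ids)}
    return {"mode": "live"}   # control closed: continue playing live data only

assert handle_display_instruction(False, [], [])["mode"] == "live"
assert len(handle_display_instruction(True, [0, 1], ["a", "b"])["images"]) == 2
```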
Optionally, the associated objects chosen by the anchor user or obtained by automatic matching may not meet the viewing user's needs, or the viewing user may want to swap in a different associated object to experience a different comparison effect. Therefore, as an implementable manner, after the at least two effect images obtained by rendering the at least two associated objects onto the corresponding target images are displayed simultaneously on the live interface of the client, the method may further include:
and receiving a replacement instruction for any effect image sent by the client.
Wherein the replacement instruction is generated based on a replacement operation performed by the viewing user on an associated object of that effect image.
And determining the associated object to be replaced based on the replacement instruction.
And re-rendering the associated object to be replaced onto the target image of that effect image to obtain a replaced effect image, which is displayed as a replacement on the live interface of the client.
In the actual replacement process, the original associated object in the effect image must be removed to recover the corresponding target image, onto which the associated object to be replaced is then re-rendered to obtain the replaced effect image. The viewing user's replacement operation may take any form, such as voice, gesture, or touch, which is not specifically limited here.
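A minimal sketch of the replacement step: discard the old rendering to recover the underlying target image, then re-render the new associated object onto it. The EffectImage structure is an illustrative assumption.

```python
# Sketch of object replacement: dropping the old rendering recovers the
# underlying target image, onto which the new object is re-rendered.
from dataclasses import dataclass

@dataclass(frozen=True)
class EffectImage:
    target_image_id: int   # the target image the rendering is based on
    object_id: str         # the associated object currently rendered onto it

def replace_associated_object(effect, new_object_id):
    """Re-render the replacement object onto the same target image."""
    return EffectImage(effect.target_image_id, new_object_id)

replaced = replace_associated_object(EffectImage(0, "obj-001"), "obj-002")
assert replaced == EffectImage(0, "obj-002")
```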
In this embodiment of the application, so as not to disturb viewing of the live data, the live user triggers the server to switch to the preset live mode, by outputting the first preset information, only when the display effect of the effect images needs to be explained. The display of the effect images is thus completed within the preset live mode, which can be exited once the explanation is finished, without affecting the subsequent live broadcast.
In addition, to make the effect images shown by the anchor user in scenarios such as clothes try-on more lifelike, the anchor user's three-dimensional body type data and three-dimensional posture data are collected, so that the displayed effect images are more realistic and stay synchronized with the live user's motion. This further improves the live effect: the effect images seen by the viewing user feel more real, giving a better viewing experience.
Fig. 4 is a schematic flow diagram of an embodiment of a network live broadcast method provided in an embodiment of the present application, where the technical solution of this embodiment may be executed by a live end, and the method may include the following steps:
401: Collecting live data; the live data includes a target object.
During the broadcast, the live end collects live video data through the live camera lens and other information acquisition devices; the data may include video images, voice information, sensed data, and the like, and is not specifically limited here.
402: Sending the live data to a server.
The server, for the target object in the live data, acquires at least two target images of the target object and determines at least two associated objects corresponding to it; the at least two effect images obtained by rendering the at least two associated objects onto the corresponding target images are displayed simultaneously on a live interface of the client.
As an implementable embodiment, the collecting of live data may include:
acquiring first preset information input by a live broadcast user in a live broadcast process;
the sending the live data to the server may include:
sending the first preset information to the server, so that the server switches to a preset live mode based on the first preset information and acquires at least two target images of a target object in the live data in the preset live mode.
In practical application, at least two effect images can be displayed simultaneously in a live interface of a live end. Optionally, in some embodiments, after sending the live data to the server, the method may further include:
receiving at least two effect images sent by the server;
simultaneously displaying the at least two effect images in a live interface of a live end; the at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
As an implementation manner, after sending the live data to the server, the method may further include:
receiving at least two target images corresponding to the target object sent by the server;
simultaneously displaying the at least two target images in a live interface of a live end;
receiving at least two associated objects corresponding to the target object sent by the server;
rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images;
and displaying the at least two effect images in a live interface of the live end.
In the same way as the client displays effect images, the live end can also display at least two effect images of the target object according to actual needs, performing the rendering itself after receiving the at least two associated objects from the server and then showing the at least two effect images in its live interface. Of course, it may instead directly obtain the at least two effect images already rendered by the server and display them simultaneously in the live interface.
In an implementation manner, after sending the live data to the server, the method may further include:
receiving at least two effect images sent by the server;
and projecting the at least two effect images simultaneously onto a screen within the acquisition range of the live lens, so as to capture live data showing the at least two effect images simultaneously.
After the live end receives the at least two effect images from the server, it can control a connected AR projection device to project them onto a screen; in practice the projection screen may be a curtain or a projection wall, which is not specifically limited here.
Because the screen is placed within the acquisition range of the live lens, the lens captures a live picture that shows the at least two effect images simultaneously. The live end sends this picture to the server, which delivers it to the live interface of the client for display. The viewing user then sees several effect images at once in the live data played in the live interface, and can perceive the comparison between them more intuitively without the live viewing being disturbed, achieving a better watch-and-show experience.
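The projection-and-capture path just described can be summarized schematically as follows; the Projector and Camera classes are stand-ins for a real AR projection device and live lens SDK, and only the data flow is illustrated.

```python
# Schematic of the live-end projection path: project the effect images onto a
# screen inside the lens acquisition range, then capture the live frame.
class Projector:
    def __init__(self):
        self.on_screen = []

    def project(self, effect_images):
        self.on_screen = list(effect_images)  # show all effect images at once

class Camera:
    def capture_frame(self, projector):
        # The screen sits inside the lens acquisition range, so the captured
        # live frame naturally contains every projected effect image.
        return {"live_picture": True,
                "visible_effects": list(projector.on_screen)}

def live_end_show(effect_images):
    projector, camera = Projector(), Camera()
    projector.project(effect_images)
    return camera.capture_frame(projector)  # streamed to the server as live data

frame = live_end_show(["effect-A", "effect-B"])
assert frame["visible_effects"] == ["effect-A", "effect-B"]
```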
As an optional implementation, the method may further include:
collecting three-dimensional space data of the target object in a live broadcast process;
and sending the three-dimensional space data to the server so that the server can generate at least two target images of the target object based on the three-dimensional space data.
As another optional implementation, the method may further include:
acquiring three-dimensional attitude data of the target object;
and sending the three-dimensional attitude data to the server side so that the server side can synchronously control the three-dimensional attitudes of the at least two target images based on the three-dimensional attitude data.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
According to this embodiment of the application, the live end collects the live data and acquires three-dimensional data of the target object, including three-dimensional spatial data and three-dimensional posture data, through a three-dimensional image acquisition device or AR projection device, making the target images more lifelike and the effect more vivid. Further, by equipping the live end with an AR projection device, the workload of the server can be reduced, and the live user can give explanations during the broadcast alongside the at least two effect images shown on the screen, further improving the live effect and giving the viewing user a better viewing experience.
Fig. 5 is a schematic flow chart of an embodiment of an information display method provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a client, and the method may include the following steps:
501: Providing a live interface; the live interface is used for displaying live data.
502: Acquiring at least two effect images corresponding to the target object in the live data.
Wherein the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object onto the corresponding target images; and the at least two target images are obtained by the server for the target object.
In practice, the client may generate an effect image display instruction based on an operation of the viewing user and obtain the at least two effect images by sending that instruction to the server; alternatively, the server may send the at least two associated objects to the client, which renders them onto the at least two target images displayed in the live interface to obtain the effect images. These acquisition manners have been described in detail above and are not repeated here.
503: Displaying the at least two effect images simultaneously on the live interface.
As an optional implementation, the acquiring at least two effect images corresponding to the target object in the live data may include:
receiving at least two target images corresponding to the target object sent by the server;
simultaneously displaying the at least two target images in a live interface of the client;
receiving at least two associated objects corresponding to the target object sent by the server;
rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images.
As another optional implementation manner, the acquiring at least two effect images corresponding to a target object in the live data may include:
receiving at least two effect images of the target object sent by the server; the at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
In practical application, before the obtaining of the at least two effect images corresponding to the target object in the live data, the method may further include:
generating an effect image display instruction based on the triggering operation of a watching user for the effect image display control;
and sending the effect image display instruction to the server, so that the server renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and simultaneously displaying the at least two effect images on a live interface of the client.
Optionally, in some embodiments, the presenting the at least two effect images simultaneously on the live interface of the client may include:
simultaneously displaying the at least two effect images at respective preset display positions of a live interface of the client; and the preset display positions of the at least two effect images are determined by the server.
Optionally, in some embodiments, before the displaying the at least two effect images simultaneously on the live interface of the client, the method may further include:
outputting a preset display window in a live interface of the client;
and simultaneously displaying the at least two effect images in the preset display window.
Displaying the at least two effect images in the preset display window allows the live data to keep playing in the live interface while the effect images are shown. The preset display window may take the form of a floating window or a small window; the viewing user can adjust its size according to his or her viewing habits, up to covering the entire live interface, and can adjust its display position on the live interface, which is not specifically limited here.
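The window-adjustment behavior can be illustrated with a small geometry sketch that clamps the preset display window inside the live interface while allowing it to grow to full coverage; all dimensions and the function name are illustrative.

```python
# Geometry sketch for the preset display window: resizable and movable, but
# clamped inside the live interface, up to full coverage.
def clamp_window(x, y, w, h, screen_w, screen_h):
    """Return an (x, y, w, h) that stays within the live interface bounds."""
    w = max(1, min(w, screen_w))        # may grow to cover the full interface
    h = max(1, min(h, screen_h))
    x = max(0, min(x, screen_w - w))    # shift back inside if dragged outside
    y = max(0, min(y, screen_h - h))
    return x, y, w, h

assert clamp_window(1000, 500, 400, 300, 1080, 720) == (680, 420, 400, 300)
assert clamp_window(0, 0, 9999, 9999, 1080, 720) == (0, 0, 1080, 720)
```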
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In this embodiment of the application, a live interface is provided in which at least two effect images are displayed simultaneously. Meanwhile, to reduce the data processing pressure on the server, the rendering process can be pushed down to the terminal device: the client renders the at least two associated objects onto their respective target images in the live interface to obtain the at least two effect images.
In addition, so that watching the effect images does not interfere with the playing of the live data, the at least two effect images can be displayed independently in the preset display window, whose position and size can be adjusted to the viewing user's habits, further improving the viewing experience.
Fig. 6 is a schematic flow diagram of an embodiment of a network live broadcast method provided in an embodiment of the present application, where the technical solution of this embodiment may be executed by a server, and the method may include the following steps:
601: Determining the target object based on a target object selection instruction sent by the client.
602: at least two target images of the target object are acquired.
The target object selection instruction is generated based on a selection operation performed by a viewing user of the client on any preset object.
603: Determining at least two associated objects corresponding to the target object.
604: Rendering the at least two associated objects onto the corresponding target images to obtain at least two effect images, and displaying them simultaneously on a live interface of the client.
In the foregoing embodiments, the target object is determined from the live data uploaded by the live end during the broadcast. In practice, a viewing user can also trigger the comparison display of effect images from the viewing side according to his or her own needs, and can independently decide whether to use the target object set by the live user.

The viewing user may store at least one preset object on the server. A preset object may be the target object set by the live user, a user portrait set by the viewing user, a three-dimensional stereo image of the viewing user collected by a three-dimensional image acquisition device, or another model portrait, prop portrait, and so on. The viewing user can select any preset object as the target object to obtain a better experience.

In practice, the at least one preset object set by the viewing user may be presented as thumbnails, an object list, or any other form in the live interface of the client for the viewing user to choose from. Through any trigger operation, the viewing user selects a preset object as the target object, causing the client to generate a target object selection instruction. The selection instruction may carry object information such as the object identifier or an object thumbnail of the preset object, so that the server determines the target object from this information and acquires at least two target images of it.
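A sketch of how the server might resolve such a selection instruction follows; the preset-object store and instruction fields are hypothetical illustrations.

```python
# Sketch of resolving a target object selection instruction on the server.
PRESET_OBJECTS = {
    "anchor-model": {"kind": "target object set by the live user"},
    "viewer-avatar": {"kind": "3D stereo image of the viewing user"},
    "mannequin-01": {"kind": "prop portrait"},
}

def handle_selection_instruction(instruction):
    """Determine the target object from the carried object identifier; target
    images would then be acquired for this object."""
    object_id = instruction["object_id"]  # e.g. set when a thumbnail is tapped
    try:
        return PRESET_OBJECTS[object_id]
    except KeyError:
        raise KeyError(f"no preset object registered for {object_id!r}")

assert "viewing user" in handle_selection_instruction(
    {"object_id": "viewer-avatar"})["kind"]
```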
In practice, the operations of steps 602 to 604 are the same as those of the embodiments shown in Fig. 1 to Fig. 3; the foregoing detailed description applies and is not repeated here.
According to this embodiment of the application, the target object and the associated objects can be selected according to the needs of different viewing users of the client, so that effect images meeting each user's needs are obtained and a better viewing experience results. It avoids the situation where the viewing user must wait for the live user's operations, or communicate with the live user repeatedly, before the live user triggers generation of the effect images the viewing user expects; it increases the viewing user's autonomous participation, accommodates the diverse needs of different viewing users, and further improves the user experience.
Fig. 7 is a schematic flow diagram of an embodiment of an information display method provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a client, and the method may include the following steps:
701: Providing a live interface; the live interface is used for displaying live data.
702: Sending a target object selection instruction to a server, so that the server determines a target object based on the instruction and acquires at least two target images corresponding to the target object.
The target object selection instruction is generated based on the selection operation of a viewing user for any preset object.
703: Acquiring at least two effect images corresponding to the target object.
Wherein the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object onto the corresponding target images; the at least two target images are obtained by the server for the target object.
704: Displaying the at least two effect images simultaneously on the live interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
According to this embodiment of the application, the target object and the associated objects can be selected according to the needs of different viewing users of the client, so that effect images meeting each user's needs are obtained and a better viewing experience results. It avoids the situation where the viewing user must wait for the live user's operations, or communicate with the live user repeatedly, before the live user triggers generation of the effect images the viewing user expects; it increases the viewing user's autonomous participation, accommodates the diverse needs of different viewing users, and further improves the user experience.
Fig. 8 is a schematic structural diagram of an embodiment of a network live broadcast apparatus provided in an embodiment of the present application, where the technical solution of this embodiment may be executed by a server, and the apparatus may include:
a first obtaining module 801, configured to obtain, for a target object in live data, at least two target images of the target object.
As an optional implementation manner, the first obtaining module 801 may specifically be configured to:
identifying a target object in the live data;
at least two target images of the target object are acquired.
A first determining module 802, configured to determine at least two associated objects corresponding to the target object.
The first display module 803 is configured to render the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and display the at least two effect images simultaneously on a live interface of the client.
As an optional implementation manner, the first display module 803 may specifically be configured to:
determining respective corresponding preset display positions of the at least two target images in a live interface of the client;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the at least two effect images at corresponding preset display positions in a live interface of the client side simultaneously.
As an optional implementation manner, the first display module 803 may specifically be configured to:
rendering and displaying the at least two associated objects in the corresponding target images respectively to obtain at least two effect images;
and simultaneously displaying the at least two effect images in a live interface of the client.
Of course, as another alternative implementation, the first display module 803 may be specifically configured to:
simultaneously displaying the at least two target images in a live interface of the client;
and sending the at least two associated objects to the client, so that the client renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displays the at least two effect images.
In the foregoing embodiments, while the effect images are displayed, the viewing user may not be able to see the live picture, or may see only part of it. To further improve the viewing experience, as another implementable manner, the first display module 803 may specifically be configured to:
sending, to the live end, the at least two effect images obtained by rendering the at least two associated objects onto the corresponding target images;
and controlling the live end to project the at least two effect images simultaneously onto a projection screen within the acquisition range of the live lens, so that the client plays live data showing the at least two effect images simultaneously in its live interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In this embodiment of the application, at least two target images of the target object are acquired, and, with the commodities to be explained serving as the associated objects, the at least two associated objects are rendered onto the corresponding target images to obtain at least two effect images. Rendered imaging achieves an AR display effect; in particular, in a scenario where the live user tries on clothes, a virtual try-on effect is achieved, sparing the live user from spending a great deal of time repeatedly changing garments. The viewing user can compare several simultaneously displayed effect images more intuitively, gaining a more direct sense of the effect and a better viewing experience.
Fig. 9 is a schematic structural diagram of an embodiment of a network live broadcast apparatus provided in an embodiment of the present application, where the technical solution of this embodiment may be executed by a server, and the apparatus may include:
the first collecting module 901 is configured to collect first predetermined information input by a live broadcast user in a live broadcast process.
A mode switching module 902, configured to switch to a preset live mode based on the first predetermined information.
In practical applications, in an optional implementation manner, the mode switching module 902 may specifically include:
the identification unit is used for identifying a first control instruction corresponding to the first preset information;
and the response unit is used for responding to the first control instruction and switching to a preset live broadcast mode.
Optionally, the first acquisition module 901 may be specifically configured to:
collecting first preset voice information input by a live user in a live broadcast process;
the identification unit may specifically be configured to:
and carrying out voice recognition on the first preset voice information to obtain the first control instruction.
Optionally, the first acquisition module 901 may be specifically configured to:
acquiring first sensing data generated by a sensing assembly based on a first preset gesture output by a live user in a live broadcasting process;
the identification unit may specifically be configured to:
and obtaining the first control instruction based on the first sensing data.
A first obtaining module 903, configured to obtain, in a preset live broadcast mode, at least two target images of a target object in live broadcast data.
As an optional implementation manner, the first obtaining module 903 may specifically be configured to:
identifying a target object in the live broadcast data in the preset live broadcast mode;
generating the at least two target images based on the target object.
In practice, the target object is usually the live user. To generate a three-dimensional target image of the live user for a more lifelike display effect, the first obtaining module 903, when generating the at least two target images based on the target object, may specifically be configured to:
acquiring three-dimensional spatial data of the target object;
at least two target images of the target object are generated based on the three-dimensional spatial data.
To keep the target images synchronized with the motion of the target object, three-dimensional posture data of the target object can be acquired in real time and used to capture morphological changes of the live user. The apparatus may further include:
the three-dimensional attitude data acquisition module is used for acquiring three-dimensional attitude data of the target object;
and the three-dimensional attitude control module is used for synchronously controlling the three-dimensional attitudes of the at least two target images based on the three-dimensional attitude data.
A first determining module 904, configured to determine at least two associated objects corresponding to the target object.
A first display module 905, configured to render the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and display the at least two effect images simultaneously on a live interface of the client.
In an optional implementation manner, the first determining module 904 may specifically be configured to:
and determining at least one associated object corresponding to each target image of the target objects.
In practice, the associated objects of the target object may be determined by the live user during the broadcast, or the configuration relationship between target images and associated objects may be preset and stored on the server.
As an optional implementation manner, the first determining module 904 may specifically be configured to:
receiving at least two object identifications sent by a live broadcast end; wherein the at least two object identifications are generated based on a trigger operation of a live user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
Similarly, a viewing user at the client may also select associated objects of the target object according to his or her own needs, in which case the first determining module 904 may specifically be configured to:
receiving at least two object identifications sent by a client; wherein the at least two object identifications are generated based on a triggering operation of a viewing user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
The associated objects stored on the server may likewise be displayed as a list in the live interface of the client. The viewing user generates the object identifier of any associated object in the list through a trigger operation on it, and may select a target image in advance before choosing the associated object, thereby establishing the association between the target image and the associated object.
As an implementation manner, the first determining module 904 may specifically be configured to:
acquiring at least one user characteristic of the client;
and obtaining at least two associated objects corresponding to the target object based on the at least one user characteristic matching.
In practice, while the anchor user is explaining an effect image, a viewing user at the client may choose, according to his or her viewing needs, whether the effect images are displayed in the live interface. Thus, as an optional implementation, the first display module 905 may specifically be configured to:
receiving an effect image display instruction sent by the client; the effect image display instruction is generated based on a triggering operation performed by a viewing user of the client on the effect image display control;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
Optionally, the associated objects chosen by the anchor user or obtained by automatic matching may not meet the viewing user's needs, or the viewing user may want to swap in a different associated object to experience a different comparison effect. Therefore, as an implementable manner, the apparatus may further include, after the first display module 905:
and the replacing instruction receiving module is used for receiving a replacing instruction aiming at any effect image sent by the client.
Wherein the replacement instruction is generated based on a replacement operation performed by the viewing user on an associated object of that effect image.
And the to-be-replaced associated object determining module is used for determining the associated object to be replaced based on the replacement instruction.
And the replacement display module is used for re-rendering the associated object to be replaced onto the target image of that effect image to obtain a replaced effect image, and displaying it as a replacement on the live interface of the client.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In this embodiment of the application, so as not to disturb viewing of the live data, the live user triggers the server to switch to the preset live mode, by outputting the first preset information, only when the display effect of the effect images needs to be explained. The display of the effect images is thus completed within the preset live mode, which can be exited once the explanation is finished, without affecting the subsequent live broadcast.
In addition, to make the effect images shown by the anchor user in scenarios such as clothes try-on more lifelike, the anchor user's three-dimensional body type data and three-dimensional posture data are collected, so that the displayed effect images are more realistic and stay synchronized with the live user's motion. This further improves the live effect: the effect images seen by the viewing user feel more real, giving a better viewing experience.
Fig. 10 is a schematic structural diagram of an embodiment of a network live broadcast apparatus provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a live broadcast end, and the apparatus may include:
A live data collection module 1001, configured to collect live data; the live data includes a target object.
And a live data sending module 1002, configured to send the live data to a server.
The server, for the target object in the live data, acquires at least two target images of the target object and determines at least two associated objects corresponding to it; the at least two effect images obtained by rendering the at least two associated objects onto the corresponding target images are displayed simultaneously on a live interface of the client.
As an implementation manner, the live data collection module 1001 may specifically be configured to:
the method comprises the steps of collecting first preset information input by a live broadcast user in a live broadcast process.
The live data sending module 1002 may specifically be configured to:
send the first preset information to the server, so that the server switches to a preset live mode based on the first preset information and acquires at least two target images of a target object in the live data in the preset live mode.
In practice, the at least two effect images can also be displayed simultaneously in a live interface of the live end. Optionally, in some embodiments, the apparatus may further include, after the live data sending module 1002:
the first effect image receiving module is used for receiving at least two effect images sent by the server;
and the first effect image display module is used for simultaneously displaying the at least two effect images in a live interface of the live end.
The at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
As an implementable manner, the apparatus may further include, after the live data sending module 1002:
the target image receiving module is used for receiving at least two target images corresponding to the target object sent by the server;
the target image display module is used for simultaneously displaying the at least two target images in a live interface of a live end;
the associated object receiving module is used for receiving at least two associated objects corresponding to the target object sent by the server;
the rendering module is used for rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images;
and the second effect image display module is used for displaying the at least two effect images in a live broadcast interface of the live broadcast end.
In an implementable manner, the apparatus may further include, after the live data sending module 1002:
the second effect image receiving module is used for receiving at least two effect images sent by the server;
and the projection module is used for projecting the at least two effect images to a screen within a live broadcast lens acquisition range simultaneously so as to acquire and obtain live broadcast data which simultaneously display the at least two effect images.
As an optional implementation, the apparatus may further include:
the three-dimensional space data acquisition module is used for acquiring three-dimensional space data of the target object in the live broadcast process;
the first sending module is used for sending the three-dimensional space data to the server so that the server can generate at least two target images of the target object based on the three-dimensional space data.
As another optional implementation, the apparatus may further include:
the three-dimensional attitude data acquisition module is used for acquiring three-dimensional attitude data of the target object;
and the second sending module is used for sending the three-dimensional attitude data to the server so that the server can synchronously control the three-dimensional attitudes of the at least two target images based on the three-dimensional attitude data.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
According to this embodiment of the application, the live end collects the live data and acquires three-dimensional data of the target object, including three-dimensional spatial data and three-dimensional posture data, through a three-dimensional image acquisition device or AR projection device, making the target images more lifelike and the effect more vivid. Further, by equipping the live end with an AR projection device, the workload of the server can be reduced, and the live user can give explanations during the broadcast alongside the at least two effect images shown on the screen, further improving the live effect and giving the viewing user a better viewing experience.
Fig. 11 is a schematic structural diagram of an embodiment of an information display apparatus provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a client, and the apparatus may include:
A first live interface providing module 1101, configured to provide a live interface; the live interface is used for displaying live data.
A second obtaining module 1102, configured to obtain at least two effect images corresponding to a target object in the live data.
Wherein the at least two effect images are obtained by rendering at least two associated objects corresponding to the target object onto the corresponding target images; the at least two target images are obtained by the server for the target object;
a second displaying module 1103, configured to display the at least two effect images simultaneously on the live interface.
As an optional implementation, the second obtaining module 1102 may specifically be configured to:
receiving at least two target images corresponding to the target object sent by the server;
simultaneously displaying the at least two target images in a live interface of the client;
receiving at least two associated objects corresponding to the target object sent by the server;
rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images.
As another optional implementation, the second obtaining module 1102 may specifically be configured to:
receiving at least two effect images of the target object sent by the server; the at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
In practice, before the at least two effect images corresponding to the target object in the live data are acquired, the apparatus may further be configured to:
generating an effect image display instruction based on the triggering operation of a watching user for the effect image display control;
and sending the effect image display instruction to the server, so that the server renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and simultaneously displaying the at least two effect images on a live interface of the client.
Optionally, in some embodiments, the presenting the at least two effect images simultaneously on the live interface of the client may include:
simultaneously displaying the at least two effect images at respective preset display positions of a live interface of the client; and the preset display positions of the at least two effect images are determined by the server.
Optionally, in some embodiments, before the displaying the at least two effect images simultaneously on the live interface of the client, the method may further include:
outputting a preset display window in a live interface of the client;
and simultaneously displaying the at least two effect images in the preset display window.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
In this embodiment of the application, a live interface is provided in which at least two effect images are displayed simultaneously. Meanwhile, to reduce the data processing pressure on the server, the rendering process can be pushed down to the terminal device: the client renders the at least two associated objects onto their respective target images in the live interface to obtain the at least two effect images.
In addition, so that watching the effect images does not interfere with the playing of the live data, the at least two effect images can be displayed independently in the preset display window, whose position and size can be adjusted to the viewing user's habits, further improving the viewing experience.
Fig. 12 is a schematic structural diagram of an embodiment of a network live broadcast apparatus provided in an embodiment of the present application, where the technical solution of this embodiment may be executed by a server, and the apparatus may include:
a target object determining module 1201, configured to determine a target object based on a target object selection instruction sent by the client.
A third obtaining module 1202, configured to obtain at least two target images of the target object.
The target object selection instruction is generated based on a selection operation performed by a viewing user of the client on any preset object.
A second determining module 1203, configured to determine at least two associated objects corresponding to the target object;
a third display module 1204, configured to render the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and display the at least two effect images simultaneously on a live interface of the client.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
According to this embodiment of the application, the target object and the associated objects can be selected according to the needs of different viewing users of the client, so that effect images meeting each user's needs are obtained and a better viewing experience results. It avoids the situation where the viewing user must wait for the live user's operations, or communicate with the live user repeatedly, before the live user triggers generation of the effect images the viewing user expects; it increases the viewing user's autonomous participation, accommodates the diverse needs of different viewing users, and further improves the user experience.
Fig. 13 is a schematic structural diagram of an embodiment of an information display apparatus provided in an embodiment of the present application, where a technical solution of the embodiment may be executed by a client, and the apparatus may include:
a second live interface providing module 1301, configured to provide a live interface; the live interface is used for displaying live data;
A selection instruction sending module 1302, configured to send a target object selection instruction to the server, so that the server determines a target object based on the target object selection instruction and acquires at least two target images corresponding to the target object.
The target object selection instruction is generated based on the selection operation of a viewing user for any preset object.
A fourth obtaining module 1303, configured to obtain at least two effect images corresponding to the target object.
The at least two effect images are obtained by rendering at least two associated objects corresponding to the target object onto the corresponding target images, and the at least two target images are obtained by the server for the target object.
A fourth display module 1304, configured to display the at least two effect images simultaneously on the live interface.
The foregoing detailed description has described specific embodiments of the present application in detail, which are not repeated herein.
According to this embodiment of the application, the target object and the associated objects can be selected according to the needs of different viewing users of the client, so that effect images meeting each user's needs are obtained and a better viewing experience results. It avoids the situation where the viewing user must wait for the live user's operations, or communicate with the live user repeatedly, before the live user triggers generation of the effect images the viewing user expects; it increases the viewing user's autonomous participation, accommodates the diverse needs of different viewing users, and further improves the user experience.
Fig. 14 is a schematic structural diagram of an embodiment of a live server provided in an embodiment of the present application, where the live server may include a processing component 1401 and a storage component 1402.
The storage component 1402 is for storing one or more computer instructions; the one or more computer instructions are to be invoked for execution by the processing component 1401.
The processing component 1401 may be configured to:
aiming at a target object in live data, acquiring at least two target images of the target object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
The processing component 1401 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component 1402 is configured to store various types of data to support operations in the server. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Of course, the server may also include other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, such as with a terminal.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program; when the computer program is executed by a computer, the network live broadcast method in the embodiments shown in Fig. 1 and Fig. 3 may be implemented.
Fig. 15 is a schematic structural diagram of an embodiment of a live server provided in an embodiment of the present application, where the live server may include a processing component 1501 and a storage component 1502.
The storage component 1502 is configured to store one or more computer instructions; the one or more computer instructions are to be invoked for execution by the processing component 1501.
The processing component 1501 may be configured to:
determining a target object based on a target object selection instruction sent by a client and acquiring at least two target images of the target object; the target object selection instruction is generated based on the selection operation of a viewing user of the client to any preset object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
The processing component 1501 may include one or more processors executing computer instructions to perform all or part of the steps of the above-described method. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component 1502 is configured to store various types of data to support operations in the server. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
Of course, the server may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, such as a terminal device.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the network live broadcast method of the embodiment shown in fig. 6 may be implemented.
Fig. 16 is a schematic structural diagram of an embodiment of a terminal device according to an embodiment of the present disclosure, where the terminal device may include a processing component 1601 and a storage component 1602. The storage component 1602 is used to store one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component 1601.
The processing component 1601 may be configured to:
collecting live broadcast data; the live data comprises a target object;
and sending the live broadcast data to a server, so that, for the target object in the live broadcast data, the server acquires at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images respectively to obtain at least two effect images which are displayed simultaneously on a live broadcast interface of a client.
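The broadcaster-terminal side of this exchange can be sketched as a capture-and-upload loop. The frame source, pacing, and payload format below are assumptions made for illustration only; a real terminal would push encoded video through a streaming protocol with metadata on a side channel.

import json
import time

def capture_frame(index):
    # Stand-in for one frame from the camera pipeline, tagged with
    # whether the target object is present in the live data.
    return {"frame": index, "contains_target_object": True}

def send_to_server(payload):
    # Stand-in for the real upstream channel to the live server.
    print("upload:", json.dumps(payload))

for index in range(3):          # three frames are enough for the demo
    send_to_server(capture_frame(index))
    time.sleep(1 / 30)          # roughly 30 frames per second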
The processing component 1601 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1602 is configured to store various types of data to support operation at the terminal. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
Of course, the terminal device may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the network live broadcast method of the embodiment shown in fig. 4 may be implemented.
Fig. 17 is a schematic structural diagram of an embodiment of a terminal device according to the present disclosure, where the terminal device may include a processing component 1701, a display component 1702, and a storage component 1703. The storage component 1703 is used to store one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component 1701.
The processing component 1701 may be configured to:
Controlling the display component 1702 to provide a live interface; the live broadcast interface is used for displaying live broadcast data.
And acquiring at least two effect images corresponding to the target object in the live broadcast data.
The at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are obtained by the server for the target object;
and simultaneously displaying the at least two effect images on the live broadcast interface.
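One possible layout policy for that simultaneous display step is sketched below: split the live interface horizontally so each effect image gets its own slot. The screen dimensions and slot geometry are invented for illustration; in the embodiments above, the concrete positions are preset display positions determined by the server.

def layout_slots(effect_images, screen_width=1080, panel_height=400):
    # Give each effect image an equal-width slot so that all of them
    # are visible at once on the live interface.
    width = screen_width // len(effect_images)
    return [{"image": image, "x": i * width, "y": 0, "w": width, "h": panel_height}
            for i, image in enumerate(effect_images)]

for slot in layout_slots(["effect-A", "effect-B"]):
    print(slot)
# {'image': 'effect-A', 'x': 0, 'y': 0, 'w': 540, 'h': 400}
# {'image': 'effect-B', 'x': 540, 'y': 0, 'w': 540, 'h': 400}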
The processing component 1701 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1703 is configured to store various types of data to support operations at the terminal. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1702 may be an electroluminescent (EL) display, a liquid crystal display or a microdisplay with a similar structure, or a retinal direct-projection display or a similar laser-scanning display.
Of course, the terminal device may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the information display method of the embodiment shown in fig. 5 may be implemented.
Fig. 18 is a schematic structural diagram of an embodiment of a terminal device according to the embodiment of the present disclosure, where the terminal device may include a processing component 1801, a display component 1802, and a storage component 1803. The storage component 1803 is configured to store one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component 1801.
The processing component 1801 may be configured to:
Controlling the display component 1802 to provide a live interface; the live broadcast interface is used for displaying live broadcast data.
And sending a target object selection instruction to the server, so that the server determines a target object based on the target object selection instruction and acquires at least two target images corresponding to the target object.
The target object selection instruction is generated based on the selection operation of a viewing user for any preset object.
And acquiring at least two effect images corresponding to the target object.
The at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are obtained by the server for the target object.
And simultaneously displaying the at least two effect images on the live broadcast interface.
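The target object selection instruction itself can be pictured as a small message from the viewer's terminal to the live server. The field names below are assumptions made for this sketch, not a documented protocol.

import json

def on_preset_object_selected(viewer_id, object_id):
    # Build the selection instruction generated by the viewing user's
    # selection operation on a preset object in the live interface.
    instruction = {"type": "target_object_selection",
                   "viewer": viewer_id,
                   "object_id": object_id}
    return json.dumps(instruction)  # would be sent to the live server

print(on_preset_object_selected("viewer-42", "obj-1"))
# {"type": "target_object_selection", "viewer": "viewer-42", "object_id": "obj-1"}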
The processing component 1801 may include one or more processors to execute computer instructions to perform all or part of the steps of the above-described method. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1803 is configured to store various types of data to support operation at the terminal. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1802 may be an electroluminescent (EL) display, a liquid crystal display or a microdisplay with a similar structure, or a retinal direct-projection display or a similar laser-scanning display.
Of course, the terminal device may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the information display method of the embodiment shown in fig. 7 may be implemented.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (44)

1. A method for live webcasting, comprising:
for a target object in live data, acquiring at least two target images of the target object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
2. The method of claim 1, wherein the obtaining, for a target object in live data, at least two target images of the target object comprises:
identifying a target object in the live data;
at least two target images of the target object are acquired.
3. The method of claim 1, wherein before the acquiring, for a target object in live data, at least two target images of the target object, the method further comprises:
acquiring first preset information input by a live broadcast user in a live broadcast process;
switching to a preset live broadcast mode based on the first preset information;
the acquiring, for a target object in live data, at least two target images of the target object includes:
and in the preset live broadcast mode, for a target object in live broadcast data, acquiring at least two target images of the target object.
4. The method according to claim 3, wherein in the preset live mode, for a target object in live data, acquiring at least two target images of the target object comprises:
identifying a target object in the live broadcast data in the preset live broadcast mode;
generating the at least two target images based on the target object.
5. The method of claim 4, wherein the generating the at least two target images based on the target object comprises:
acquiring three-dimensional spatial data of the target object;
at least two target images of the target object are generated based on the three-dimensional spatial data.
6. The method of claim 5, further comprising:
acquiring three-dimensional attitude data of the target object;
and synchronously controlling the three-dimensional attitudes of the at least two target images based on the three-dimensional attitude data.
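Claims 5 and 6 can be pictured with a toy projection: generate two target images of the same object from two viewpoints, and keep their three-dimensional attitudes synchronized by applying one shared pose rotation before each per-view projection. The point cloud, pinhole camera model, and angles below are toy assumptions, not the method the patent prescribes.

import numpy as np

def rot_y(theta):
    # Rotation matrix about the vertical axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def project(points, view_theta, pose_theta, focal=500.0, depth=5.0):
    # Shared pose first (claim 6: synchronized 3D attitude), then the
    # per-image viewpoint (claim 5: at least two target images).
    posed = points @ rot_y(pose_theta).T
    viewed = posed @ rot_y(view_theta).T
    z = viewed[:, 2] + depth
    return focal * viewed[:, :2] / z[:, None]   # 2D pixel coordinates

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)                     # toy 3D spatial data
pose = np.deg2rad(20.0)                          # current attitude of the object
front = project(cube, view_theta=0.0, pose_theta=pose)
side = project(cube, view_theta=np.deg2rad(90.0), pose_theta=pose)
print(front.shape, side.shape)                   # two attitude-synchronized views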
7. The method of claim 3, wherein the switching to a preset live mode based on the first preset information comprises:
identifying a first control instruction corresponding to the first preset information;
and responding to the first control instruction, and switching to a preset live broadcast mode.
8. The method of claim 7, wherein the acquiring of the first preset information input by the live user in the live broadcast process comprises:
collecting first preset voice information input by a live user in a live broadcast process;
the identifying of the first control instruction corresponding to the first preset information comprises:
and carrying out voice recognition on the first preset voice information to obtain the first control instruction.
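A sketch of that voice path: run the captured audio through a recognizer and map the transcript to the first control instruction. Here speech_to_text is a stub standing in for a real speech recognition engine, and the trigger phrase and instruction name are invented; the patent specifies none of them.

def speech_to_text(audio_bytes):
    # Stub: a real implementation would invoke a speech recognition
    # engine here; we pretend it transcribed the phrase below.
    return "start try-on mode"

TRIGGER_PHRASES = {
    "start try-on mode": "SWITCH_TO_PRESET_LIVE_MODE",  # invented mapping
}

def recognize_control_instruction(audio_bytes):
    transcript = speech_to_text(audio_bytes).strip().lower()
    return TRIGGER_PHRASES.get(transcript)  # None if nothing matched

print(recognize_control_instruction(b""))   # SWITCH_TO_PRESET_LIVE_MODE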
9. The method of claim 7, wherein the acquiring of the first preset information input by the live user in the live broadcast process comprises:
acquiring first sensing data generated by a sensing assembly based on a first preset gesture output by a live user in a live broadcasting process;
the identifying of the first control instruction corresponding to the first preset information comprises:
and obtaining the first control instruction based on the first sensing data.
10. The method according to claim 1, wherein the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on a live interface of a client at the same time comprises:
rendering and displaying the at least two associated objects in the corresponding target images respectively to obtain at least two effect images;
and simultaneously displaying the at least two effect images in a live interface of the client.
11. The method according to claim 1, wherein the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on a live interface of a client at the same time comprises:
simultaneously displaying the at least two target images in a live interface of the client;
and sending the at least two associated objects to the client, so that the client renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displays the at least two effect images.
12. The method according to claim 1, wherein the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on a live interface of a client at the same time comprises:
sending, to a live broadcast end, at least two effect images obtained by rendering the at least two associated objects into the corresponding target images;
and controlling the live broadcast end to project the at least two effect images to a projection screen in a live broadcast lens acquisition range at the same time so that the client can play live broadcast data showing the at least two effect images at a live broadcast interface at the same time.
13. The method of claim 1, wherein the determining at least two associated objects corresponding to the target object comprises:
receiving at least two object identifications sent by a live broadcast end; wherein the at least two object identifications are generated based on a trigger operation of a live user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
14. The method of claim 1, wherein the determining at least two associated objects corresponding to the target object comprises:
acquiring at least one user characteristic of the client;
and matching at least two associated objects corresponding to the target object based on the at least one user characteristic.
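The matching step in claim 14 can be pictured as scoring candidate associated objects against the viewing user's features. The features, tags, and scoring rule below are invented for this sketch; the patent does not prescribe a particular matching algorithm.

USER_FEATURES = {"style": "casual", "season": "summer"}   # invented user features

CANDIDATES = [
    {"name": "linen shirt", "tags": {"style": "casual", "season": "summer"}},
    {"name": "wool coat", "tags": {"style": "formal", "season": "winter"}},
    {"name": "denim jacket", "tags": {"style": "casual", "season": "spring"}},
]

def match_associated_objects(features, candidates, count=2):
    # Score each candidate by how many user features its tags match
    # and keep the best `count` of them (at least two in the claims).
    def score(item):
        return sum(item["tags"].get(key) == value
                   for key, value in features.items())
    return sorted(candidates, key=score, reverse=True)[:count]

for item in match_associated_objects(USER_FEATURES, CANDIDATES):
    print(item["name"])   # linen shirt, then denim jacket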
15. The method of claim 1, wherein the determining at least two associated objects corresponding to the target object comprises:
receiving at least two object identifications sent by the client; wherein the at least two object identifications are generated based on a triggering operation of a viewing user for the at least two associated objects;
and determining at least two associated objects corresponding to the target object based on the at least two object identifications.
16. The method according to claim 1, wherein the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on a live interface of a client at the same time comprises:
determining respective corresponding preset display positions of the at least two target images in a live interface of the client;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the at least two effect images at corresponding preset display positions in a live interface of the client side simultaneously.
17. The method according to claim 16, wherein after the at least two effect images obtained by rendering the at least two associated objects into the corresponding target images are simultaneously displayed at corresponding preset display positions in a live interface of the client, the method further comprises:
receiving a moving instruction which is sent by a live broadcast end and generated based on the moving operation of a live broadcast user for any effect image;
performing display position movement processing on that effect image based on the movement instruction.
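Claim 17's movement handling can be pictured as a table of display positions updated by the delta carried in the move instruction; the instruction format and coordinates are assumptions made for this sketch.

POSITIONS = {"effect-0": (0, 0), "effect-1": (540, 0)}   # invented layout state

def handle_move_instruction(instruction):
    # Shift the named effect image by the offsets in the instruction.
    image_id = instruction["image_id"]
    x, y = POSITIONS[image_id]
    POSITIONS[image_id] = (x + instruction["dx"], y + instruction["dy"])

handle_move_instruction({"image_id": "effect-1", "dx": -100, "dy": 40})
print(POSITIONS)   # {'effect-0': (0, 0), 'effect-1': (440, 40)}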
18. The method according to claim 1, wherein the rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and the displaying on a live interface of a client at the same time comprises:
receiving an effect image display instruction sent by the client; the effect image display instruction is generated based on the triggering operation of a viewing user of the client on the effect image display control;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
19. The method according to claim 1, wherein after the at least two effect images obtained by rendering the at least two associated objects into the corresponding target images are simultaneously displayed on a live interface of a client, the method further comprises:
receiving a replacement instruction for any effect image sent by the client; wherein the replacement instruction is generated based on a replacement operation of a viewing user for an associated object of that effect image;
determining an associated object to be replaced based on the replacement instruction;
and re-rendering the associated object to be replaced into the target image of that effect image to obtain a replaced effect image, and performing replacement display on a live interface of the client.
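Claim 19's replacement flow, sketched with the same kind of invented stand-ins as above: look up the slot targeted by the replacement instruction, swap in the new associated object, and re-render that slot's target image.

def render_overlay(target_image, associated_object):
    # Stand-in for re-rendering an associated object into a target image.
    return f"{target_image}+{associated_object}"

EFFECT_SLOTS = [   # current state of the live interface (invented)
    {"target_image": "view-0", "associated": "red dress"},
    {"target_image": "view-1", "associated": "blue dress"},
]

def handle_replacement_instruction(slot_index, new_associated):
    slot = EFFECT_SLOTS[slot_index]
    slot["associated"] = new_associated         # associated object to be replaced
    return render_overlay(slot["target_image"], new_associated)

print(handle_replacement_instruction(0, "green dress"))   # view-0+green dress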
20. A method for live webcasting, comprising:
collecting live broadcast data; the live data comprises a target object;
and sending the live broadcast data to a server, so that, for the target object in the live broadcast data, the server acquires at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images respectively to obtain at least two effect images which are displayed simultaneously on a live broadcast interface of a client.
21. The method of claim 20, wherein the capturing live data comprises:
acquiring first preset information input by a live broadcast user in a live broadcast process;
the sending the live broadcast data to a server includes:
and sending the first preset information to the server side so that the server side can switch to a preset live broadcast mode based on the first preset information, and acquiring at least two target images of a target object in live broadcast data in the preset live broadcast mode.
22. The method of claim 20, wherein after the sending the live data to a server, the method further comprises:
receiving at least two effect images sent by the server;
simultaneously displaying the at least two effect images in a live interface of a live end; the at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
23. The method of claim 20, wherein after the sending the live data to the server, the method further comprises:
receiving at least two target images corresponding to the target object sent by the server;
simultaneously displaying the at least two target images in a live interface of a live end;
receiving at least two associated objects corresponding to the target object sent by the server;
rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images;
and displaying the at least two effect images in a live interface of the live end.
24. The method of claim 20, wherein after the sending the live data to the server, the method further comprises:
receiving at least two effect images sent by the server;
and simultaneously projecting the at least two effect images onto a screen within a live broadcast lens acquisition range, so as to capture live broadcast data that simultaneously displays the at least two effect images.
25. The method of claim 20, further comprising:
collecting three-dimensional space data of the target object in a live broadcast process;
and sending the three-dimensional space data to the server so that the server can generate at least two target images of the target object based on the three-dimensional space data.
26. The method of claim 20, further comprising:
acquiring three-dimensional attitude data of the target object;
and sending the three-dimensional attitude data to the server side so that the server side can synchronously control the three-dimensional attitudes of the at least two target images based on the three-dimensional attitude data.
27. An information display method, comprising:
providing a live interface; the live interface is used for displaying live data;
acquiring at least two effect images corresponding to a target object in the live broadcast data; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are obtained by the server for the target object;
and simultaneously displaying the at least two effect images on the live broadcast interface.
28. The method of claim 27, wherein the obtaining at least two effect images corresponding to a target object in live data comprises:
receiving at least two target images corresponding to the target object sent by the server;
simultaneously displaying the at least two target images in a live interface of the client;
receiving at least two associated objects corresponding to the target object sent by the server;
rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images.
29. The method of claim 27, wherein the obtaining at least two effect images corresponding to a target object in live data comprises:
receiving at least two effect images of the target object sent by the server; the at least two effect images are obtained by rendering the at least two associated objects to the corresponding target images respectively by the server side.
30. The method of claim 27, wherein before the obtaining at least two effect images corresponding to the target object in the live data, further comprising:
generating an effect image display instruction based on the triggering operation of a viewing user on the effect image display control;
and sending the effect image display instruction to the server, so that the server renders the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and simultaneously displaying the at least two effect images on a live interface of the client.
31. The method of claim 27, wherein the presenting the at least two effect images simultaneously on a live interface of a client comprises:
simultaneously displaying the at least two effect images at respective preset display positions of a live interface of the client; and the preset display positions of the at least two effect images are determined by the server.
32. The method of claim 27, further comprising, before the live interface of the client simultaneously presents the at least two effect images:
outputting a preset display window in a live interface of the client;
and simultaneously displaying the at least two effect images in the preset display window.
33. A method for live webcasting, comprising:
determining a target object based on a target object selection instruction sent by a client and acquiring at least two target images of the target object; the target object selection instruction is generated based on the selection operation of a viewing user of the client to any preset object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
34. An information display method, comprising:
providing a live interface; the live interface is used for displaying live data;
sending a target object selection instruction to a server, so that the server can determine a target object based on the target object selection instruction and obtain at least two target images corresponding to the target object; the target object selection instruction is generated based on the selection operation of a viewing user for any preset object;
acquiring at least two effect images corresponding to the target object; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are obtained by the server for the target object;
and simultaneously displaying the at least two effect images on the live broadcast interface.
35. A webcast apparatus, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring at least two target images of a target object in live broadcast data aiming at the target object;
the first determining module is used for determining at least two associated objects corresponding to the target object;
the first display module is used for rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images on a live interface of the client at the same time.
36. A webcast apparatus, comprising:
the live broadcast data acquisition module is used for acquiring live broadcast data; the live data comprises a target object;
and the live broadcast data sending module is used for sending the live broadcast data to a server, so that, for the target object in the live broadcast data, the server acquires at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images to obtain at least two effect images which are displayed simultaneously on a live broadcast interface of a client.
37. An information display device characterized by comprising:
the first live broadcast interface providing module is used for providing a live broadcast interface; the live interface is used for displaying live data;
the second acquisition module is used for acquiring at least two effect images corresponding to a target object in the live broadcast data; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are obtained by the server for the target object;
and the second display module is used for displaying the at least two effect images simultaneously on the live broadcast interface.
38. A webcast apparatus, comprising:
the target object determining module is used for determining a target object based on a target object selection instruction sent by the client;
a third obtaining module, configured to obtain at least two target images of the target object; the target object selection instruction is generated based on the selection operation of a viewing user of the client to any preset object;
the second determination module is used for determining at least two associated objects corresponding to the target object;
and the third display module is used for rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images on a live interface of the client at the same time.
39. An information display device characterized by comprising:
the second live broadcast interface providing module is used for providing a live broadcast interface; the live interface is used for displaying live data;
the selection instruction sending module is used for sending a target object selection instruction to the server so that the server can determine a target object based on the target object selection instruction and obtain at least two target images corresponding to the target object; the target object selection instruction is generated based on the selection operation of a viewing user for any preset object;
the fourth acquisition module is used for acquiring at least two effect images corresponding to the target object; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are obtained by the server for the target object;
and the fourth display module is used for displaying the at least two effect images simultaneously on the live broadcast interface.
40. A live broadcast server is characterized by comprising a processing component and a storage component; the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
aiming at a target object in live data, acquiring at least two target images of the target object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
41. A live broadcast server is characterized by comprising a processing component and a storage component; the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
determining a target object based on a target object selection instruction sent by a client and acquiring at least two target images of the target object; the target object selection instruction is generated based on the selection operation of a viewing user of the client to any preset object;
determining at least two associated objects corresponding to the target object;
and rendering the at least two associated objects to the corresponding target images respectively to obtain at least two effect images, and displaying the effect images simultaneously on a live interface of the client.
42. A terminal device is characterized by comprising a processing component and a storage component; the storage component stores one or more computer program instructions; the one or more computer program instructions for invocation and execution by the processing component;
the processing component is to:
collecting live broadcast data; the live data comprises a target object;
and sending the live broadcast data to a server, so that, for the target object in the live broadcast data, the server acquires at least two target images of the target object, determines at least two associated objects corresponding to the target object, and renders the at least two associated objects into the corresponding target images respectively to obtain at least two effect images which are displayed simultaneously on a live broadcast interface of a client.
43. The terminal equipment is characterized by comprising a processing component, a display component and a storage component; the storage component stores one or more computer program instructions; the one or more computer program instructions for invocation and execution by the processing component;
the processing component is to:
the display component provides a live interface; the live interface is used for displaying live data;
acquiring at least two effect images corresponding to a target object in the live broadcast data; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into corresponding target images, and the at least two target images are obtained by the server for the target object;
and simultaneously displaying the at least two effect images on the live broadcast interface.
44. The terminal equipment is characterized by comprising a processing component, a display component and a storage component; the storage component stores one or more computer program instructions; the one or more computer program instructions for invocation and execution by the processing component;
the processing component is to:
the display component provides a live interface; the live interface is used for displaying live data;
sending a target object selection instruction to a server, so that the server can determine a target object based on the target object selection instruction and obtain at least two target images corresponding to the target object; the target object selection instruction is generated based on the selection operation of a viewing user for any preset object;
acquiring at least two effect images corresponding to the target object; the at least two effect images are obtained by respectively rendering at least two associated objects corresponding to the target object into the corresponding target images, and the at least two target images are obtained by the server for the target object;
and simultaneously displaying the at least two effect images on the live broadcast interface.
CN201910395218.4A 2019-05-13 2019-05-13 Network live broadcast method, information display method and device, live broadcast server and terminal equipment Active CN111935489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910395218.4A CN111935489B (en) 2019-05-13 2019-05-13 Network live broadcast method, information display method and device, live broadcast server and terminal equipment

Publications (2)

Publication Number Publication Date
CN111935489A true CN111935489A (en) 2020-11-13
CN111935489B CN111935489B (en) 2023-08-04

Family

ID=73282593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910395218.4A Active CN111935489B (en) 2019-05-13 2019-05-13 Network live broadcast method, information display method and device, live broadcast server and terminal equipment

Country Status (1)

Country Link
CN (1) CN111935489B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010068207A (en) * 2000-07-07 2001-07-23 황인수 Two-way Live education Broadcasting System
US20070294142A1 (en) * 2006-06-20 2007-12-20 Ping Liu Kattner Systems and methods to try on, compare, select and buy apparel
WO2014036642A1 (en) * 2012-09-06 2014-03-13 Decision-Plus M.C. Inc. System and method for broadcasting interactive content
CN105608238A (en) * 2014-11-21 2016-05-25 中兴通讯股份有限公司 Clothes trying-on method and device
US9581962B1 (en) * 2015-11-20 2017-02-28 Arht Media Inc. Methods and systems for generating and using simulated 3D images
CN106875470A (en) * 2016-12-28 2017-06-20 广州华多网络科技有限公司 The method and system for changing main broadcaster's image of live platform
CN107220887A (en) * 2017-05-10 2017-09-29 应凯 Intelligent dressing system with clothes effect printing function
CN107358493A (en) * 2017-05-10 2017-11-17 应凯 Intelligent dressing system with adaptive image design function
CN108933954A (en) * 2017-05-22 2018-12-04 中兴通讯股份有限公司 Method of video image processing, set-top box and computer readable storage medium
US20190019242A1 (en) * 2017-07-12 2019-01-17 Accenture Global Solutions Limited Immersive and artificial intelligence based retail
CN109472655A (en) * 2017-09-07 2019-03-15 阿里巴巴集团控股有限公司 Data object trial method, apparatus and system
US20190104325A1 (en) * 2017-10-04 2019-04-04 Livecloudtv, Llc Event streaming with added content and context
WO2019072096A1 (en) * 2017-10-10 2019-04-18 腾讯科技(深圳)有限公司 Interactive method, device, system and computer readable storage medium in live video streaming
CN108134945A (en) * 2017-12-18 2018-06-08 广州市动景计算机科技有限公司 AR method for processing business, device and terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PAPE-ABDOULAYE FAM et al.: "Energy Efficiency of Hybrid Unicast-Broadcast Networks for Mobile TV Services", 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), pages 1-7 *
YAN XIAOFANG: "An Analysis of Online Live Streaming from the Perspective of Scene Communication" (场景传播视阈下的网络直播探析), 新闻界 (Press Circles), no. 15, pages 51-54 *
FU LIJUN: "Current Status of 5G Technology and Prospects for 4K over 5G Services" (5G技术现状及4K over 5G业务前景), 广播与电视技术 (Radio & TV Broadcast Engineering), no. 06, pages 54-58 *
LIU JINGNAN, GAO KEFU: "Augmented Reality and Its Applications in Navigation and Location Services" (增强现实及其在导航与位置服务中的应用), 地理空间信息 (Geospatial Information), no. 02, pages 1-6 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112702616A (en) * 2020-12-08 2021-04-23 珠海格力电器股份有限公司 Processing method and device for playing content
CN112785381A (en) * 2021-01-28 2021-05-11 维沃移动通信有限公司 Information display method, device and equipment
CN115079878A (en) * 2021-03-15 2022-09-20 北京字节跳动网络技术有限公司 Object display method and device, electronic equipment and storage medium
CN115079878B (en) * 2021-03-15 2024-04-16 北京字节跳动网络技术有限公司 Object display method, device, electronic equipment and storage medium
CN115486088A (en) * 2021-03-30 2022-12-16 京东方科技集团股份有限公司 Information interaction method, computer readable storage medium and communication terminal
WO2022237190A1 (en) * 2021-05-13 2022-11-17 北京达佳互联信息技术有限公司 Information display method and electronic device
CN113438531A (en) * 2021-05-18 2021-09-24 北京达佳互联信息技术有限公司 Object display method and device, electronic equipment and storage medium
CN113438531B (en) * 2021-05-18 2023-09-05 北京达佳互联信息技术有限公司 Object display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111935489B (en) 2023-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant