CN113286186A - Image display method and device in live broadcast and storage medium - Google Patents

Image display method and device in live broadcast and storage medium

Info

Publication number
CN113286186A
CN113286186A (application CN202110615701.6A)
Authority
CN
China
Prior art keywords
anchor
action
image display
control
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110615701.6A
Other languages
Chinese (zh)
Other versions
CN113286186B (en
Inventor
蓝永峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd filed Critical Guangzhou Huya Information Technology Co Ltd
Priority to CN202110615701.6A priority Critical patent/CN113286186B/en
Publication of CN113286186A publication Critical patent/CN113286186A/en
Application granted granted Critical
Publication of CN113286186B publication Critical patent/CN113286186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 — Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/174 — Facial expression recognition
    • Y02D30/70 — Reducing energy consumption in wireless communication networks


Abstract

The present application relates to the field of live network streaming technologies, and in particular to an image display method, device, and storage medium for live broadcasts. The method comprises the following steps: acquiring control parameters corresponding to the anchor's image display; controlling the action performed by the virtual character corresponding to the anchor according to the control parameters; and sending the performed action of the virtual character to the viewers' client interface for display. The scheme provided by this application makes it convenient for an anchor to interact with viewers through a virtual character.

Description

Image display method and device in live broadcast and storage medium
This application is a divisional application of the invention patent application No. 201811185967.6, titled "Image display method, device and storage medium in live broadcast".
Technical Field
The application relates to the technical field of network live broadcast, in particular to an image display method, device and storage medium in live broadcast.
Background
Network live streaming absorbs and extends the advantages of the Internet. For users with live-broadcast needs, a multifunctional live platform integrating audio, video, desktop sharing, document sharing, and interactive sessions can be built on the Internet with advanced multimedia communication technology, allowing enterprises or individuals to carry out comprehensive voice, video, and data communication and interaction online.
In existing Internet live broadcasts, most anchors choose to appear on camera in order to interact better with viewers. When viewers present virtual gifts or otherwise expect a response, an on-camera anchor can express thanks through voice, facial expressions, or body language. Anchors who do not want to appear on camera, or for whom it is inconvenient, are usually handled by covering or blurring their image; after such processing, the anchor's feedback cannot be conveyed to viewers accurately, the interaction effect is poor, and the viewer experience is greatly diminished.
Summary of the Application
To address the current inability to interact well with viewers, this application provides an image display method, device, and storage medium for live broadcasts, so that an anchor can conveniently interact with viewers through a virtual character.
An embodiment of the application first provides an image display method for live broadcasts, comprising the following steps:
acquiring control parameters corresponding to the anchor's image display;
controlling the action performed by the virtual character corresponding to the anchor according to the control parameters;
and sending the performed action of the virtual character to the viewers' client interface for display.
Preferably, the step of acquiring the control parameters corresponding to the anchor's image display includes:
recognizing a face image of the anchor;
acquiring posture feature information of the anchor from the face image;
and converting the posture feature information into control parameters for controlling the anchor's image display.
Preferably, the step of acquiring the control parameters corresponding to the anchor's image display includes:
acquiring a control instruction input by the anchor, the control instruction being an instruction pre-associated with the virtual character and provided for the anchor to input;
and converting the control instruction into control parameters for the anchor's image display.
Preferably, before the step of controlling the action of the virtual character selected by the anchor according to the control parameters, the method includes:
displaying a plurality of virtual characters on the anchor's client interface;
and determining the virtual character selected by the anchor as the one to be displayed, according to the anchor's selection operation on the client interface.
Preferably, before the step of controlling the action of the virtual character selected by the anchor according to the control parameters, the method includes:
displaying a plurality of background pictures on the anchor's client interface;
and replacing the background of the anchor's current live room with the background picture selected by the anchor, according to the anchor's selection operation on the client interface.
Preferably, the step of sending the performed action of the virtual character to a viewer's client interface for display includes:
detecting whether the virtual character has performed the action;
if not, identifying the action the virtual character is currently performing, applying a gradient transition to it, splicing the current action with the action to be performed, and controlling the virtual character to perform the action.
Preferably, after the step of sending the performed action of the virtual character to a viewer's client interface for display, the method further includes:
after the virtual character completes the action, detecting whether an instruction for a next action has been received at the current moment;
and if no such instruction has been received, controlling the virtual character to perform a preset action.
Further, an embodiment of this application also provides an image display device for live broadcasts, comprising:
an acquisition module, configured to acquire control parameters corresponding to the anchor's image display;
a control module, configured to control the action performed by the virtual character corresponding to the anchor according to the control parameters;
and a display module, configured to send the performed action of the virtual character to a viewer's client interface for display.
Further, an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any of the image display methods described above.
Still further, an embodiment of the present application further provides a computer device, where the computer device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the steps of the live image presentation method according to any one of the above technical solutions.
Compared with the prior art, the application at least has the following beneficial effects:
the embodiment of the application provides an image display method in live broadcasting, the execution action of virtual character is controlled through the corresponding control parameter of the image display of anchor, the execution action of virtual character is displayed on the client interface of audience, the virtual character representing the image of anchor is displayed on the client interface of audience, the virtual character changes according to the change of the control parameter of anchor, the requirement that the anchor is not out of the mirror is met, the virtual character which flexibly changes can interact with the audience, the interaction effect is improved, and the user experience is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a live image display method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of acquiring control parameters corresponding to an anchor's image display according to an embodiment of the present application;
fig. 3 is a schematic flowchart of acquiring control parameters corresponding to an anchor's image display according to another embodiment of the present application;
fig. 4 is a scene diagram of the embodiment of fig. 3;
fig. 5 is a schematic flowchart of sending the performed action of the virtual character to a viewer's client interface according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a live image display device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, the first live video image may be referred to as a second live video image, and similarly, the second live video image may be referred to as a first live video image, without departing from the scope of the present disclosure. Both the first live video image and the second live video image are live video images, but they are not the same live video image.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a client, as used herein, includes a device with only a wireless signal receiver and no transmit capability, as well as a device with both receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with a single-line or multi-line display, or without a multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile, and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a client can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space.
The invention first provides an image display method for live broadcasts, suitable for execution at the anchor's client. In one embodiment the method comprises steps S11, S12, and S13; its flowchart is shown in fig. 1. The specific steps are as follows:
S11, acquiring control parameters corresponding to the anchor's image display.
The image display includes information such as the anchor's expressions, actions, form, and clothing. The anchor is a user who presents to other viewer users, for example by performing talents for the audience in the live room, while the viewers are users who enter the live room to watch. For instance, if the anchor sets the live-room background to a game scene, the anchor and viewers can each have a corresponding virtual character enter that scene; in such a scene the virtual characters of both the anchor and the viewers need to respond in real time, and in this sense a viewer can also act as an anchor.
In one embodiment, the step of obtaining the control parameter corresponding to the avatar display of the anchor, whose flow diagram is shown in fig. 2, includes the following sub-steps:
S21, recognizing the face image of the anchor.
In one embodiment, before recognizing the anchor's face image, the method further includes: acquiring a face image of the anchor. The face image is recognized using image recognition technology. To recognize the anchor's face accurately, multiple face images of the anchor can be acquired and a recognition model of the anchor built from a large number of training samples, improving both the accuracy and the speed of recognition.
S22, acquiring the posture feature information of the anchor from the face image.
The face image is analyzed by an image recognition algorithm to extract feature information for the anchor's various postures. For example, when the anchor laughs or smiles, feature information such as the curvature of the eyes and eyebrows, the shape of the mouth, and the number or area of exposed teeth is obtained, so that control parameters such as the expression, form, and action of the anchor's image can subsequently be controlled according to this feature information.
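An illustrative sketch of step S22, assuming facial landmarks are already available as (x, y) pixel coordinates. The landmark format, function name, and angle computation are assumptions for illustration; the patent only states that features such as eye and eyebrow curvature, mouth shape, and exposed teeth are extracted from the face image.

```python
import math

def mouth_raise_deg(mouth_corner, mouth_center):
    """Degrees by which a mouth corner sits above the mouth center --
    a crude, hypothetical stand-in for the 'mouth corner raised' feature."""
    dx = mouth_corner[0] - mouth_center[0]
    dy = mouth_center[1] - mouth_corner[1]  # image y axis grows downward
    return math.degrees(math.atan2(dy, abs(dx)))

# A slightly raised smile: corner 10 px to the side, 3 px above the center.
angle = mouth_raise_deg((10, 5), (0, 8))
```

Any real implementation would derive such features from a landmark detector; the geometry here only shows how a raw pose measurement becomes a numeric feature.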
S23, converting the posture feature information into control parameters for controlling the anchor's image display.
Using the posture feature information obtained in step S22, the anchor's current posture feature information is converted into control parameters for the anchor's image display. For example, if the anchor's current posture is a smile, with posture features such as an eye-bend arc of 10°, a mouth corner raised by 15°, and 6 teeth exposed, the corresponding display control parameters are determined from the association between the anchor's posture features and the display control parameters. If the virtual character selected by the anchor is a kitten, the features "eye-bend arc of 10°, mouth corner raised 15°, 6 teeth exposed" map to control parameters for the virtual character such as: the kitten's eyes bend at 45° and its mouth corner rises by 30°.
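A hypothetical sketch of step S23: converting the anchor's posture features into control parameters for the selected virtual character, using the kitten figures from the example above. The feature names, the kitten's scaling factors, and the dict-of-functions layout are illustrative assumptions, not the patent's actual implementation.

```python
# Per-character mapping from anchor posture features to character parameters.
CHARACTER_MAPPINGS = {
    "kitten": {
        "eye_bend_deg": lambda v: v * 4.5,     # anchor 10 deg -> kitten 45 deg
        "mouth_raise_deg": lambda v: v * 2.0,  # anchor 15 deg -> kitten 30 deg
        # no mapping for exposed teeth: this kitten design has none
    },
}

def to_character_params(character: str, posture: dict) -> dict:
    """Map anchor posture features to character control parameters,
    silently dropping features the character's design does not support."""
    mapping = CHARACTER_MAPPINGS[character]
    return {k: f(posture[k]) for k, f in mapping.items() if k in posture}

params = to_character_params(
    "kitten",
    {"eye_bend_deg": 10.0, "mouth_raise_deg": 15.0, "teeth_shown": 6},
)
```

The lookup-table design reflects the pre-established association the description mentions: each character ships with its own feature-to-parameter rules, so unsupported features (the kitten's teeth) simply fall away.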
With the scheme provided by this embodiment, the anchor's face image is recognized automatically and the control parameters of the anchor's image display are derived from the anchor's posture feature information, so that the virtual character's actions are controlled according to those parameters without the anchor having to adjust them manually, improving the user experience.
In another embodiment, acquiring the control parameters corresponding to the anchor's image display can be implemented through the following sub-steps, as shown in fig. 3:
S31, acquiring a control instruction input by the anchor, the control instruction being pre-associated with the virtual character and provided for the anchor to input.
The virtual characters are virtual persons, figures, or cartoon characters from preset creative works, customized to the anchor's image. The control instructions input by the anchor are associated with the virtual character the anchor has selected. Each action of the virtual character corresponds to an independent control element triggered by the anchor, and each action has a control panel, which may take the form of a knob or a progress bar. Taking the progress bar as an example, each action corresponds to one progress bar with values from 0 to 1, where 0 means the action is not triggered and 1 means it is in the triggered state. When an action is triggered, sliding the progress bar changes the action's amplitude. A scene diagram for this embodiment is shown in fig. 4.
Preferably, when a progress bar is detected at 0, the recognition operation for that action need not be performed, improving efficiency and saving resources.
In one application scenario, when a progress bar is detected at 1, i.e. the action is in the triggered state, the user's control instruction or recognition operation can be received and the anchor's posture feature information recognized. A specific algorithm, combined with product requirements, locates the corresponding action module; for example, if the anchor says or gestures "hello" and "hello" is detected, that action is considered to need triggering. If the action is already triggered it is not processed again; otherwise action recognition is stopped, the currently triggered action is found, and a gradual animation is applied to it so that the two actions transition and switch naturally.
When the state is 0 and the progress bar has not started to slide, the action it represents has not yet started. The background action controller then relocates to the currently recognized action, resets the controller's size to match the currently recognized layout, and displays the animation corresponding to the virtual character; that is, the currently started action is identified and displayed on the viewer side.
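The per-action progress-bar controller described above can be sketched as follows. The value range (0 = untriggered and safe to skip recognition, 1 = fully triggered, intermediate values scaling amplitude) comes from the description; the class and method names are assumptions.

```python
class ActionSlider:
    """One progress bar per virtual-character action, valued in [0, 1]."""

    def __init__(self, name: str):
        self.name = name
        self.progress = 0.0  # 0 = not triggered, 1 = fully triggered

    def slide(self, value: float) -> None:
        # Clamp to the [0, 1] range of the progress bar.
        self.progress = min(1.0, max(0.0, value))

    def needs_recognition(self) -> bool:
        # At 0, the recognition step is skipped to save resources.
        return self.progress > 0.0

    def amplitude(self, base_amplitude: float) -> float:
        # Sliding the bar changes the amplitude of the triggered action.
        return base_amplitude * self.progress

wave = ActionSlider("wave")
wave.slide(0.5)
```

Modeling each action as an independent slider mirrors the "independent control element" per action: the controller can poll only the sliders above zero, which is the resource saving the description claims.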
S32, converting the control instruction into control parameters for the anchor's image display.
The control instruction sent by the user is converted into control parameters for the anchor's image display. Before step S32, the method further includes: pre-establishing a mapping between control instructions and the control parameters of the anchor's image display, so that the corresponding parameters can be obtained quickly from the currently received instruction.
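A minimal sketch of the pre-established mapping from control instructions to image-display control parameters used in step S32. The instruction names and parameter payloads are illustrative assumptions.

```python
# Mapping built before step S32; a real system would populate it per
# virtual character when the anchor selects one.
INSTRUCTION_TO_PARAMS = {
    "wave": {"arm_raise_deg": 60, "duration_s": 1.5},
    "bow":  {"torso_bend_deg": 30, "duration_s": 2.0},
}

def instruction_to_params(instruction: str) -> dict:
    """Look up the control parameters pre-associated with an instruction;
    unknown instructions yield an empty parameter set (a no-op)."""
    return INSTRUCTION_TO_PARAMS.get(instruction, {})
```

A dictionary lookup is the simplest realization of "quickly obtain the corresponding control parameters from the current instruction": constant-time and with no face-image processing involved.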
By converting the anchor's input control instruction into control parameters for the image display, the anchor's image shown on the viewer side can be controlled manually by the anchor without acquiring the anchor's face image in real time, reducing the system's processing complexity and energy consumption.
S12, controlling the action performed by the virtual character selected by the anchor according to the control parameters.
Before controlling the action of the anchor-selected virtual character according to the control parameters, the method further includes: displaying a plurality of virtual characters on the anchor's client interface; and determining the virtual character to be displayed according to the anchor's selection operation on that interface.
Specifically, when multiple virtual characters are available, they are displayed on the anchor's client so the anchor can choose one to represent their image according to their own preference. For example, the anchor may select a kitten or a puppy as the virtual character representing their image.
In one embodiment, before controlling the action of the anchor-selected virtual character according to the control parameters, the method includes: displaying a plurality of background pictures on the anchor's client interface; and replacing the background of the anchor's current live room with the selected picture according to the anchor's selection operation on that interface.
The anchor may choose a background suited to the current live topic; for a music broadcast, for example, the background could be a concert hall. Backgrounds include, but are not limited to, game scenes, star shows, and outdoor scenes, giving users an immersive experience.
In one embodiment, before step S12, the method further includes: establishing an association between the control parameters of the anchor's image display and the control parameters of the virtual character.
In one embodiment, historical behavior data of the anchor and the virtual character is acquired, and the association between the anchor's display control parameters and the virtual character's control parameters for the same posture is determined from that data. Continuing the earlier example, the anchor selects a kitten as the virtual character, and the anchor's posture features are detected as an eye-bend arc of 10°, a mouth corner raised 15°, and 6 teeth exposed, indicating that the anchor is smiling. Constrained by the character's design, the kitten may have no teeth at all, so the kitten's smiling posture features differ greatly from the anchor's; the association established from historical data can then determine the kitten's control parameters as: eyes bent at 45° and mouth corner raised by 30°.
Pre-establishing this association allows the virtual character's control parameters to be obtained quickly from the anchor's current display control parameters. When multiple virtual characters are available the pre-established association is especially important, as it lets the character be quickly controlled to give appropriate feedback based on the anchor's current posture feature information.
In one embodiment, to establish the association, a recognition model between the anchor's display control parameters and the virtual character's control parameters can be trained from a large number of samples, so that the control parameters of the anchor-selected character are obtained accurately and quickly from the currently acquired posture feature information of the anchor.
S13, sending the performed action of the virtual character to the viewers' client interface for display.
The action of the anchor-selected virtual character is shown on the viewer's client according to the action to be performed. In one embodiment, step S13 includes the following sub-steps, whose flowchart is shown in fig. 5:
S51, detecting whether the virtual character has performed the action.
It is detected whether the virtual character has responded to the instruction for the action; if so, the action continues, and it is checked whether the character's current action matches the action corresponding to the current point in the timeline. If they match, the character is deemed to have performed the action; if not, it is deemed not to have performed it.
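A sketch of the matching check in step S51: decide whether the character has performed the requested action by comparing its current pose parameters with the pose the action's timeline expects at this instant. The parameter names and the tolerance are assumptions; the patent does not specify how the comparison is done.

```python
def action_performed(current_pose: dict, expected_pose: dict,
                     tol: float = 1.0) -> bool:
    """True when every expected parameter is within `tol` of the
    character's current pose; missing parameters count as a mismatch."""
    return all(
        abs(current_pose.get(k, float("inf")) - v) <= tol
        for k, v in expected_pose.items()
    )
```

A tolerance band rather than exact equality is the natural choice here, since animation parameters are continuous values sampled mid-transition.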
S52, if not, identifying the action the virtual character is currently performing, applying a gradient transition to it, splicing the current action with the action to be performed, and controlling the virtual character to perform that action.
If the selected virtual character has not performed the current action, the action it is currently performing is identified, and the feature information of both the current action and the action to be performed is obtained. The two actions are spliced according to these two sets of feature information, a gradient transition is applied to the in-progress action, and after the current action finishes the character is controlled to perform the new action, achieving a natural transition between the two.
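The splicing step above can be sketched as interpolating the in-progress action's parameters toward the new action's parameters over a few frames so the switch looks natural. Linear interpolation is an assumption; the patent only calls for a gradient transition.

```python
def blend_actions(current: dict, target: dict, steps: int):
    """Yield `steps` intermediate parameter dicts from `current` to `target`,
    over the parameters the two actions share."""
    keys = current.keys() & target.keys()
    for i in range(1, steps + 1):
        t = i / steps
        yield {k: current[k] + (target[k] - current[k]) * t for k in keys}

# Fade a lowered arm (0 deg) into a raised arm (60 deg) over four frames.
frames = list(blend_actions({"arm_deg": 0.0}, {"arm_deg": 60.0}, 4))
```

The last yielded frame equals the target pose, so the character lands exactly on the new action before it starts playing.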
In one embodiment, after step S13, the method further includes: after the virtual character completes the execution action, detecting whether a next action execution instruction has been received at the current moment; and if no next action execution instruction has been received, controlling the virtual character to perform a preset action.
Specifically, after the virtual character completes the action, it is detected whether a next action execution instruction exists at the current moment. If none is detected, the virtual character enters a standby state in which it performs a preset action, such as a kitten's smiling action. In this way, the preset action is performed automatically whenever no execution action is pending, keeping the virtual character lively.
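The standby fallback above amounts to a small dispatch rule: pop the next queued instruction if one arrived, otherwise return the preset idle action. The queue model and action names below are illustrative assumptions:

```python
def next_action(pending, idle_action="smile"):
    """After the current action completes, return the next queued action
    execution instruction; fall back to the preset idle action when no
    follow-up instruction has been received."""
    if pending:
        return pending.pop(0)
    return idle_action

queue = ["wave"]
print(next_action(queue))  # first call consumes the queued instruction
print(next_action(queue))  # empty queue -> preset standby action
```
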
In the in-live-broadcast image display scheme provided by the embodiments of the present application, the execution action of the virtual character is controlled through the control parameters related to the anchor's image display, and that execution action is displayed on the viewer's client interface. A virtual character representing the anchor's image is thus shown on the viewer's client interface and changes as the anchor's control parameters change. This both satisfies the anchor's need not to appear on camera and allows the anchor to interact with viewers through a flexibly changing virtual character, improving user experience.
Further, an embodiment of the present invention provides an image display device in live broadcast, a schematic structural diagram of which is shown in fig. 6. The device includes an acquisition module 61, a control module 62 and a display module 63, whose functions are as follows:
the acquisition module 61 is used for acquiring control parameters corresponding to the image display of the anchor;
the control module 62 is configured to control an execution action of the virtual character selected by the anchor according to the control parameter;
and the display module 63 is configured to send the execution action of the virtual character to a client interface of a viewer for display.
With regard to the live image display device in the above embodiment, the specific manner in which each module and unit performs operations has been described in detail in the embodiment related to the method, and will not be elaborated herein.
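The three-module device described above can be outlined as a minimal class, with one method per module. The method bodies, data shapes, and names below are placeholders for illustration only, not the patented implementation:

```python
class LiveImageDisplayDevice:
    """Sketch of the device in fig. 6: acquisition, control, and display
    modules chained into a pipeline. All behaviour here is assumed."""

    def acquire(self, anchor_input):
        # acquisition module 61: control parameters for the anchor's display
        return {"param": anchor_input}

    def control(self, params):
        # control module 62: derive the virtual character's execution action
        return "action_for_" + params["param"]

    def display(self, action):
        # display module 63: payload sent to viewer client interfaces
        return {"to_viewers": action}

dev = LiveImageDisplayDevice()
print(dev.display(dev.control(dev.acquire("nod"))))
```
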
Further, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements any one of the above-mentioned live image display methods. The storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, and optical cards. That is, the storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer), such as a read-only memory, a magnetic disk, or an optical disk.
Further, an embodiment of the present invention further provides a computer device, where the computer device may be a server, and the computer device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the live image display method according to any one of the above technical solutions.
Fig. 7 is a schematic structural diagram of a computer device according to the present invention, which includes a processor 720, a storage device 730, an input unit 740, a display unit 750, and other components. Those skilled in the art will appreciate that the structure shown in fig. 7 does not limit all computer devices; a device may include more or fewer components than those shown, or combine certain components. The storage device 730 may be used to store the application 710 and various functional modules, and the processor 720 runs the application 710 stored in the storage device 730 to perform the various functional applications and data processing of the device. The storage device 730 may be an internal memory, an external memory, or include both. The internal memory may comprise a read-only memory, a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, or a random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash disk, a magnetic tape, and the like. The storage devices disclosed herein include, but are not limited to, these types; the storage device 730 is provided by way of example only and not by way of limitation.
The input unit 740 is used for receiving input of signals, and the input unit 740 may include a touch panel and other input devices. The touch panel can collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel by using any suitable object or accessory such as a finger, a stylus and the like) and drive the corresponding connecting device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., play control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like. The display unit 750 may be used to display information input by a user or information provided to the user and various menus of the computer device. The display unit 750 may take the form of a liquid crystal display, an organic light emitting diode, or the like. The processor 720 is a control center of the computer device, connects various parts of the entire computer using various interfaces and lines, and performs various functions and processes data by operating or executing software programs and/or modules stored in the storage device 730 and calling data stored in the storage device.
In one embodiment, a computer device includes one or more processors 720, one or more storage devices 730, and one or more applications 710, wherein the one or more applications 710 are stored in the storage device 730 and configured to be executed by the one or more processors 720 to perform the live image display method described in the above embodiments.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and not necessarily in sequence, but in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
It should be understood that the functional units in the embodiments of the present invention may be integrated into one processing module, may each exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several modifications and refinements without departing from the principle of the present application, and such modifications and refinements shall also fall within the protection scope of the present application.

Claims (10)

1. A live image display method is characterized by comprising the following steps:
acquiring control parameters corresponding to the image display of the anchor based on a control instruction input by the anchor and/or a result obtained by identifying a face image of the anchor;
controlling the execution action of the virtual character corresponding to the anchor according to the control parameter;
and sending the execution action of the virtual character to a client of the viewer so as to display the execution action on the viewer's client interface.
2. The live image display method according to claim 1, wherein before obtaining the control parameters corresponding to the image display of the anchor based on the control instruction input by the anchor and/or the result obtained by recognizing the face image of the anchor, the method comprises:
determining that any execution action of the virtual character corresponding to the anchor is in a triggered state, and acquiring a control instruction input by the anchor and/or recognizing a face image of the anchor.
3. The live image display method according to claim 1, wherein the step of obtaining a control parameter corresponding to the image display of the anchor based on the result obtained by recognizing the face image of the anchor comprises:
recognizing a face image of the anchor;
acquiring pose feature information of the anchor according to the face image;
converting the pose feature information into a control parameter for controlling the image display of the anchor;
the step of obtaining the control parameter corresponding to the image display of the anchor based on the control instruction input by the anchor comprises the following steps:
acquiring a control instruction input by the anchor; the control instruction is an instruction associated with the virtual character in advance and provided for the anchor to input;
converting the control instruction into a control parameter of the image display of the anchor;
the converting the control instruction into the control parameter of the image display of the anchor comprises:
in response to the trigger operation of the anchor on the control element, determining amplitude change information of an execution action corresponding to the control element; the trigger operation comprises a rotation operation and a sliding operation;
and converting the amplitude change information into a control parameter for displaying the image of the anchor.
4. The live image display method according to claim 1, wherein before the step of controlling the execution action of the virtual character selected by the anchor according to the control parameter, the method comprises:
displaying a plurality of virtual characters on the anchor's client interface;
and determining the virtual character selected by the anchor as the virtual character to be displayed according to the selection operation of the anchor on the client interface.
5. The live image display method according to claim 1, wherein before the step of controlling the execution action of the virtual character selected by the anchor according to the control parameter, the method comprises:
displaying a plurality of background pictures on a client interface of the anchor;
and replacing the background of the current anchor live broadcast room with the background picture selected by the anchor according to the selection operation of the anchor on the client interface.
6. The live image display method according to claim 1, wherein the step of sending the execution action of the virtual character to a client interface of a viewer for display comprises:
detecting whether the virtual character has performed the execution action;
if not, identifying the current action being performed by the virtual character, performing gradient processing on the current action, splicing the current action with the execution action, and controlling the virtual character to perform the execution action.
7. The live image display method according to claim 1, wherein after the step of sending the execution action of the virtual character to a client interface of a viewer for display, the method further comprises:
after the virtual character completes the execution action, detecting whether a next action execution instruction has been received at the current moment;
and if the next action execution instruction has not been received, controlling the virtual character to perform the preset action.
8. An image display device in live broadcast, comprising:
the acquisition module is used for acquiring control parameters corresponding to the image display of the anchor based on a control instruction input by the anchor and/or a result obtained by identifying a face image of the anchor;
the control module is used for controlling the execution action of the virtual character corresponding to the anchor according to the control parameter;
and the display module is used for sending the execution action of the virtual character to a client of the viewer so as to display the execution action on the viewer's client interface.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the live image display method according to any one of claims 1 to 7.
10. A computer device, characterized in that the computer device comprises:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the steps of the live image display method according to any one of claims 1 to 7.
CN202110615701.6A 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast Active CN113286186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615701.6A CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110615701.6A CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast
CN201811185967.6A CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811185967.6A Division CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Publications (2)

Publication Number Publication Date
CN113286186A true CN113286186A (en) 2021-08-20
CN113286186B CN113286186B (en) 2023-07-18

Family

ID=64857918

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110615701.6A Active CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast
CN201811185967.6A Active CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811185967.6A Active CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Country Status (1)

Country Link
CN (2) CN113286186B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435431A (en) * 2021-08-27 2021-09-24 北京市商汤科技开发有限公司 Posture detection method, training device and training equipment of neural network model
CN114245155A (en) * 2021-11-30 2022-03-25 北京百度网讯科技有限公司 Live broadcast method and device and electronic equipment
WO2023178640A1 (en) * 2022-03-25 2023-09-28 云智联网络科技(北京)有限公司 Method and system for realizing live-streaming interaction between virtual characters
CN117289791A (en) * 2023-08-22 2023-12-26 杭州空介视觉科技有限公司 Meta universe artificial intelligence virtual equipment data generation method

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109922354B9 (en) * 2019-03-29 2020-08-21 广州虎牙信息科技有限公司 Live broadcast interaction method and device, live broadcast system and electronic equipment
CN109922355B (en) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN109788345B (en) * 2019-03-29 2020-03-10 广州虎牙信息科技有限公司 Live broadcast control method and device, live broadcast equipment and readable storage medium
CN109905724A (en) * 2019-04-19 2019-06-18 广州虎牙信息科技有限公司 Live video processing method, device, electronic equipment and readable storage medium storing program for executing
CN110062267A (en) * 2019-05-05 2019-07-26 广州虎牙信息科技有限公司 Live data processing method, device, electronic equipment and readable storage medium storing program for executing
CN110072116A (en) * 2019-05-06 2019-07-30 广州虎牙信息科技有限公司 Virtual newscaster's recommended method, device and direct broadcast server
CN110308792B (en) * 2019-07-01 2023-12-12 北京百度网讯科技有限公司 Virtual character control method, device, equipment and readable storage medium
CN110784676B (en) * 2019-10-28 2023-10-03 深圳传音控股股份有限公司 Data processing method, terminal device and computer readable storage medium
CN111541908A (en) * 2020-02-27 2020-08-14 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
CN111432267B (en) * 2020-04-23 2021-05-21 深圳追一科技有限公司 Video adjusting method and device, electronic equipment and storage medium
CN111970522A (en) * 2020-07-31 2020-11-20 北京琳云信息科技有限责任公司 Processing method and device of virtual live broadcast data and storage medium
CN112135160A (en) * 2020-09-24 2020-12-25 广州博冠信息科技有限公司 Virtual object control method and device in live broadcast, storage medium and electronic equipment
CN112241203A (en) * 2020-10-21 2021-01-19 广州博冠信息科技有限公司 Control device and method for three-dimensional virtual character, storage medium and electronic device
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN112535867B (en) * 2020-12-15 2024-05-10 网易(杭州)网络有限公司 Game progress information processing method and device and electronic equipment
CN112788359B (en) * 2020-12-30 2023-05-09 北京达佳互联信息技术有限公司 Live broadcast processing method and device, electronic equipment and storage medium
CN113289332B (en) * 2021-06-17 2023-08-01 广州虎牙科技有限公司 Game interaction method, game interaction device, electronic equipment and computer readable storage medium
CN113457171A (en) * 2021-06-24 2021-10-01 网易(杭州)网络有限公司 Live broadcast information processing method, electronic equipment and storage medium
CN113518239A (en) * 2021-07-09 2021-10-19 珠海云迈网络科技有限公司 Live broadcast interaction method and system, computer equipment and storage medium thereof
CN113824982A (en) * 2021-09-10 2021-12-21 网易(杭州)网络有限公司 Live broadcast method and device, computer equipment and storage medium
CN114168018A (en) * 2021-12-08 2022-03-11 北京字跳网络技术有限公司 Data interaction method, data interaction device, electronic equipment, storage medium and program product
CN114363685A (en) * 2021-12-20 2022-04-15 咪咕文化科技有限公司 Video interaction method and device, computing equipment and computer storage medium
CN114693294A (en) * 2022-03-04 2022-07-01 支付宝(杭州)信息技术有限公司 Interaction method and device based on electronic certificate and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120133581A1 (en) * 2010-11-29 2012-05-31 International Business Machines Corporation Human-computer interaction device and an apparatus and method for applying the device into a virtual world
CN106878820A (en) * 2016-12-09 2017-06-20 北京小米移动软件有限公司 Living broadcast interactive method and device
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN107180446A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression animation generation method and device of character face's model
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
US20170323467A1 (en) * 2016-05-09 2017-11-09 Activision Publishing, Inc. User generated character animation
CN107750005A (en) * 2017-09-18 2018-03-02 迈吉客科技(北京)有限公司 Virtual interactive method and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Process the method and device of virtual image
CN106993195A (en) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 Virtual portrait role live broadcasting method and system
CN107170030A (en) * 2017-05-31 2017-09-15 珠海金山网络游戏科技有限公司 A kind of virtual newscaster's live broadcasting method and system

Also Published As

Publication number Publication date
CN109120985B (en) 2021-07-23
CN113286186B (en) 2023-07-18
CN109120985A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN109120985B (en) Image display method and device in live broadcast and storage medium
CN111405299B (en) Live broadcast interaction method based on video stream and corresponding device thereof
CN111541936A (en) Video and image processing method and device, electronic equipment and storage medium
US9185349B2 (en) Communication terminal, search server and communication system
US20210409535A1 (en) Updating an avatar status for a user of a messaging system
US20230386204A1 (en) Adding beauty products to augmented reality tutorials
KR20230096043A (en) Side-by-side character animation from real-time 3D body motion capture
US11924540B2 (en) Trimming video in association with multi-video clip capture
CN111984763B (en) Question answering processing method and intelligent device
US11769500B2 (en) Augmented reality-based translation of speech in association with travel
US11516550B2 (en) Generating an interactive digital video content item
EP3679825A1 (en) A printing method and system of a nail printing apparatus, and a medium thereof
US20210406965A1 (en) Providing travel-based augmented reality content relating to user-submitted reviews
US20240171849A1 (en) Trimming video in association with multi-video clip capture
US20220375137A1 (en) Presenting shortcuts based on a scan operation within a messaging system
KR20230022844A (en) Artificial Intelligence Request and Suggestion Cards
CN111708383A (en) Method for adjusting shooting angle of camera and display device
CN111741321A (en) Live broadcast control method, device, equipment and computer storage medium
CN108038160A (en) Dynamic animation store method, dynamic animation call method and device
CN113051435B (en) Server and medium resource dotting method
US20240073166A1 (en) Combining individual functions into shortcuts within a messaging system
CN102157183B (en) Method for realizing video capturing on video player of portable electronic equipment
Albertini et al. Communicating user's focus of attention by image processing as input for a mobile museum guide
US11704626B2 (en) Relocation of content item to motion picture sequences at multiple devices
WO2024051467A1 (en) Image processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant