CN115002554A - Live broadcast picture adjustment method, system, device, and computer equipment

Live broadcast picture adjustment method, system, device, and computer equipment

Info

Publication number
CN115002554A
CN115002554A (application number CN202210519242.6A)
Authority
CN
China
Prior art keywords: live broadcast, picture, live, preset, broadcast picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210519242.6A
Other languages
Chinese (zh)
Inventor
曾家乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210519242.6A
Publication of CN115002554A
Legal status: Pending (current)

Classifications

    • H04N 21/4854: End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/136: Image analysis; segmentation or edge detection involving thresholding
    • H04N 21/440245: Reformatting operations of video signals performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G06T 2207/10016: Image acquisition modality; video or image sequence
    • G06T 2207/20021: Special algorithmic details; dividing image into blocks, subimages or windows
    • G06T 2207/30196: Subject of image; human being or person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application relates to the technical field of webcast and provides a live broadcast picture adjustment method, system, device, and computer equipment. The method includes: a viewer client obtains a first live picture and a target background picture in response to a live picture adjustment instruction, where the instruction is generated at least when it is determined that the ambient brightness information of the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information of the background region in the first live picture satisfies a preset second live picture adjustment condition, and the first live picture is obtained by parsing the live video stream sent by the server; the viewer client then adjusts the background region of the first live picture according to the target background picture to obtain a second live picture and outputs the second live picture to the live room interface. Compared with the prior art, the method and device improve the live viewing experience of viewers in a dim viewing environment.

Description

Live broadcast picture adjustment method, system, device, and computer equipment
Technical Field
The embodiments of the present application relate to the technical field of webcast, and in particular to a live broadcast picture adjustment method, system, device, and computer equipment.
Background
With the rapid development of Internet and streaming media technology, webcast has become an increasingly popular form of entertainment. More and more users experience online interaction with anchors in live rooms, and anchors can also obtain economic benefits through webcasting, which helps relieve social employment pressure and drive regional economic development.
At present, the live viewing environment of a webcast viewer is not controllable; for example, a viewer may watch a live broadcast in a very dimly lit environment. In such a viewing environment, a bright live picture strongly stimulates the viewer's eyes, which is not conducive to improving viewer retention and viewing duration, and also degrades the webcast viewing experience.
Disclosure of Invention
The embodiments of the present application provide a live broadcast picture adjustment method, system, device, and computer equipment, which can solve the technical problem of how to improve the live viewing experience, retention rate, and viewing duration of viewers in a dim live viewing environment. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a live broadcast picture adjustment method, including:
a viewer client obtains a first live picture and a target background picture in response to a live picture adjustment instruction; the live picture adjustment instruction is generated at least when it is determined that the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information corresponding to the background region in the first live picture satisfies a preset second live picture adjustment condition; the first live picture is obtained by parsing a live video stream sent by a server;
the viewer client adjusts the background region in the first live picture according to the target background picture to obtain a second live picture, and outputs the second live picture to a live room interface.
In a second aspect, an embodiment of the present application provides a live broadcast picture adjustment system, including a server and a viewer client;
the viewer client is configured to obtain a first live picture and a target background picture in response to a live picture adjustment instruction; the live picture adjustment instruction is generated at least when it is determined that the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information corresponding to the background region in the first live picture satisfies a preset second live picture adjustment condition; the first live picture is obtained by parsing a live video stream sent by the server;
the viewer client is further configured to adjust the background region in the first live picture according to the target background picture to obtain a second live picture, and to output the second live picture to a live room interface.
In a third aspect, an embodiment of the present application provides a live broadcast picture adjustment apparatus, including:
a first acquisition unit, configured to enable the viewer client to obtain a first live picture and a target background picture in response to a live picture adjustment instruction; the live picture adjustment instruction is generated at least when it is determined that the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information corresponding to the background region in the first live picture satisfies a preset second live picture adjustment condition; the first live picture is obtained by parsing a live video stream sent by a server;
a first adjustment unit, configured to enable the viewer client to adjust the background region in the first live picture according to the target background picture to obtain a second live picture, and to output the second live picture to a live room interface.
In a fourth aspect, an embodiment of the present application provides a computer device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiments of the present application, a live picture adjustment instruction is generated when the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies the preset first live picture adjustment condition and the picture-brightness-related information corresponding to the background region in the first live picture satisfies the preset second live picture adjustment condition. The viewer client obtains the first live picture and a target background picture in response to the instruction, adjusts the background region in the first live picture using the target background picture to obtain a second live picture, and outputs the second live picture to the live room interface. By adjusting the background region of the live picture, the live viewing experience of viewers in a dim viewing environment is improved, as are their retention rate and viewing duration.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic view of an application scenario of a live view adjustment method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a live view adjustment method according to a first embodiment of the present application;
fig. 3 is another schematic flow chart of a live view adjustment method according to a first embodiment of the present application;
fig. 4 is a schematic flowchart of S101 in a live view adjustment method according to a first embodiment of the present application;
fig. 5 is a schematic display diagram of a first confirmation control provided in the embodiment of the present application in a live view interface;
fig. 6 is another schematic flow chart of S101 in a live view adjustment method according to a first embodiment of the present application;
fig. 7 is a schematic display diagram of a background frame list in a live view interface according to an embodiment of the present application;
fig. 8 is a schematic flowchart of S102 in a live view adjustment method according to a first embodiment of the present application;
fig. 9 is a schematic flowchart of a live view adjustment method according to a first embodiment of the present application;
fig. 10 is a schematic view illustrating a display of a virtual clothing list in a live broadcast interface according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a live view adjustment system according to a second embodiment of the present application;
fig. 12 is a schematic structural diagram of a live view adjustment apparatus according to a third embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer device according to a fourth embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to a determination", depending on the context.
As will be appreciated by those skilled in the art, the terms "client", "terminal", and "terminal device" as used herein cover both wireless-signal receiver devices, which have only reception capability and no transmission capability, and devices containing both receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices with or without a multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, facsimile, and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playback terminal, such as a PDA, an MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, and may also be a smart TV, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially a computer device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., wherein a computer program is stored in the memory, and the central processing unit loads a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby accomplishing specific functions.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a live view adjustment method according to an embodiment of the present application, where the application scenario includes an anchor client 101, a server 102, and a viewer client 103, where the anchor client 101 and the viewer client 103 interact with each other through the server 102.
The proposed clients of the embodiment of the present application include the anchor client 101 and the viewer client 103.
It is noted that there are many understandings of the concept of "client" in the prior art, such as: it may be understood as an application program installed in a computer device, or may be understood as a hardware device corresponding to a server.
In the embodiments of the present application, the term "client" refers to a hardware device corresponding to a server, and more specifically, refers to a computer device, such as: smart phones, smart interactive tablets, personal computers, and the like.
When the client is a mobile device such as a smart phone and an intelligent interactive tablet, a user can install a matched mobile application program on the client and can also access a Web application program on the client.
When the client is a non-mobile device such as a Personal Computer (PC), the user can install a matching PC application on the client, and similarly can access a Web application on the client.
The mobile application refers to an application program that can be installed in the mobile device, the PC application refers to an application program that can be installed in the non-mobile device, and the Web application refers to an application program that needs to be accessed through a browser.
Specifically, the Web application program may be divided into a mobile version and a PC version according to the difference of the client types, and the page layout modes and the available server support of the two versions may be different.
In the embodiment of the application, the types of live application programs provided to the user are mobile end live application programs, PC end live application programs and Web end live application programs. The user can autonomously select the mode of participating in the live webcast according to different types of the client adopted by the user.
The present application can divide the clients into a main broadcasting client 101 and a spectator client 103, depending on the identity of the user using the clients.
The anchor client 101 is one end that sends a webcast video, and is typically a client used by an anchor (i.e., a webcast anchor user) in webcast.
The viewer client 103 refers to an end that receives and views a live video, and is typically a client employed by a viewer viewing a video in a live network (i.e., a live viewer user).
The hardware at which the anchor client 101 and viewer client 103 are directed is essentially a computer device, and in particular, as shown in fig. 1, it may be a type of computer device such as a smart phone, smart interactive tablet, and personal computer. Both the anchor client 101 and the viewer client 103 may access the internet via known network access means to establish a data communication link with the server 102.
Server 102, acting as a business server, may be responsible for further connecting with related audio data servers, video streaming servers, and other servers providing related support, etc., to form a logically associated server cluster for serving related terminal devices, such as anchor client 101 and viewer client 103 shown in fig. 1.
In the embodiments of the present application, the anchor client 101 and the viewer client 103 may join the same live room (i.e., a live channel). A live room is a chat room implemented by means of Internet technology and generally has audio/video broadcast control functions. The anchor broadcasts in the live room through the anchor client 101, and viewers using the viewer client 103 can log in to the server 102 and enter the live room to watch the live broadcast.
In the live broadcast room, interaction between the anchor and the audience can be realized through known online interaction modes such as voice, video, characters and the like, generally, the anchor performs programs for audience users in the form of audio and video streams, and economic transaction behaviors can also be generated in the interaction process. Of course, the application form of the live broadcast room is not limited to online entertainment, and can also be popularized to other relevant scenes, such as a video conference scene, a product recommendation sale scene and any other scenes needing similar interaction.
Specifically, the process of the viewer watching the live broadcast is as follows: a viewer may click on a live application installed on the viewer client 103 and choose to enter any one of the live rooms, triggering the viewer client 103 to load a live room interface for the viewer, the live room interface including a number of interactive components, for example: the video window, the virtual gift column, the public screen and the like can enable audiences to watch live broadcast in the live broadcast room by loading the interactive components, and perform various online interactions, wherein the online interaction modes comprise but are not limited to giving virtual gifts, speaking on the public screen and the like.
The anchor client 101 collects live audio/video stream data and sends the live room identifier and the live audio/video stream data to the server 102. The server 102 delivers the live audio/video stream data to the viewer clients 103 in the live room corresponding to the live room identifier, and the viewer clients 103 in that live room output the live audio/video stream data so that viewers can watch the live broadcast through the viewer client 103. Because a viewer may watch the live broadcast in a very dimly lit environment, a bright live picture strongly stimulates the viewer's eyes, which is not conducive to improving viewer retention and viewing duration, and also degrades the webcast viewing experience. On this basis, the embodiments of the present application provide a live broadcast picture adjustment method. Referring to fig. 2, fig. 2 is a schematic flow chart of a live broadcast picture adjustment method according to a first embodiment of the present application, and the method includes the following steps:
S101: a viewer client obtains a first live picture and a target background picture in response to a live picture adjustment instruction; the live picture adjustment instruction is generated at least when it is determined that the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information corresponding to the background region in the first live picture satisfies a preset second live picture adjustment condition; the first live picture is obtained by parsing the live video stream sent by the server.
S102: the viewer client adjusts the background region in the first live picture according to the target background picture to obtain a second live picture, and outputs the second live picture to the live room interface.
In this embodiment, the live broadcast picture adjustment method is described with two execution subjects, i.e., the client and the server, where the client includes the anchor client and the viewer client.
Regarding step S101, the viewer client acquires the first live picture and the target background picture in response to the live picture adjustment instruction.
The first live picture is obtained by parsing the live video stream sent by the server. When a viewer client joins a live room, the server acquires the live audio/video stream corresponding to the live room identifier (i.e., the channel identifier) and sends it to that viewer client.
The live audio/video stream comprises a live video stream and a live audio stream. It can be understood that, in a webcast scenario, the live audio/video stream is collected by the anchor client and sent to the server, where the anchor corresponding to the anchor client is the one who established the live room corresponding to the live room identifier.
The target background picture is the background picture to be used for adjusting the background region of the first live picture. How the target background picture is acquired will be explained later.
The following describes conditions under which a live view adjustment command is generated.
The live picture adjustment instruction is generated at least when it is determined that the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies a preset first live picture adjustment condition and that the picture-brightness-related information corresponding to the background region in the first live picture satisfies a preset second live picture adjustment condition.
The above-mentioned determination operation may be performed by the server or the viewer client.
If the determination is performed by the server, the viewer client acquires the ambient brightness information corresponding to the live viewing environment of the current viewer and sends at least this information to the server. The server then, on the one hand, determines whether the ambient brightness information satisfies the preset first live picture adjustment condition and, on the other hand, acquires the picture-brightness-related information corresponding to the background region in the first live picture and determines whether it satisfies the preset second live picture adjustment condition. If both conditions are satisfied, the server generates a live picture adjustment instruction and sends it to the viewer client.
If the determination is performed by the viewer client, the viewer client acquires the ambient brightness information corresponding to the live viewing environment of the current viewer and directly determines whether it satisfies the preset first live picture adjustment condition; it also acquires the picture-brightness-related information corresponding to the background region in the first live picture and determines whether it satisfies the preset second live picture adjustment condition. If both conditions are satisfied, the viewer client generates the live picture adjustment instruction.
In an optional embodiment, the ambient brightness information corresponding to the live viewing environment where the current viewer is located may be determined based on a video picture acquired by a camera, where the camera may be a camera carried by the viewer client or an external camera of the viewer client. If the brightness of the video image collected by the camera is higher, it means that the ambient brightness information is higher.
In another alternative embodiment, the ambient brightness information corresponding to the live viewing environment in which the current viewer is located may be determined based on brightness data collected by a brightness sensor, where the brightness sensor may be a brightness sensor carried by the viewer client or a brightness sensor externally connected to the viewer client.
The preset first live picture adjustment condition is used to determine whether the current ambient brightness indicated by the ambient brightness information is dim. The preset second live picture adjustment condition is used to determine whether the background region indicated by the picture-brightness-related information corresponding to the background region in the first live picture is relatively bright.
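For illustration, here is a minimal sketch, not taken from the patent text, of how the viewer client might estimate ambient brightness from a single camera frame and test it against a preset first brightness threshold; the threshold value, the function names, and the use of OpenCV are all assumptions.

```python
# Hedged sketch: estimate ambient brightness from one viewer-side camera frame
# and compare it with a preset threshold. Values and names are illustrative.
import cv2
import numpy as np

PRESET_FIRST_BRIGHTNESS_THRESHOLD = 60  # hypothetical value on a 0-255 scale


def ambient_brightness_from_camera(device_index: int = 0) -> float:
    """Grab one frame from the viewer's camera and return its mean luminance."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray))


def viewing_environment_is_dim(ambient_brightness: float) -> bool:
    """Preset first live picture adjustment condition: ambient brightness is below the threshold."""
    return ambient_brightness < PRESET_FIRST_BRIGHTNESS_THRESHOLD
```

A brightness-sensor reading, as in the other alternative embodiment, could be substituted for the camera-based estimate; only the comparison against the preset threshold would stay the same.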
The following is a description of when the live view adjustment command is generated from the perspective of the viewer client.
In an alternative embodiment, referring to fig. 3, before the viewer client responds to the live-screen adjustment instruction, S101 includes the steps of:
s103: and the audience client acquires the environment brightness information corresponding to the live watching environment.
S104: and if the environment brightness information corresponding to the live broadcast watching environment is lower than a preset first brightness threshold value, the audience client analyzes the live broadcast video stream to obtain a first live broadcast picture.
S105: a spectator client divides a background area from a first direct-playing picture, and acquires first picture brightness related information corresponding to the background area and second picture brightness related information corresponding to the background area; the first image brightness related information is an average value of brightness information of all pixel points in the background area, the second image brightness related information is a proportion of a target background pixel point in the background area, and the target background pixel point is a background pixel point of which the brightness information exceeds a preset first brightness threshold value in the background area.
S106: and if the first picture brightness related information exceeds a preset first brightness related threshold and the second picture brightness related information exceeds a preset second brightness related threshold, the audience client generates a live broadcast picture adjusting instruction.
In step S103, the viewer client acquires ambient brightness information corresponding to the live viewing environment. Reference may be made specifically to the foregoing description.
In step S104, if the ambient brightness information corresponding to the live viewing environment is lower than the preset first brightness threshold, the viewer client parses the live video stream to obtain a first live frame. That is to say, if the ambient brightness information corresponding to the live viewing environment is lower than the preset first brightness threshold, which indicates that the current ambient brightness is dim, the viewer client parses the live video stream to obtain a first live frame.
In step S105, the viewer client segments the background region from the first live picture, and obtains the first picture-brightness-related information and the second picture-brightness-related information corresponding to the background region.
The viewer client may use an existing human torso segmentation network or an OpenCV human contour segmentation algorithm to separate the background region from the human torso region, which is not limited in detail herein.
The first image brightness related information is an average value of brightness information of all pixel points in the background area, the second image brightness related information is a proportion of a target background pixel point in the background area, and the target background pixel point is a background pixel point of which the brightness information exceeds a preset first brightness threshold value in the background area.
If the first picture-brightness-related information exceeds a preset first brightness-related threshold and the second picture-brightness-related information exceeds a preset second brightness-related threshold, indicating that the background region is relatively bright, the viewer client generates the live picture adjustment instruction.
It should be noted that, to obtain the brightness information of each pixel in the background region, the first live picture may first be converted to the HSV color space, and the brightness information of each pixel can then be taken from its value (V) channel.
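As an illustration of steps S105 and S106, the following sketch computes the two brightness statistics in HSV space for an already segmented background region and checks them against the two preset thresholds; the segmentation itself is assumed to be done elsewhere, and all threshold values and names are illustrative assumptions rather than values from the patent.

```python
# Hedged sketch of S105/S106: brightness statistics of the background region.
import cv2
import numpy as np

PER_PIXEL_BRIGHTNESS_THRESHOLD = 60                 # hypothetical per-pixel V threshold (0-255)
PRESET_FIRST_BRIGHTNESS_RELATED_THRESHOLD = 170     # hypothetical mean-V threshold
PRESET_SECOND_BRIGHTNESS_RELATED_THRESHOLD = 0.6    # hypothetical bright-pixel ratio


def background_brightness_info(first_live_frame_bgr, background_mask):
    """Return (mean V of background pixels, ratio of background pixels whose V exceeds the per-pixel threshold).
    background_mask is a uint8 array that is non-zero where a pixel belongs to the background region."""
    hsv = cv2.cvtColor(first_live_frame_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2][background_mask > 0]   # brightness (V channel) of background pixels only
    first_info = float(v.mean())
    second_info = float((v > PER_PIXEL_BRIGHTNESS_THRESHOLD).mean())
    return first_info, second_info


def background_is_bright(first_info, second_info):
    """S106: both statistics must exceed their preset brightness-related thresholds."""
    return (first_info > PRESET_FIRST_BRIGHTNESS_RELATED_THRESHOLD
            and second_info > PRESET_SECOND_BRIGHTNESS_RELATED_THRESHOLD)
```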
In an alternative embodiment, the S103 obtaining, by the viewer client, ambient brightness information corresponding to a live viewing environment includes:
s1031: and the audience client acquires video pictures through the camera and acquires the ambient brightness information corresponding to the live watching environment.
S104: if the environment brightness information corresponding to the live broadcast watching environment is lower than a preset first brightness threshold value, the audience client analyzes the live broadcast video stream to obtain a first live broadcast picture, and the method comprises the following steps:
s1041: if the human face image is displayed in the video picture and the environment brightness information corresponding to the live watching environment is lower than a preset first brightness threshold value, the audience client analyzes the live video stream to obtain a first live picture.
The viewer client determines whether a face image appears in the captured video picture using a preset face detection algorithm; the live video stream is parsed to obtain the first live picture only when a face image appears in the video picture and the ambient brightness information corresponding to the live viewing environment is lower than the preset first brightness threshold.
In this embodiment, face detection is used to determine whether the viewer is actually watching the live broadcast before performing the corresponding picture adjustment, which reduces the device load, increases processing speed, and improves the viewer's live viewing experience.
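The patent does not specify a particular face detection algorithm; as a stand-in, the gate in S1041 could be sketched with OpenCV's bundled Haar cascade as follows, where the function name and parameters are assumptions.

```python
# Hedged sketch of the face-detection gate in S1041 (Haar cascade as a stand-in).
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def viewer_face_present(camera_frame_bgr) -> bool:
    """True if at least one face is detected in the viewer-side camera frame."""
    gray = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# The live video stream is parsed into the first live picture only when a face
# is present AND the ambient brightness is below the preset first threshold.
```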
In an alternative implementation, referring to fig. 4, in S101, the method for generating a live frame adjustment instruction by a viewer client includes the steps of:
S1011: the viewer client acquires first confirmation control data, and displays a first confirmation control in the live room interface according to the first confirmation control data; at least live picture adjustment confirmation information is displayed in the first confirmation control.
S1012: the viewer client generates a live screen adjustment instruction in response to a first trigger instruction to the first confirmation control.
The first confirmation control data includes display data of the first confirmation control and function data of the first confirmation control. The display data of the first confirmation control is used for determining the display style, the display position, the display size and the like of the first confirmation control. The function data of the first confirmation control is used for realizing the information display function, the trigger response function and the like of the first confirmation control.
Referring to fig. 5, fig. 5 is a schematic view illustrating a display of a first confirmation control in a live view interface according to an embodiment of the present application. As can be seen from fig. 5, the first confirmation control 51 is displayed in the live view interface, and live view adjustment confirmation information 52, a confirmation sub-control 53, and a cancellation sub-control 54 are displayed on the first confirmation control 51.
When the viewer clicks the confirmation sub-control 53, the viewer client is triggered to send a first trigger instruction to the first confirmation control, and then the viewer client generates a live broadcast frame adjustment instruction in response to the first trigger instruction to the first confirmation control.
When the viewer clicks the cancel sub-control 54, the viewer client is triggered to issue a second trigger command to the first confirmation control, and then the viewer client does not generate a live frame adjustment command and stops executing the process related to the live frame adjustment.
In this embodiment, the audience can autonomously select whether to adjust the live broadcast picture, so that the live broadcast experience of the audience is further improved.
How the viewer client acquires the target background picture is explained in detail below.
In an alternative embodiment, the target background screen may be a default configured background screen.
In another alternative embodiment, referring to fig. 6, the step of acquiring the first live view and the target background view in S101 includes:
s1013: the viewer client generates and sends a background picture pull request to the server.
S1014: the server responds to the background picture pulling request, obtains background picture list data and issues the background picture list data to the audience client.
S1015: the audience client receives the background picture list data and loads a background picture list according to the background picture list data; wherein, a plurality of background thumbnails corresponding to the background pictures are displayed in the background picture list.
S1016: and the audience client responds to the selected instruction of the target background thumbnail to acquire a target background picture.
Regarding step S1013, the background picture pull request includes at least the viewer client identifier, so that the server can confirm for which viewer client the background picture list data is being pulled. Moreover, different background picture list data may be issued for different viewer client identifiers.
With respect to step S1014, the background picture list data includes display data of the background picture list and function data of the background picture list. The display data of the background picture list is used to determine the display style, display position, display size, and the like of the background picture list. The function data of the background picture list is used for realizing the functions of displaying the background picture list, responding to a sliding instruction, responding to a selected instruction and the like.
In step S1015, several background thumbnails corresponding to the background screen are displayed in the background screen list, so that the viewer can visually see the rough style of the background screen available for adjustment.
In step S1016, the viewer may slide the background screen list, browse the background thumbnails corresponding to the plurality of background screens, select one of the background thumbnails as the target background thumbnail, generate a selection instruction for the target background thumbnail, and the viewer client obtains the target background screen in response to the selection instruction for the target background thumbnail.
Referring to fig. 7, fig. 7 is a schematic view illustrating a display of a background frame list in a live view interface according to an embodiment of the present disclosure. As can be seen, a plurality of background thumbnails 72 are displayed in the background picture list 71, and a viewer can browse the background thumbnails 72 by sliding left and right, or click the first page turning control 73 for browsing, and can select a target background thumbnail by clicking one of the background thumbnails 72, so that the viewer client can acquire the target background picture.
In this embodiment, the audience can autonomously browse and select the background picture which the audience wants to apply, so that the picture adjustment experience of the audience is further improved, and the retention rate and the watching duration of the audience in the live broadcast room are improved.
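As an illustration of the exchange in steps S1013 to S1016, the sketch below shows one possible shape of the pull request and of the background picture list response; the JSON field names, values, and URL are hypothetical and are not the patent's actual message format.

```python
# Hedged sketch of the background-picture pull exchange (field names are assumptions).
import json


def build_background_pull_request(viewer_client_id: str) -> str:
    """Viewer client -> server: pull the background picture list for this client."""
    return json.dumps({"type": "background_picture_pull",
                       "viewer_client_id": viewer_client_id})


def build_background_list_response(thumbnails: list) -> str:
    """Server -> viewer client: display data plus the available background thumbnails."""
    return json.dumps({
        "type": "background_picture_list",
        "display": {"style": "horizontal_strip", "position": "bottom"},  # display data
        "items": thumbnails,                                             # thumbnails to render
    })


request = build_background_pull_request("viewer_12345")
response = build_background_list_response(
    [{"background_id": "bg_01", "thumbnail_url": "https://example.com/bg_01_thumb.png"}])
```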
In an optional embodiment, after the live view is adjusted, the viewer may also trigger the viewer client to load the background view list according to the background view list data by long pressing the background area of the second live view output in the live view interface, so as to change the target background view.
However, the present invention is not limited to this, and may be a double-click trigger or the like instead of the long-press trigger.
In step S102, the viewer client adjusts the background region in the first live picture according to the target background picture to obtain the second live picture, and outputs the second live picture to the live room interface.
In an optional embodiment, the viewer client may segment the background region and the human body region in the first live picture, generate an image mask corresponding to the background region, apply the image mask to the first live picture to obtain a first filtered image, perform background filling on the first filtered image according to the target background picture to obtain the second live picture, and output the second live picture to the live room interface.
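The following is a hedged sketch of that optional embodiment: given a background mask (segmentation assumed done elsewhere), the first live picture is masked to obtain the first filtered image, and the masked region is then filled from the target background picture; function and variable names are illustrative.

```python
# Hedged sketch: mask out the background region, then fill it with the target background.
import cv2
import numpy as np


def replace_background(first_live_frame_bgr, background_mask, target_background_bgr):
    """background_mask: uint8 array, non-zero where a pixel belongs to the background region."""
    h, w = first_live_frame_bgr.shape[:2]
    target = cv2.resize(target_background_bgr, (w, h))
    mask3 = cv2.merge([background_mask] * 3) > 0

    # "first filtered image": the live frame with its background region blanked out
    first_filtered = np.where(mask3, 0, first_live_frame_bgr).astype(np.uint8)

    # background filling: copy the target background picture into the blanked region
    second_live_frame = np.where(mask3, target, first_filtered).astype(np.uint8)
    return second_live_frame
```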
In the embodiments of the present application, a live picture adjustment instruction is generated when the ambient brightness information corresponding to the live viewing environment of the current viewer satisfies the preset first live picture adjustment condition and the picture-brightness-related information corresponding to the background region in the first live picture satisfies the preset second live picture adjustment condition. The viewer client obtains the first live picture and a target background picture in response to the instruction, adjusts the background region in the first live picture using the target background picture to obtain a second live picture, and outputs the second live picture to the live room interface. By adjusting the background region of the live picture, the live viewing experience of viewers in a dim viewing environment is improved, as are their retention rate and viewing duration.
For live broadcast picture adjustment, in order to improve the live broadcast viewing experience of audiences, not only the background area in the live broadcast picture can be adjusted, but also the anchor image presented in the live broadcast picture, namely the human body trunk area, can be adjusted.
In an alternative implementation, referring to fig. 8, in S102, the viewer client adjusts a background area in the first live view according to the target background view to obtain a second live view, including the steps of:
S1021: the viewer client segments a human torso region from the first live picture, and segments a clothing region from the human torso region.
S1022: the audience client acquires average color information corresponding to the clothes area and color difference information between the average color information and preset color information.
S1023: and the audience client side adjusts the background area in the first live broadcast picture according to the target background picture, and adjusts the color information of each clothes pixel point in the clothes area in the first live broadcast picture according to the color difference information to obtain a second live broadcast picture.
The manner in which the viewer client segments the human torso region from the first live picture can refer to the foregoing description; the viewer client may then segment the clothing region from the human torso region according to human skin color and clothing color.
And then, the audience client acquires the average color information corresponding to the clothing region and the color difference information between the average color information and the preset color information.
The preset color information is a dark color; its specific value is not limited here. If the pixel values of the pixels in the clothing region were directly set to the preset color information, the live picture would look unnatural and affect the viewer's viewing.
Therefore, in this embodiment, the audience client obtains the average color information corresponding to the clothing region, that is, the average value of the color information of each pixel point in the clothing region, calculates the color difference information according to the average color information corresponding to the clothing region and the preset color information, and adjusts the color information of each clothing pixel point in the clothing region in the first live broadcast picture by using the color difference information to obtain the second live broadcast picture.
For example, if the average color information corresponding to the clothing region is A and the preset color information is B, the color difference information is |A-B|, and the color information of each clothing pixel in the clothing region of the first live picture is adjusted according to this difference, i.e., |A-B| is subtracted from the color information C of each clothing pixel. It should be noted that if the RGB color space is used, the color information has three channel values, so adjusting the color information means adjusting all three channel values; since the minimum channel value is 0, the result of subtracting |A-B| from the color information C of each clothing pixel cannot be less than 0.
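A minimal sketch of the worked example above follows: every clothing pixel is shifted by the per-channel difference |A-B| between the clothing region's average color A and the preset dark color B, with the result clamped at zero; the preset color value used here is an assumption.

```python
# Hedged sketch of the clothing-color adjustment (preset dark color is illustrative).
import numpy as np


def darken_clothing(frame_bgr, clothing_mask, preset_color_bgr=(20, 20, 20)):
    """clothing_mask: boolean array, True where a pixel belongs to the clothing region."""
    frame = frame_bgr.astype(np.int32)
    avg_color = frame[clothing_mask].mean(axis=0)               # A, one value per channel
    diff = np.abs(avg_color - np.asarray(preset_color_bgr))     # |A - B|
    adjusted = np.clip(frame[clothing_mask] - diff, 0, 255)     # C - |A - B|, floored at 0
    frame[clothing_mask] = adjusted
    return frame.astype(np.uint8)
```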
In this embodiment, by adjusting the color of the anchor's clothing, the viewer client further reduces the stimulation of bright pixels to the viewer's eyes in a dim viewing environment and improves the live viewing experience. Moreover, because the color information of each clothing pixel in the clothing region of the first live picture is adjusted according to the color difference information, the adjusted clothing looks natural and is unlikely to be noticed by the viewer.
In an optional embodiment, after the step S102 of outputting the second live screen to the live-air interface, the method includes the steps of:
s107: responding to a clothes replacement instruction by the audience client, inputting a second live broadcast picture into a pre-trained target virtual clothes changing model to obtain a third live broadcast picture, and outputting the third live broadcast picture to a live broadcast room interface; the pre-trained target virtual clothes changing model is used for replacing real clothes presented in the second live broadcast picture with target virtual clothes.
In this embodiment, the real clothes presented in the second live view can be replaced by the target virtual clothes through the pre-trained target virtual clothes changing model.
The pre-trained target virtual dressing change model can be any one of the existing deep learning neural network models, and is not limited in detail herein.
The following describes when a laundry replacement command is generated and how to determine a target virtual laundry. In an alternative embodiment, referring to fig. 9, before the response of the clothes replacement command to the viewer client in S107, the method includes the steps of:
s108: the audience client side responds to a trigger instruction of the anchor clothing area to acquire a plurality of pre-trained virtual clothing changing models and virtual clothing list data.
S109: the audience client loads a virtual clothes list according to the virtual clothes list data; the virtual clothes list is displayed with a plurality of virtual clothes thumbnails, and each virtual clothes thumbnail corresponds to a pre-trained virtual clothes changing model.
S110: and the audience client side responds to the selected instruction of the target virtual clothes thumbnail, determines a pre-trained target virtual clothes changing model, and generates and sends a clothes replacing instruction.
Regarding step S108, the viewer may trigger the anchor clothing region of the second live picture output in the live room interface, for example by long-pressing it, so that the viewer client generates a trigger instruction for the anchor clothing region. The viewer client then acquires a plurality of pre-trained virtual clothes-changing models and the virtual clothing list data in response to the trigger instruction for the anchor clothing region.
If the audience client side responds to the trigger instruction of the anchor clothing area for the first time, the audience client side needs to pull a plurality of pre-trained virtual clothing changing models and virtual clothing list data from the server.
Regarding step S109, the virtual laundry list data includes display data of the virtual laundry list and function data of the virtual laundry list. The display data of the virtual clothes list is used for determining the display style, the display position, the display size and the like of the virtual clothes list. The function data of the virtual clothes list is used for realizing the functions of displaying the virtual clothes list, responding to a sliding instruction, responding to a selected instruction and the like.
A plurality of virtual clothes thumbnails are displayed in the virtual clothes list, and each virtual clothes thumbnail corresponds to a pre-trained virtual clothes changing model.
That is, if the user selects a different virtual clothing thumbnail, the real clothes presented in the second live picture will be replaced using a different pre-trained virtual clothes-changing model.
Regarding step S110, the viewer may slide the virtual clothes list, browse the virtual clothes thumbnails, and select one of them as the target virtual clothes thumbnail, which generates a selection instruction for the target virtual clothes thumbnail; the audience client, in response to the selection instruction for the target virtual clothes thumbnail, determines the pre-trained target virtual clothes changing model, and generates and sends a clothes replacement instruction.
Referring to fig. 10, fig. 10 is a schematic view illustrating the display of a virtual clothes list in a live broadcast room interface according to an embodiment of the present application. As can be seen, a plurality of virtual clothes thumbnails 102 are displayed in the virtual clothes list 101. A viewer can browse the virtual clothes thumbnails 102 by sliding left and right, or by clicking the second page turning control 103, and can select a target virtual clothes thumbnail by clicking a certain virtual clothes thumbnail 102, so that the audience client determines the pre-trained target virtual clothes changing model and generates and sends a clothes replacement instruction. In fig. 10, the virtual clothes thumbnails 102 do not show the clothes color, but only the clothes outline.
In an alternative embodiment, the virtual clothes changing models are trained by the server. Specifically, the server acquires a plurality of first training images, human body trunk data corresponding to the first training images and a plurality of virtual clothes images; then, for each virtual clothes image, the server trains one virtual clothes changing model according to the plurality of first training images, the human body trunk data corresponding to the first training images, that virtual clothes image, a preset optimization algorithm and a preset loss function, so as to obtain a plurality of pre-trained virtual clothes changing models.
The human body trunk data corresponding to the first training image at least comprises position information of key points of the human body trunk, and the position information can be obtained by adopting the existing human body trunk recognition algorithm.
Based on the human body trunk data corresponding to the first training image, the virtual clothes changing model can perform various kinds of processing on the virtual clothes image, for example enlarging, reducing or rotating it and adjusting the display position of the processed virtual clothes image, so as to achieve a more realistic clothes replacement effect.
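As a non-limiting sketch of how torso key points could drive such processing, the snippet below warps a virtual clothes image with an affine transform derived from three matching key points; the choice of key points and the helper itself are illustrative assumptions, not the training or rendering procedure of this application.

```python
import cv2
import numpy as np

def warp_clothes_to_torso(clothes_img: np.ndarray,
                          clothes_pts: np.ndarray,
                          torso_pts: np.ndarray,
                          frame_size: tuple) -> np.ndarray:
    """clothes_pts: 3x2 anchor points on the virtual clothes image (e.g. shoulders
    and waist centre); torso_pts: the matching 3x2 torso key points detected in the
    frame; frame_size: (height, width). Returns the clothes image scaled, rotated
    and translated so its anchor points land on the torso key points."""
    m = cv2.getAffineTransform(clothes_pts.astype(np.float32),
                               torso_pts.astype(np.float32))
    h, w = frame_size
    return cv2.warpAffine(clothes_img, m, (w, h))  # dsize is (width, height)
```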
In this embodiment, one virtual clothes changing model is trained for each different virtual clothes image, so that a plurality of pre-trained virtual clothes changing models are obtained; a viewer can subsequently select one of the virtual clothes changing models to perform clothes replacement, which improves the viewer's interactive experience.
In the following, how to obtain a plurality of virtual clothes images is described first, and then a specific network training process is described.
In an optional embodiment, the step of acquiring, by the server, the plurality of first training images, the human torso data corresponding to the first training images, and the plurality of virtual clothes images includes:
Firstly, the server acquires a current anchor identification and a plurality of live broadcast pictures corresponding to the current anchor identification, wherein the real clothes worn by the current anchor are different in each live broadcast picture. That is, the server may acquire live broadcast pictures of the current anchor from different webcast sessions in which the current anchor wears different real clothes.
Then, the server acquires third picture brightness related information corresponding to the clothes area in each live broadcast picture; the third picture brightness related information is the average value of the brightness information of all clothes pixel points in the clothes area.
Next, the server acquires a plurality of target live broadcast pictures according to the third picture brightness related information corresponding to the clothes areas in the live broadcast pictures and a preset third brightness related threshold. Specifically, the server takes, as a target live broadcast picture, each live broadcast picture whose third picture brightness related information corresponding to the clothes area is lower than the preset third brightness related threshold.
Finally, the server obtains a plurality of virtual clothes images from the plurality of target live broadcast pictures. Optionally, the server may search a shopping website according to the target live broadcast picture to obtain the corresponding virtual clothes image, or the server may segment the clothes area from the target live broadcast picture to obtain the corresponding virtual clothes image.
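A minimal sketch of the brightness-based selection described above is given below, assuming each live broadcast picture comes with a clothes-area mask; the luma weights and the threshold value are common conventions used only for illustration.

```python
import numpy as np

def select_target_frames(frames, clothes_masks, third_threshold=80.0):
    """frames: list of HxWx3 uint8 BGR images; clothes_masks: list of HxW bool
    masks marking the clothes area. Returns the frames whose average clothes
    luminance is below the preset third brightness related threshold."""
    targets = []
    for frame, mask in zip(frames, clothes_masks):
        # ITU-R BT.601 luma from BGR channels (a common convention, assumed here)
        luma = 0.114 * frame[..., 0] + 0.587 * frame[..., 1] + 0.299 * frame[..., 2]
        avg_clothes_luma = float(luma[mask].mean())
        if avg_clothes_luma < third_threshold:  # darker clothes -> candidate target picture
            targets.append(frame)
    return targets
```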
In an optional embodiment, the virtual clothes changing model and the clothing authentication model together form an adversarial neural network model. In this case, the step of the server training one virtual clothes changing model respectively according to the plurality of first training images, the human body trunk data corresponding to the first training images, each virtual clothes image, a preset optimization algorithm and a preset loss function, to obtain a plurality of pre-trained virtual clothes changing models, includes the steps of:
the server inputs the first training image, the human body trunk data corresponding to the first training image and the virtual clothes image into the virtual clothes changing model to obtain a plurality of second training images.
The real clothes are presented in the first training images, while in the second training images the real clothes have been replaced with the virtual clothes. At this stage, the virtual clothes changing model is a randomly initialized model.
Then, with the labels of the first training images set to true and the labels of the second training images set to false, the server iteratively trains the clothing authentication model according to the plurality of first training images, the plurality of second training images, a preset first loss function and a preset first optimization algorithm, optimizing the trainable parameters of the clothing authentication model until the value of the first loss function meets a preset first training termination condition, so as to obtain the currently trained clothing authentication model.
Next, the server modifies the labels of the second training images to true and inputs the second training images into the currently trained clothing authentication model to obtain the authentication results of the second training images. If the authentication results of the second training images meet a preset second training termination condition, the pre-trained virtual clothes changing model and the pre-trained clothing authentication model are obtained; if not, the server obtains the value of a second loss function according to the authentication results of the second training images, the labels of the second training images and the preset second loss function, and optimizes the trainable parameters of the virtual clothes changing model according to the value of the second loss function and a preset second optimization algorithm, so as to obtain the currently trained virtual clothes changing model.
In an adversarial neural network model, when the probability of the second training image being judged as true is about 0.5, the virtual clothes changing model and the clothing authentication model have reached a good adversarial training balance. Therefore, the preset second training termination condition is an interval around 0.5; when the authentication result of the second training image falls within this interval, the authentication result satisfies the preset second training termination condition.
If the authentication result of the second training image is biased toward 0, the clothing authentication model considers the probability that the second training image is real to be close to 0, which means that the second training image generated by the virtual clothes changing model is easily recognized as fake by viewers and the clothes changing effect of the virtual clothes changing model is poor. Because the label of the second training image has been modified to true, i.e. 1, the value of the second loss function obtained from the authentication result, the label and the preset second loss function is large, and the trainable parameters of the virtual clothes changing model are therefore adjusted substantially based on this value and the preset second optimization algorithm to obtain the currently trained virtual clothes changing model.
If the authentication result of the second training image is biased toward 1, the clothing authentication model considers the probability that the second training image is real to be close to 1, which means that the authentication capability of the clothing authentication model is poor, since a fake second training image is judged as real; the clothing authentication model therefore needs to be trained further.
Finally, the server again inputs the plurality of first training images, the human body trunk data corresponding to the first training images and the virtual clothes image into the currently trained virtual clothes changing model to obtain new second training images, and repeatedly executes the steps of iteratively training the clothing authentication model and optimizing the trainable parameters of the virtual clothes changing model, until the authentication result of the second training images meets the preset second training termination condition, thereby obtaining the pre-trained virtual clothes changing model and the pre-trained clothing authentication model.
The first loss function, the second loss function, the first optimization algorithm and the second optimization algorithm are not limited herein, and may be any of the existing loss functions and neural network optimization algorithms.
In this embodiment, the virtual clothes changing model and the clothing authentication model form an adversarial neural network model and are trained jointly, so that the virtual clothes look more convincing and are more easily taken by viewers as real clothes worn by the anchor, which further improves the viewers' live viewing experience.
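For illustration only, the following compressed sketch shows one way the adversarial training described above could be written, using binary cross-entropy as the (otherwise unrestricted) first and second loss functions and Adam as the first and second optimization algorithms; the model interfaces, the data loader layout and the termination interval around 0.5 are assumptions, and the loop alternates per batch rather than reproducing the exact two-phase schedule of the embodiment.

```python
import torch
import torch.nn as nn

def train_adversarial(dressing_model, authentication_model, loader,
                      epochs=10, lower=0.45, upper=0.55, device="cpu"):
    """Alternating adversarial training; authentication_model is assumed to end
    with a sigmoid and return one probability in [0, 1] per image."""
    bce = nn.BCELoss()                                                   # example first/second loss
    opt_d = torch.optim.Adam(authentication_model.parameters(), lr=2e-4) # example first optimizer
    opt_g = torch.optim.Adam(dressing_model.parameters(), lr=2e-4)       # example second optimizer

    for _ in range(epochs):
        for first_imgs, torso_data, clothes_img in loader:               # assumed loader layout
            first_imgs = first_imgs.to(device)
            # produce second training images (real clothes replaced with the virtual clothes)
            second_imgs = dressing_model(first_imgs, torso_data, clothes_img)

            # --- train the clothing authentication model (first images true, second images false) ---
            real = torch.ones(first_imgs.size(0), 1, device=device)
            fake = torch.zeros(first_imgs.size(0), 1, device=device)
            d_loss = (bce(authentication_model(first_imgs), real)
                      + bce(authentication_model(second_imgs.detach()), fake))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # --- train the virtual clothes changing model (label of second images flipped to true) ---
            score = authentication_model(second_imgs)
            g_loss = bce(score, real)
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()

            # termination: authentication output for generated images sits in an interval around 0.5
            if lower < score.mean().item() < upper:
                return dressing_model, authentication_model
    return dressing_model, authentication_model
```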
Referring to fig. 11, fig. 11 is a schematic structural diagram of a live view adjustment system according to a second embodiment of the present application, where the system 11 includes: a server 111 and a viewer client 112;
the viewer client 112 is configured to acquire a first live broadcast picture and a target background picture in response to a live broadcast picture adjusting instruction; the live broadcast picture adjusting instruction is generated when at least judging that the environment brightness information corresponding to the live broadcast watching environment where the current audience is located meets a preset first live broadcast picture adjusting condition and the picture brightness related information corresponding to the background area in the first live broadcast picture meets a preset second live broadcast picture adjusting condition; the first live broadcast picture is obtained by analyzing a live broadcast video stream sent by the server 111;
the viewer client 112 is configured to adjust the background area in the first live broadcast picture according to the target background picture to obtain a second live broadcast picture, and output the second live broadcast picture to a live broadcast room interface.
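As a non-limiting sketch, the core background adjustment performed by the viewer client 112 could be written as a masked composite, assuming a person/background mask of the first live broadcast picture is available from the client's segmentation step and the target background picture has already been resized to the frame size.

```python
import numpy as np

def adjust_background(first_frame: np.ndarray,
                      target_background: np.ndarray,
                      person_mask: np.ndarray) -> np.ndarray:
    """first_frame and target_background: HxWx3 uint8 images of equal size;
    person_mask: HxW boolean array, True where the anchor (foreground) is.
    Returns the second live broadcast picture with the background replaced."""
    # keep foreground pixels from the first frame, take everything else from the target background
    second_frame = np.where(person_mask[..., None], first_frame, target_background)
    return second_frame.astype(np.uint8)
```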
The live broadcast picture adjusting system and the live broadcast picture adjusting method provided by the above embodiments belong to the same concept, and the detailed implementation process is shown in the method embodiments and will not be described herein.
Please refer to fig. 12, which is a schematic structural diagram of a live view adjustment apparatus according to a third embodiment of the present application. The apparatus may be implemented as all or part of a computer device in software, hardware, or a combination of both. The apparatus 12 comprises:
a first obtaining unit 121, configured to acquire, by the audience client, a first live broadcast picture and a target background picture in response to a live broadcast picture adjusting instruction; the live broadcast picture adjusting instruction is generated when at least judging that the environment brightness information corresponding to the live broadcast watching environment where the current audience is located meets a preset first live broadcast picture adjusting condition and the picture brightness related information corresponding to the background area in the first live broadcast picture meets a preset second live broadcast picture adjusting condition; the first live broadcast picture is obtained by analyzing a live video stream sent by a server;
a first adjusting unit 122, configured to adjust the background area in the first live broadcast picture according to the target background picture by the viewer client, so as to obtain a second live broadcast picture, and output the second live broadcast picture to a live broadcast room interface.
It should be noted that, when the live view adjustment apparatus provided in the foregoing embodiment executes the live view adjustment method, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules as needed, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the live view adjusting apparatus and the live view adjusting method provided in the above embodiments belong to the same concept, and detailed implementation processes thereof are shown in the method embodiments and are not described herein again.
Fig. 13 is a schematic structural diagram of a computer device according to a fourth embodiment of the present application. As shown in fig. 13, the computer device 13 may include: a processor 130, a memory 131, and a computer program 132, such as a live broadcast picture adjustment program, stored in the memory 131 and executable on the processor 130; when the processor 130 executes the computer program 132, the steps in the first embodiment described above are implemented.
The processor 130 may include one or more processing cores. The processor 130 is connected to various parts in the computer device 13 by various interfaces and lines, and executes various functions of the computer device 13 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 131 and calling data in the memory 131. Optionally, the processor 130 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) or Programmable Logic Array (PLA). The processor 130 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like, wherein the CPU mainly handles the operating system, the user interface, application programs and the like, the GPU is used for rendering and drawing the content to be displayed by the touch display screen, and the modem is used for handling wireless communication. It can be understood that the modem may also not be integrated into the processor 130 and may be implemented by a separate chip.
The memory 131 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 131 includes a non-transitory computer-readable medium. The memory 131 may be used to store instructions, programs, code sets or instruction sets. The memory 131 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 131 may also be at least one storage device located remotely from the processor 130.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps of the foregoing embodiment, and a specific execution process may refer to specific descriptions of the foregoing embodiment, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; for instance, the division of the modules or units is merely a logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the steps of the above-described embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (13)

1. A live broadcast picture adjustment method is characterized by comprising the following steps:
an audience client responds to a live broadcast picture adjusting instruction to obtain a first live broadcast picture and a target background picture; the live broadcast picture adjusting instruction is generated when at least judging that the environment brightness information corresponding to the live broadcast watching environment where the current audience is located meets a preset first live broadcast picture adjusting condition and the picture brightness related information corresponding to the background area in the first live broadcast picture meets a preset second live broadcast picture adjusting condition; the first live broadcast picture is obtained by analyzing a live broadcast video stream sent by a server;
and the audience client side adjusts the background area in the first live broadcast picture according to the target background picture to obtain a second live broadcast picture, and the second live broadcast picture is output to a live broadcast room interface.
2. The live broadcast picture adjustment method according to claim 1, wherein before the audience client responds to the live broadcast picture adjusting instruction, the method comprises the steps of:
the audience client acquires environment brightness information corresponding to the live broadcast watching environment;
if the environment brightness information corresponding to the live viewing environment is lower than a preset first brightness threshold, the audience client analyzes the live video stream to obtain the first live broadcast picture;
the audience client divides the background area from the first live broadcast picture, and acquires first picture brightness related information corresponding to the background area and second picture brightness related information corresponding to the background area; the first picture brightness related information is an average value of the brightness information of all pixel points in the background area, the second picture brightness related information is a proportion of target background pixel points in the background area, and a target background pixel point is a background pixel point in the background area whose brightness information exceeds the preset first brightness threshold;
and if the first picture brightness related information exceeds a preset first brightness related threshold and the second picture brightness related information exceeds a preset second brightness related threshold, the audience client generates the live broadcast picture adjusting instruction.
3. The live broadcast picture adjustment method according to claim 2, wherein the step of the audience client acquiring the environment brightness information corresponding to the live viewing environment comprises:
the audience client acquires a video picture through a camera and acquires environment brightness information corresponding to the live viewing environment;
and the step of, if the environment brightness information corresponding to the live viewing environment is lower than a preset first brightness threshold, the audience client analyzing the live video stream to obtain the first live broadcast picture comprises:
if a human face image is displayed in the video picture and the environment brightness information corresponding to the live viewing environment is lower than the preset first brightness threshold, the audience client analyzes the live video stream to obtain the first live broadcast picture.
4. The live broadcast picture adjustment method according to any one of claims 1 to 3, wherein the step of the audience client adjusting the background area in the first live broadcast picture according to the target background picture to obtain a second live broadcast picture comprises the steps of:
the audience client divides a human body trunk area from the first live broadcast picture, and divides a clothing area from the human body trunk area;
the audience client acquires average color information corresponding to the clothing area and color difference information between the average color information and preset color information;
and the audience client side adjusts the background area in the first live broadcast picture according to the target background picture, and adjusts the color information of each clothes pixel point in the clothes area in the first live broadcast picture according to the color difference information to obtain the second live broadcast picture.
5. The live broadcast picture adjustment method according to any one of claims 1 to 3, wherein after the step of outputting the second live broadcast picture to a live broadcast room interface, the method comprises the steps of:
the audience client responds to a clothes replacement instruction, the second live broadcast picture is input to a pre-trained target virtual clothes changing model to obtain a third live broadcast picture, and the third live broadcast picture is output to a live broadcast room interface; and the pre-trained target virtual clothes changing model is used for replacing real clothes presented in the second live broadcast picture with target virtual clothes.
6. The live broadcast picture adjustment method according to claim 5, wherein before the audience client responds to the clothes replacement instruction, the method comprises the steps of:
the audience client side responds to a trigger instruction of the anchor clothing area and acquires a plurality of pre-trained virtual clothing changing models and virtual clothing list data;
the audience client loads a virtual clothes list according to the virtual clothes list data; a plurality of virtual clothes thumbnails are displayed in the virtual clothes list, and each virtual clothes thumbnail corresponds to one pre-trained virtual clothes changing model;
and the audience client side responds to the selected instruction of the target virtual clothes thumbnail, determines the pre-trained target virtual clothes changing model, and generates and sends out the clothes replacing instruction.
7. The live broadcast picture adjustment method according to claim 5, wherein before the audience client responds to the clothes replacement instruction, the method further comprises the steps of:
the server acquires a plurality of first training images, human body trunk data corresponding to the first training images and a plurality of virtual clothes images;
and the server trains one virtual clothes changing model respectively according to the plurality of first training images, the human body trunk data corresponding to the first training images, each virtual clothes image, a preset optimization algorithm and a preset loss function, to obtain a plurality of pre-trained virtual clothes changing models.
8. The live broadcast picture adjustment method according to claim 7, wherein the virtual clothes changing model and the clothing authentication model together form an adversarial neural network model, and the step of the server training one virtual clothes changing model respectively according to the plurality of first training images, the human body trunk data corresponding to the first training images, each virtual clothes image, a preset optimization algorithm and a preset loss function, to obtain a plurality of pre-trained virtual clothes changing models, comprises the steps of:
the server inputs the first training image, the human body trunk data corresponding to the first training image and the virtual clothes image into the virtual clothes changing model to obtain a plurality of second training images;
the server iteratively trains the clothing authentication model according to the plurality of first training images, the plurality of second training images, a preset first loss function and a preset first optimization algorithm, and optimizes trainable parameters of the clothing authentication model until the value of the first loss function meets a preset first training termination condition, so that a currently trained clothing authentication model is obtained;
the server modifies the label of the second training image to true, inputs the second training image into the currently trained clothing authentication model, and obtains an authentication result of the second training image;
if the authentication result of the second training image meets a preset second training termination condition, the server obtains the pre-trained virtual clothes changing model and the pre-trained clothing authentication model;
if the authentication result of the second training image does not meet the preset second training termination condition, the server obtains a value of a second loss function according to the authentication result of the second training image, the label of the second training image and a preset second loss function, and optimizes trainable parameters of the virtual clothes changing model according to the value of the second loss function and a preset second optimization algorithm, to obtain a currently trained virtual clothes changing model;
and the server inputs the plurality of first training images, the human body trunk data corresponding to the first training images and the virtual clothes image into the currently trained virtual clothes changing model again, acquires the second training image again, and repeatedly executes the steps of iteratively training the clothing authentication model and optimizing the trainable parameters of the virtual clothes changing model, until the authentication result of the second training image meets the preset second training termination condition, to obtain the pre-trained virtual clothes changing model and the pre-trained clothing authentication model.
9. The live broadcast picture adjustment method according to claim 7, wherein the step of acquiring, by the server, a plurality of first training images, human body trunk data corresponding to the first training images, and a plurality of virtual clothes images comprises:
the server acquires a current anchor identification and a plurality of live broadcast pictures corresponding to the current anchor identification; wherein the real clothes worn by the current anchor presented in each live broadcast picture are different;
the server acquires third picture brightness related information corresponding to a clothing region in the live broadcast picture; the third picture brightness related information is the average value of the brightness information of all the clothes pixel points in the clothes area;
the server acquires a plurality of target live broadcast pictures according to third picture brightness related information corresponding to the clothes area in the live broadcast pictures and a preset third brightness related threshold;
and the server obtains a plurality of virtual clothes images according to the plurality of target live broadcast pictures.
10. A live view adjustment system, comprising: a server and a viewer client;
the audience client is used for responding to the live broadcast picture adjusting instruction and acquiring a first live broadcast picture and a target background picture; the live broadcast picture adjusting instruction is generated when at least judging that the environment brightness information corresponding to the live broadcast watching environment where the current audience is located meets a preset first live broadcast picture adjusting condition and the picture brightness related information corresponding to the background area in the first live broadcast picture meets a preset second live broadcast picture adjusting condition; the first live broadcast picture is obtained by analyzing a live broadcast video stream sent by the server;
and the audience client is used for adjusting the background area in the first live broadcast picture according to the target background picture to obtain a second live broadcast picture, and outputting the second live broadcast picture to a live broadcast room interface.
11. A live view adjustment apparatus, comprising:
the first acquisition unit is used for responding to the live broadcast picture adjustment instruction by the audience client to acquire a first live broadcast picture and a target background picture; the live broadcast picture adjusting instruction is generated when at least judging that the environment brightness information corresponding to the live broadcast watching environment where the current audience is located meets a preset first live broadcast picture adjusting condition and the picture brightness related information corresponding to the background area in the first live broadcast picture meets a preset second live broadcast picture adjusting condition; the first live broadcast picture is obtained by analyzing a live video stream sent by a server;
and the first adjusting unit is used for adjusting the background area in the first live broadcast picture by the audience client according to the target background picture to obtain a second live broadcast picture, and outputting the second live broadcast picture to a live broadcast room interface.
12. A computer device, comprising: processor, memory and computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 9 are implemented when the processor executes the computer program.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202210519242.6A 2022-05-13 2022-05-13 Live broadcast picture adjusting method, system and device and computer equipment Pending CN115002554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210519242.6A CN115002554A (en) 2022-05-13 2022-05-13 Live broadcast picture adjusting method, system and device and computer equipment

Publications (1)

Publication Number Publication Date
CN115002554A true CN115002554A (en) 2022-09-02

Family

ID=83027866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210519242.6A Pending CN115002554A (en) 2022-05-13 2022-05-13 Live broadcast picture adjusting method, system and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115002554A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979400A (en) * 2016-06-28 2016-09-28 乐视控股(北京)有限公司 Screen brightness adjusting method, device and terminal
WO2019085980A1 (en) * 2017-11-03 2019-05-09 腾讯科技(深圳)有限公司 Method and device for video caption automatic adjustment, terminal, and readable medium
CN112133260A (en) * 2019-06-24 2020-12-25 腾讯科技(深圳)有限公司 Image adjusting method and device
CN112598806A (en) * 2020-12-28 2021-04-02 深延科技(北京)有限公司 Virtual fitting method and device based on artificial intelligence, computer equipment and medium
CN112788250A (en) * 2021-02-01 2021-05-11 青岛海泰新光科技股份有限公司 Automatic exposure control method based on FPGA
CN113192464A (en) * 2020-01-14 2021-07-30 华为技术有限公司 Backlight adjusting method and electronic equipment
CN113395599A (en) * 2020-12-03 2021-09-14 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and medium
CN114285936A (en) * 2020-09-17 2022-04-05 南京酷派软件技术有限公司 Screen brightness adjusting method and device, storage medium and terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340813A (en) * 2023-02-10 2023-06-27 深圳市快美妆科技有限公司 User behavior analysis system and method for live platform
CN116340813B (en) * 2023-02-10 2024-02-09 广州网优优数据技术股份有限公司 User behavior analysis system and method for live platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination