CN113301413B - Information display method and device - Google Patents

Information display method and device

Info

Publication number
CN113301413B
CN113301413B
Authority
CN
China
Prior art keywords
area
information
video
content
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010394227.4A
Other languages
Chinese (zh)
Other versions
CN113301413A (en)
Inventor
张静静
徐铁松
罗景
吴凌霄
方振亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Network Technology Co Ltd
Original Assignee
Alibaba China Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Network Technology Co Ltd filed Critical Alibaba China Network Technology Co Ltd
Priority to CN202010394227.4A priority Critical patent/CN113301413B/en
Publication of CN113301413A publication Critical patent/CN113301413A/en
Application granted granted Critical
Publication of CN113301413B publication Critical patent/CN113301413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438Window management, e.g. event handling following interaction with the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an information display method and device, in which a first area is provided in a video playing interface and the video picture of a video is displayed in the first area; input information of a first user is collected; upon recognizing that the first user has input first control information, a second area is provided in the video playing interface; and target content is displayed in the second area. The technical solution provided by the embodiments of the present application improves the convenience of content display, reduces the impact on video playback, and ensures both the video playing effect and the content display effect.

Description

Information display method and device
Technical Field
The embodiments of the present application relate to the field of computer applications, and in particular to an information display method and device.
Background
With the development of Internet and streaming-media technology, network video playing platforms have multiplied and more and more users watch network videos; short videos and webcast live streams, which are popular at present, are favored by many users. The user traffic brought by network video can also be used to promote products or present individuals, or, in combination with an e-commerce platform, to enable quick purchase of goods.
To enhance the appeal of network video, a video is often associated with extended content, such as information about the person who recorded it, information about products appearing in it, or purchase links related to it. At present, displaying this extended content usually requires jumping to a dedicated page during playback, which makes content display inconvenient and interferes with video playing.
Disclosure of Invention
The embodiments of the present application provide an information display method and device to solve the technical problem in the prior art that content display interferes with video playing.
In a first aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a video playing interface, and displaying a video picture of a video in the first area;
collecting input information of a first user; the input information comprises voice information or action information;
recognizing that the first user has input first control information, and providing a second area in the video playing interface;
and displaying the target content in the second area.
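The four steps of the first aspect can be sketched as a minimal client-side flow. This is purely illustrative: the class and names below are assumptions for explanation, not part of the patent, which prescribes no implementation.

```python
# Illustrative sketch of the first-aspect flow (all names are hypothetical):
# a player shows the video picture in a first area and, once the first
# control information is recognized in the user's input, provides a second
# area and displays target content in it.

class VideoPlayerInterface:
    def __init__(self):
        # Step 1: provide a first area and display the video picture in it.
        self.regions = {"first": {"content": "video_picture"}}

    def collect_input(self, user_input):
        # Step 2: collect the first user's input (voice or action information).
        return user_input

    def recognize(self, user_input, first_control_information):
        # Step 3: recognize whether the input carries the first control
        # information; if so, provide a second area in the interface.
        if user_input == first_control_information:
            self.regions["second"] = {"content": None}
            return True
        return False

    def display_target_content(self, target_content):
        # Step 4: display the target content in the second area.
        if "second" in self.regions:
            self.regions["second"]["content"] = target_content


player = VideoPlayerInterface()
triggered = player.recognize(player.collect_input("raise_hand"), "raise_hand")
player.display_target_content("product_link")
```

If the input does not match the first control information, `recognize` returns False and no second area is provided, so the video keeps the whole interface.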
In a second aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a live broadcast interface, and displaying live broadcast pictures of live broadcast video in the first area;
collecting input information of a first user; the input information comprises voice information or action information;
recognizing that the first user has input first control information, and providing a second area in the live interface;
and displaying the target content in the second area.
In a third aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a live broadcast interface, and displaying live broadcast pictures of live broadcast video in the first area;
providing a second region in the live interface in response to a first control instruction; wherein the first control instruction is generated by a server that recognizes input information of a first user in the live video and determines that the first user has input the first control information; the input information comprises voice information or action information;
and displaying the target content in the second area.
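In the third aspect the viewing end performs no recognition itself; it only reacts to the first control instruction pushed by the server. A hypothetical handler (the message shape is an assumption, not defined by the patent) might look like:

```python
# Illustrative sketch of the third-aspect viewing end: on receiving a
# first control instruction, it provides the second region and displays
# the target content; other instructions are ignored.

class ViewingEnd:
    def __init__(self):
        # The first area already shows the live picture.
        self.regions = {"first": "live_picture"}

    def on_instruction(self, instruction):
        # Provide the second area and display the target content only for
        # a "provide_second_area" instruction (hypothetical message type).
        if instruction.get("type") == "provide_second_area":
            self.regions["second"] = instruction.get("target_content")
            return True
        return False


viewer = ViewingEnd()
handled = viewer.on_instruction(
    {"type": "provide_second_area", "target_content": "goods_link"})
```

Keeping recognition on the server means every viewing end shows the second area at the same moment of the live broadcast.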
In a fourth aspect, an embodiment of the present application provides an information processing method, including:
transmitting the live video to a viewing end, so that the viewing end provides a first area in a live interface, and displaying live pictures of the live video in the first area;
recognizing input information of a first user in the live video; wherein the input information comprises voice information or action information;
recognizing that the first user has input the first control information, and generating a first control instruction;
and sending the first control instruction to the watching end so that the watching end provides a second area in the live broadcast interface and displays target content in the second area.
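The fourth-aspect server flow can be sketched as follows. The function and message names are illustrative assumptions; the patent only requires that recognition of the first control information lead to a first control instruction being sent to the watching end.

```python
# Illustrative sketch: the server scans the input information recognized
# from the live video; on finding the first control information, it
# generates a first control instruction and pushes it to every viewer.

def process_live_input(recognized_inputs, first_control_information, viewers):
    sent = []
    for item in recognized_inputs:
        if item == first_control_information:
            instruction = {"type": "provide_second_area",
                           "action": "display_target_content"}
            # Send the same first control instruction to each watching end.
            for viewer in viewers:
                sent.append((viewer, instruction))
            break
    return sent


dispatched = process_live_input(
    ["speech:hello", "gesture:raise_hand"],  # inputs recognized in the video
    "gesture:raise_hand",                    # the first control information
    ["viewer_1", "viewer_2"])
```

If the first control information never appears, nothing is dispatched and the viewing ends keep showing only the live picture.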
In a fifth aspect, an embodiment of the present application provides an information processing method, including:
transmitting the live video to a viewing end, so that the viewing end provides a first area in a live interface, and displaying live pictures of the live video in the first area;
receiving a first content triggering request sent by a live broadcast terminal, and generating a first control instruction; wherein the first content triggering request is generated by the live broadcast terminal after it collects input information of a first user and recognizes that the first user has input the first control information; the input information comprises voice information or action information;
and sending the first control instruction to the viewing end so that the viewing end provides a second area in the live broadcast interface and displays target content in the second area.
In a sixth aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a video playing interface, and displaying a video picture of a video in the first area;
adjusting a display size of the first region in response to a content trigger event;
providing a second area which does not overlap the first area in the video playing interface;
and displaying the target content in the second area.
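The sixth-aspect layout step (shrink the first area, then place the second area in the freed space) can be sketched with simple rectangle arithmetic. The 60/40 split and coordinate convention are assumptions for illustration only.

```python
# Illustrative sketch: resize the first area and place the second area
# below it so that the two regions do not overlap.

def split_interface(width, height, first_ratio=0.6):
    # The first area keeps the top portion of the interface; the second
    # area occupies the remainder.
    first = {"x": 0, "y": 0, "w": width, "h": int(height * first_ratio)}
    second = {"x": 0, "y": first["h"], "w": width, "h": height - first["h"]}
    return first, second

def overlaps(a, b):
    # Standard axis-aligned rectangle intersection test.
    return (a["x"] < b["x"] + b["w"] and b["x"] < a["x"] + a["w"] and
            a["y"] < b["y"] + b["h"] and b["y"] < a["y"] + a["h"])


# e.g. a 1080x1920 portrait video playing interface
first, second = split_interface(1080, 1920)
```

Because the second area starts exactly where the first area ends, the two regions tile the interface without covering each other, matching the "not covered" requirement of this aspect.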
In a seventh aspect, an embodiment of the present application provides an information processing method, including:
transmitting the video to a video playing end, so that the video playing end provides a first area in a video playing interface and displays video pictures in the first area;
based on a content trigger event, adjusting the display size of the first area in the video playing end;
providing a second area which does not overlap the first area in the video playing interface of the video playing end;
and displaying the target content in the second area.
In an eighth aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a live interface, and displaying a live picture of a live video in the first area;
adjusting a display size of the first region in response to a content trigger event;
providing a second region in the live interface that does not overlap the first region;
and displaying the target content in the second area.
In a ninth aspect, an embodiment of the present application provides an information processing method, including:
transmitting the live video to a viewing end, so that the viewing end provides a first area in a live interface, and displaying live pictures of the live video in the first area;
based on a content trigger event, adjusting a display size of the first region in the viewing end;
providing a second area which does not overlap the first area in the live interface of the viewing end;
and displaying the target content in the second area.
In a tenth aspect, an embodiment of the present application provides a display interface, which provides a first area and displays a video picture of a video in the first area;
the display interface is further used to provide a second area and to display target content in the second area when input information of the first user is collected and it is recognized that the first user has input first control information; wherein the input information includes action information or voice information.
In an eleventh aspect, an embodiment of the present application provides a display interface, which provides a first area and displays a video picture of a video in the first area;
the display interface is further used to adjust the display size of the first area, provide a second area which does not overlap the first area, and display target content in the second area.
In a twelfth aspect, an embodiment of the present application provides an information display apparatus, including:
the first display module is used for providing a first area in the video playing interface and displaying video pictures of the video in the first area;
the first acquisition module is used for acquiring input information of a first user; wherein the input information comprises action information or voice information;
the first identification module is used for recognizing that the first user has input first control information, and providing a second area in the video playing interface;
and the second display module is used for displaying the target content in the second area.
In a thirteenth aspect, an embodiment of the present application provides an information processing apparatus, including:
the first sending module is used for sending the live video to the watching end so that the watching end can provide a first area in the live interface and display live pictures in the first area;
the second identification module is used for recognizing input information of a first user in the live video, the input information comprising voice information or action information, recognizing that the first user has input first control information, and generating a first control instruction;
and the second sending module is used for sending the first control instruction to the watching end so that the watching end provides a second area in the live broadcast interface and displays target content in the second area.
In a fourteenth aspect, an embodiment of the present application provides an information processing apparatus, including:
the third sending module is used for sending the live video to the watching end so that the watching end can provide a first area in the live interface and display live pictures in the first area;
the receiving module is used for receiving a first content triggering request sent by the live broadcast end and generating a first control instruction; wherein the first content triggering request is generated by the live broadcast end after it collects input information of a first user and recognizes that the first user has input the first control information; the input information comprises voice information or action information;
and the fourth sending module is used for sending the first control instruction to the watching end so that the watching end provides a second area in the live broadcast interface and displays target content in the second area.
In a fifteenth aspect, an embodiment of the present application provides an information display apparatus, including:
the third display module is used for providing a first area in the video playing interface and displaying video pictures of the video in the first area;
the adjusting module is used for responding to the content triggering event and adjusting the display size of the first area;
the providing module is used for providing a second area which does not overlap the first area in the video playing interface;
and the fourth display module is used for displaying target content in the second area.
In a sixteenth aspect, an embodiment of the present application provides an information processing apparatus including:
the fifth sending module is used for sending the video to the video playing end so that the video playing end can provide a first area in a video playing interface and display video pictures in the first area;
the adjusting and triggering module is used for adjusting the display size of the first area in the video playing end based on the content triggering event; providing a second area which does not overlap the first area in the video playing interface of the video playing end; and displaying the target content in the second area.
In a seventeenth aspect, an embodiment of the present application provides an electronic device, including a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component to implement the information display method as described in the first aspect above.
In an eighteenth aspect, in an embodiment of the present application, there is provided a computing device including a storage component and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component to implement the information processing method described in the fourth aspect above.
In a nineteenth aspect, in an embodiment of the present application, a computing device includes a storage component and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component to implement the information processing method of the fifth aspect described above.
In a twentieth aspect, an embodiment of the present application provides an electronic device, including a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component to implement the information display method as described in the seventh aspect above.
In a twenty-first aspect, an embodiment of the present application provides an electronic device, including a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component to implement the information processing method as described in the eighth aspect above.
In the embodiments of the present application, a first area is provided in a video playing interface, and the video picture of a video is displayed in the first area; input information of the first user is collected, and when it is recognized that the first user has input the first control information, a second area is provided in the video playing interface and target content is displayed in it. Because the first area and the second area are both provided in the video playing interface, the video and the content can be watched simultaneously without any page jump; video playing is not affected, and both the video playing effect and the content display effect are ensured.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Drawings
To illustrate the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart illustrating one embodiment of an information display method provided by the present application;
FIGS. 2a to 2c are schematic diagrams of interface displays of a video playing interface in a practical application according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating one embodiment of an information processing method provided by the present application;
fig. 4 is a schematic structural diagram of an embodiment of a network live broadcast system according to the present application;
FIG. 5 is a flowchart of another embodiment of an information display method provided by the present application;
FIG. 6 is a flowchart of another embodiment of an information display method provided by the present application;
FIG. 7a is a flow chart of yet another embodiment of an information processing method provided by the present application;
FIG. 7b is a flow chart illustrating yet another embodiment of an information processing method provided by the present application;
FIGS. 8a to 8c are schematic diagrams of interface displays of a live interface in a practical application according to an embodiment of the present application;
FIG. 9 is a flowchart of another embodiment of an information display method provided by the present application;
FIG. 10 is a flow chart showing yet another embodiment of an information processing method provided by the present application;
FIG. 11 is a flowchart of another embodiment of an information display method provided by the present application;
FIG. 12 is a flowchart of another embodiment of an information display method provided by the present application;
FIG. 13 is a flowchart showing still another embodiment of an information processing method provided by the present application;
fig. 14 is a schematic view showing the structure of an embodiment of an information display device provided by the present application;
FIG. 15 is a schematic diagram illustrating the construction of one embodiment of an electronic device provided by the present application;
fig. 16 is a schematic view showing the structure of an embodiment of an information processing apparatus provided by the present application;
FIG. 17 illustrates a schematic diagram of one embodiment of a computing device provided by the present application;
fig. 18 is a schematic diagram showing the structure of a further embodiment of an information processing apparatus provided by the present application;
FIG. 19 illustrates a schematic diagram of a configuration of yet another embodiment of a computing device provided by the present application;
fig. 20 is a schematic view showing the structure of a further embodiment of an information display device provided by the present application;
FIG. 21 is a schematic diagram of another embodiment of an electronic device according to the present application;
fig. 22 is a schematic diagram showing the structure of a further embodiment of an information processing apparatus provided by the present application;
FIG. 23 illustrates a schematic diagram of a computing device in accordance with yet another embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the following description will make clear and complete descriptions of the technical solutions according to the embodiments of the present application with reference to the accompanying drawings.
Some of the flows described in the specification, the claims, and the foregoing figures include operations that occur in a particular order, but it should be understood that these operations may be performed out of the order in which they appear herein, or in parallel. Sequence numbers such as 101 and 102 merely distinguish the operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, which may be performed sequentially or in parallel. Note that the terms "first" and "second" herein distinguish different messages, devices, modules, etc.; they neither imply a sequence nor require that the "first" and "second" items be of different types.
The technical solution of the embodiments of the present application applies to network video playing scenarios, such as short-video playback or webcast live streaming. A short video is a video whose playing time is shorter than a preset duration, for example 5 minutes; a webcast live stream is a two-way network publishing mode in which information is produced and published synchronously as an event occurs and develops on site.
Because network video brings user traffic, it can be used to promote products or present individuals, or, combined with an e-commerce platform, to guide users to learn about goods more quickly and to purchase them quickly. To enhance the appeal of network video, a video is also usually associated with extended content, such as information about the video publisher, information about products appearing in the video, purchase links related to the video, or information for interacting with the watching users. However, this extended content often requires jumping to a dedicated page to be viewed, so content display is not convenient enough, the video is easily interrupted, and the playing effect suffers.
To improve the convenience of content display without affecting video playing, the inventors arrived at the technical solution of the present application through a series of studies. In the embodiments of the present application, a first area is provided in a video playing interface, and the video picture of a video is displayed in the first area; when the first user inputs the first control information, a second area is provided in the video playing interface and target content is displayed in it. The user can therefore watch the video and the video-related content in the same interface at the same time, so video playing is not affected and both the playing effect and the content display effect are ensured. Moreover, the display of the target content can be triggered by the first user's action information or voice information, so content display can be triggered without operating the device, which further improves convenience.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
Fig. 1 is a flowchart of an embodiment of an information display method according to an embodiment of the present application, where the method may include the following steps:
101: a first region is provided in the video playing interface, and a video picture of the video is displayed in the first region.
The technical scheme of the embodiment can be executed by the video playing end. In the network live broadcast scene, the video may specifically refer to a live broadcast video, the video playing interface is a live broadcast interface, and the video playing end may refer to a live broadcast end, so that in the network live broadcast scene, the technical scheme of the embodiment may be executed by the live broadcast end.
In the webcast scenario, the live broadcast end collects in real time the sound and/or picture of the live site where the live user is located, obtains the live video, and uploads it to the server. A watching user requests the live video through the viewing end; the server sends the live video to the viewing end, which plays it. The live picture of the live video is displayed both in the live interface of the viewing end and in the live interface of the live broadcast end.
Optionally, the first area may cover the entire video playing interface to display the video picture, and the display size of the video picture may be adaptively adjusted according to the display size of the first area.
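One common way to adapt the video picture to the first area (an assumption; the patent does not specify the scaling rule) is an aspect-ratio-preserving "letterbox" fit:

```python
# Illustrative sketch: scale the video frame so it fills the first area
# as far as possible without distortion or cropping.

def fit_frame(frame_w, frame_h, region_w, region_h):
    # Use the smaller of the two axis scale factors so the frame stays
    # entirely inside the region and keeps its aspect ratio.
    scale = min(region_w / frame_w, region_h / frame_h)
    return round(frame_w * scale), round(frame_h * scale)


# e.g. a 1920x1080 landscape frame fitted into a 1080x960 first area
w, h = fit_frame(1920, 1080, 1080, 960)
```

When the first area later shrinks to make room for the second area, calling `fit_frame` again with the new region size yields the adjusted picture size.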
102: input information of a first user is collected.
Wherein the input information may include motion information or voice information.
The first user may be the video-watching user at the video playing end. In a webcast scenario, where the live video is obtained by collecting in real time the sound and/or picture of the live site where the live user is located, the first user may specifically be the live user; during the live broadcast, the live broadcast end can collect the input information of the live user in the surrounding environment space, which may also be called the live broadcast scene.
As one alternative, the input information may be action information.
Thus, collecting input information of the first user may include:
and collecting action information of the first user.
The action information may include user actions such as gesture actions, limb actions, facial actions, or expressions. For example, the first user may lift a finger from bottom to top, so that the corresponding gesture information is collected; the first user may also raise an arm (a limb action), blink (a facial action), smile (an expression), and so on.
As another alternative, the input information may refer to voice information of the first user.
Thus, collecting input information of the first user may include:
and collecting voice information of the first user.
The first user can perform voice input in the environment space, so that corresponding voice information can be acquired and obtained.
103: the first user is identified to input first control information, and a second area is provided in the video playing interface.
Based on the input information of the first user, if it is recognized that the first user has input the first control information, a second area may be provided in the video playing interface. The first control information generates a content trigger event, so the second area can be provided in the video playing interface to display related content in response to that event, which is convenient for the user.
In an alternative manner, when the input information is action information, the first control information may refer to a first predetermined action;
thus, identifying the first user to input the first control information, providing the second region in the video playback interface may include:
a first user is identified to perform a first predetermined action and a second area is provided in the video playback interface.
The first predetermined action may be a preconfigured user action, such as a bottom-up hand-up action, or the like.
In another alternative, when the input information may be voice information input by the first user, the first control information may refer to first key information.
Thus, identifying the first user to input the first control information, providing the second region in the video playback interface may include:
the first user voice is identified to input first key information, and a second area is provided in the video playing interface.
The first key information may be a pre-configured key word such as "start display content" or the like.
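The recognition of the first control information under either alternative can be sketched as below; the predetermined action name is an assumption, and the key word follows the example given above:

```python
# Preconfigured triggers; both values are illustrative assumptions.
FIRST_PREDETERMINED_ACTION = "raise_hand_bottom_up"
FIRST_KEY_INFORMATION = "start display content"

def inputs_first_control_information(kind, payload):
    """Return True when the collected input matches the first control information:
    either the first predetermined action, or voice containing the first key information."""
    if kind == "action":
        return payload == FIRST_PREDETERMINED_ACTION
    if kind == "voice":
        return FIRST_KEY_INFORMATION in payload.lower()
    return False
```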
When the second area is provided in the video playing interface, it may be displayed overlaid on the first area; alternatively, the display size and/or the display position of the first area may be adjusted so that the first area and the second area do not overlap each other. Possible layout manners are described in detail below.
104: and displaying the target content in the second area.
The target content may refer to content associated with a video.
After it is recognized that the first user has input the first control information, the target content may also be requested from the server. Then, after the second area is provided in the video playing interface, the target content may be displayed in the second area.
Optionally, after identifying that the first user inputs the first control information, a first content trigger request may be specifically sent to the server, so that after the server receives the first content trigger request, the target content associated with the video is determined, and the target content is provided to the video playing end.
In an alternative manner, the target content may refer to related information of the first user, such as user attribute information of the first user, for example the number of fans, age, gender, activity level, and the like. The video may be obtained by recording the sound and/or pictures of the site where the first user is located; in a network live broadcast scene, the first user may be a live broadcast user.
In another alternative, the target content may refer to related content associated with the currently played content, such as product related information of a product involved in the currently played content. In a network live broadcast scene, a live broadcast user may explain different products on the live broadcast site, and a watching user obtains the explanation content from the live broadcast site through the live broadcast video. The target content may be, for example, product related information of the product currently being explained by the live broadcast user; when the product is an e-commerce product, the product related information may be, for example, a product page provided by an e-commerce platform, link prompt information of the product page, and the like.
In yet another alternative, the target content may refer to related information for interaction with the user, such as electronic coupon retrieval prompt information, etc.
In yet another alternative, the target content may also refer to related content that needs to be promoted in connection with the actual situation, and so on.
Further, since the display area of the second area is limited, displaying the target content in the second area may include:
the target content is displayed in a sliding manner in the second area.
That is, at least part of the target content can be displayed first in the second area; if a sliding switching operation on the target content is detected, at least part of the target content that is not currently displayed is then displayed in the second area.
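The sliding display over a limited second area can be sketched as a simple window over the target content; the function names here are illustrative only:

```python
def visible_part(target_content, offset, capacity):
    """The part of the target content currently shown in the second area."""
    return target_content[offset:offset + capacity]

def on_sliding_switch(offset, capacity, total):
    """Advance the display window on a sliding switching operation,
    wrapping back to the start when the end is reached."""
    nxt = offset + capacity
    return nxt if nxt < total else 0
```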
In this embodiment, a first area is provided in a video playing interface, and a video picture of the video is displayed in the first area; when the first user inputs the first control information, a second area is provided in the video playing interface, and target content is displayed in the second area. In this way, the video and the video-related content can be watched simultaneously in the video playing interface without affecting video playing, so that both the video playing effect and the content display effect are guaranteed. Moreover, the display of the target content can be triggered based on the action information or voice information of the first user, so that content display can be triggered without operating the device, which improves the convenience of content display.
There are many possible implementations of providing the second area in the video playing interface, and as an alternative, providing the second area in the video playing interface may include:
adjusting the display size of the first area;
a second area which is not overlapped with the first area is provided in the video playing interface.
Optionally, adjusting the display size of the first region may refer to reducing the first region from the first display size to the second display size; the first display size may be a display size that is full of the entire video playing interface, and the second display size is smaller than the first display size.
The display size of the first area is reduced, so that the video picture is reduced.
In addition, after the display size of the first area is adjusted, the display position of the first area in the video playing interface may also be adjusted; for example, the first area may be moved to the upper boundary area of the video playing interface, so that the lower boundary area of the video playing interface becomes a blank area for displaying the second area, and so on.
By adjusting the display size of the first area, an area not covered by the first area exists in the video playing interface, so that the second area can be provided in that uncovered area and the target content displayed there. Thus, the video picture and the target content can be displayed in the video playing interface at the same time, the target content does not block the video picture, video playing is not affected, and both the video playing effect and the content display effect are guaranteed.
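The non-overlapping layout described above might be computed as in the following sketch, assuming a top-bottom layout and an illustrative shrink ratio:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def provide_second_area(interface: Rect, ratio: float):
    """Shrink the first area toward the upper boundary of the interface and
    place the second area in the lower, uncovered region so the two never overlap."""
    first = Rect(interface.x, interface.y, interface.w, int(interface.h * ratio))
    second = Rect(interface.x, interface.y + first.h,
                  interface.w, interface.h - first.h)
    return first, second
```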
For ease of understanding, fig. 2a shows an interface layout diagram in which the first area 20 is provided in the video playing interface, and fig. 2b shows an interface layout diagram of the second area 21 and the first area 20 after the display size of the first area 20 has been adjusted and the second area 21 has been provided in the video playing interface.
As another alternative, providing the second region in the video playback interface may include:
determining a central area corresponding to a preset range where the central position in the first area is located;
around the central region, a second region surrounding the central region is provided.
Since the main content in the video picture is mainly shown in the central area in the first area, the central area may be first determined and a second area surrounding the central area may be provided around the central area.
The central area may be a square area, a circular area or other irregularly shaped areas, and the central area corresponding to the predetermined range where the central position is located may mean that the distance between the boundary of the central area and the central position may be within a certain distance range.
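Under the assumption of rectangular regions, the central area and the surrounding second area might be computed as follows; the decomposition of the surrounding region into four border strips is one illustrative choice, not required by the application:

```python
def center_region(x, y, w, h, margin):
    """Square central area of the first area: every point of its boundary is
    within `margin` of the center position (cx, cy)."""
    cx, cy = x + w // 2, y + h // 2
    return (cx - margin, cy - margin, 2 * margin, 2 * margin)

def surrounding_second_region(first, center):
    """The second area as the part of the first area around the central area,
    described here simply as four border strips (left, right, top, bottom)."""
    fx, fy, fw, fh = first
    cx, cy, cw, ch = center
    return [
        (fx, fy, cx - fx, fh),                   # left strip
        (cx + cw, fy, fx + fw - (cx + cw), fh),  # right strip
        (cx, fy, cw, cy - fy),                   # top strip
        (cx, cy + ch, cw, fy + fh - (cy + ch)),  # bottom strip
    ]
```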
Further, as yet another alternative, providing the second region in the video playback interface may include:
Adjusting the display size of the first area, and moving the first area to the center position of the video playing interface;
around the first region, a second region surrounding the first region is provided.
The display size of the first area can be reduced, the first area is moved to the center of the video playing interface, and the second area is deployed around the first area.
Fig. 2c shows a further interface layout diagram of the second area 21 and the first area 20 in the video playing interface; it differs from fig. 2b in that fig. 2b shows a top-bottom layout relationship while fig. 2c shows a surrounding layout relationship. Of course, fig. 2b and 2c only illustrate several possible layout relationships of the first area and the second area, and the present application is not limited thereto; for example, the first area may be located to the right of, above, or below the second area, or the first area and the second area may be adjusted to any shape.
In some embodiments, before collecting the input information of the first user, the method may further include:
providing a first sub-region in the second region in the video playback interface;
and displaying content prompt information of the target content in the first subarea.
The content prompt information may be used to prompt the first user to input first control information.
Alternatively, displaying the content hint information of the target content in the first sub-area may include:
first content in the target content is displayed in the first sub-area.
That is, when the target content is output in the second area but the second area is not completely displayed in the video playing interface, the first content refers to the part of the target content shown in the first sub-area. The first content thus serves to prompt the user about the target content.
In some embodiments, providing the second region in the video playback interface may include:
adjusting the display size of the first area;
and in the first area adjusting process, the second area is moved to an uncovered area of the first area in the video playing interface.
By moving the second area into the video playing interface during the adjustment of the first area, a dynamic display effect is presented, which can help improve the attractiveness of the target content.
Optionally, adjusting the display size of the first region may include:
adjusting the first area from the first display size to the second display size according to a predetermined adjustment speed or a predetermined adjustment acceleration;
Then in the first region adjustment process, moving the second region to the uncovered region of the first region in the video playing interface may be:
and in the first area adjusting process, moving the second area to an uncovered area of the first area in the video playing interface according to a preset adjusting speed or preset adjusting acceleration.
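The adjustment at a predetermined adjustment speed can be sketched as a frame-by-frame interpolation of the first area's height, with the second area moved into the uncovered region at the same pace; the speed and time step below are illustrative:

```python
def adjustment_frames(first_size, second_size, speed, dt):
    """Per-frame heights of the first area while it shrinks from the first
    display size to the second display size at a predetermined adjustment
    speed (pixels per second), sampled every dt seconds."""
    frames, h = [], first_size
    while h > second_size:
        h = max(second_size, h - speed * dt)  # never overshoot the target size
        frames.append(h)
    return frames
```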
Further, as still another embodiment, displaying the target content in the second area may include:
displaying content prompt information of the target content in the second area;
respectively adjusting the display sizes of the first area and the second area in response to the content display event;
and displaying the target content in the second area.
The display size of the first area may be reduced while the display size of the second area is increased, with the first area and the second area not overlapping each other, so that the target content can be displayed in the second area.
The content presentation event may refer to, for example, a triggering operation on the content prompt information, such as a clicking operation.
In some embodiments, the method may further comprise:
in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the first region to the original size.
The original size may be a first display size of the first area covering the entire video playing interface.
Wherein the content cancellation event may refer to the first user inputting the second control information in the environment space.
As an alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
performing image recognition on a first user;
and recognizing the first user to execute a second preset gesture, canceling the display of the target content in the video playing interface, and restoring the video picture to the original size.
The second predetermined gesture may be, for example, an opposite gesture to the first predetermined gesture, or a preconfigured specific gesture, etc.
As another alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
identifying the first user voice to input the second key information, canceling the display of the target content in the video playing interface, and restoring the video picture to the original size.
The second key information may be a pre-configured key word or the like, such as "cancel display of XX content" or the like.
As yet another alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
and identifying the second user to execute a second preset facial action, a second preset limb action or a second preset expression action, cancelling display of target content in the video playing interface, and restoring the video picture to the original size.
In addition, the content cancellation event may also refer to the target content being displayed for longer than a second predetermined time period, and thus, in some embodiments, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
generating a content cancellation event after the target content display duration exceeds a second predetermined duration;
in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size.
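The duration-based content cancellation event can be sketched as a periodic check; the state dictionary keys are assumptions introduced for illustration:

```python
def check_content_cancellation(state, now, second_predetermined_duration):
    """Generate a content cancellation event when the target content has been
    displayed for longer than the second predetermined duration, cancel the
    display, and restore the first area to its original size."""
    if state["showing"] and now - state["shown_at"] > second_predetermined_duration:
        state["showing"] = False
        state["first_area_size"] = state["original_size"]  # restore original size
        return "content_cancellation"
    return None
```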
Fig. 3 is a flowchart of an embodiment of an information processing method provided by the present application, where the technical solution of the embodiment may be executed by a server, and the method may include the following steps:
301: and sending the video to a video playing end so that the video playing end provides a first area in a video playing interface and displays video pictures in the first area.
302: the target content is determined.
Alternatively, after receiving the first content triggering request of the video playing end, determining the target content associated with the video.
The first content trigger request may be sent by the video playing end when the first user inputs the first control information based on the input information of the first user in the environment space.
In addition, as another alternative, the server may acquire the input information of the first user in the environment space collected by the video playing end, and determine the target content associated with the video when it identifies that the first user has input the first control information.
303: and sending the target content to the video playing end so that the video playing end provides a second area in the video playing interface and displays the target content in the second area.
In one implementation, the video may be obtained by recording the voice of an anchor user and/or the picture of the site where the anchor user is located. With the development of short video, many anchor users record short videos for personal display or product promotion. A short video can be combined with an e-commerce platform and used for promoting commodities on the e-commerce platform, and the target content associated with the short video may refer to product related information of the product promoted by the anchor user in the short video. The first user may be a video watching user at the video playing end. When the video playing end plays the short video, the first area may first be provided in the video playing interface to display the video picture. To learn about the target content, the first user may perform a first predetermined action or input first control information such as first key information by voice, so that after the first control information is recognized, a second area may be provided in the video playing interface and the target content displayed in the second area. In this way, the first user can conveniently view the video picture and the target content at the same time without jumping to another page, and the playing effect of the video is not affected. Moreover, the first user can trigger the display of the target content simply by performing an action input or voice input in the environment space, which improves operation convenience.
In still another practical application, the technical scheme of the embodiments of the application can be applied to a network live broadcast scene; that is, the video may specifically be a live broadcast video. The technical scheme of the application is described below by taking a network live broadcast scene as an example. In this scene, the technical scheme of the application may be applied to a network live broadcast system as shown in fig. 4, which mainly includes a live broadcast end 401, a service end 402, and at least one viewing end 403. The live broadcast end 401 collects, in real time, the sound and/or the picture of the live broadcast site where the live broadcast user is located, obtains the live broadcast video, and uploads it to the service end 402. Live videos of different live broadcast users can be distinguished by live broadcast rooms; a live broadcast user can first apply to the service end 402 for a live broadcast room through the live broadcast end 401, and then record and upload the live video in real time. A viewing user can request to enter a certain live broadcast room through the viewing end 403, and the service end 402 then sends the live video of that live broadcast room to the viewing end 403. The viewing end 403 plays the live video and displays the live pictures of the live video in the live interface. The live video may also be played at the live broadcast end 401, which likewise displays the live pictures in its live interface.
The viewing end 403 may be configured on an electronic device such as a mobile phone, a tablet computer, or a smart watch; the service end 402 may be implemented by a CDN (Content Delivery Network) system; and the live broadcast end 401 may be configured on an electronic device that has a capture function and an OBS (Open Broadcaster Software) push-streaming function, such as a mobile phone or tablet with a camera. The application is not limited to implementing network live broadcast with the above technical scheme. In addition, as those skilled in the art can appreciate, the live video may need to be uploaded to the service end after processing such as encoding, transcoding, and compression; correspondingly, the viewing end and the live broadcast end may need to play the live video after processing such as decoding and decompression. These processes are the same as in the prior art and are not repeated here.
The viewing end and the live end may be independent application programs, or may be a functional module integrated in a certain application program.
It should be noted that, in practical applications, the live broadcast end and the viewing end in fig. 4 may be implemented as a mobile phone, a tablet computer, or another electronic device with a video capture function, and are not limited to the device configurations shown in fig. 4.
In the live broadcast scene of the electronic commerce, a live broadcast user can introduce one or more products to be promoted on the live broadcast scene, the electronic commerce platform can provide product purchasing capability, the electronic commerce platform can also provide a product page, and the product page can comprise product detailed description information, product transaction controls and the like.
When the live video is played, some extended content associated with the live video may be displayed in the live interface, such as product related information of a product explained by the live user, interaction information for interacting with the watching user, and introduction information about the live user. At present, such extended content is usually presented as corresponding prompt controls displayed in the live interface, and the specific content is displayed by jumping to a corresponding page based on a triggering operation on the prompt control. For example, a product page link control may be displayed in the live interface of the viewing end, and a triggering operation on the product page link control jumps to the product page for display. However, this display manner interrupts the live viewing and affects the playing of the live video. By adopting the technical scheme of the embodiments of the application, the video playing effect and the content display effect can be guaranteed at the same time.
Based on the network live broadcast system shown in fig. 4, fig. 5 is a flowchart of another embodiment of an information display method provided by the present application, where the technical scheme of the present embodiment is executed by a live broadcast end, and the method may include the following steps:
501: and providing a first area in the live interface, and displaying a live picture of the live video in the first area.
502: input information of a first user is collected.
In a live network scenario, the first user may be a live user.
503: the first user is identified to input first control information and a second region is provided in the live interface.
504: and displaying the target content in the second area.
The difference between the embodiment shown in fig. 5 and the embodiment shown in fig. 1 is that the video playing end is specifically a live end, the video is a live video, and other identical or similar steps can be described in the embodiment shown in fig. 1, and will not be described in detail here.
Because the watching end plays the live video synchronously, a first area is also provided in the live interface of the watching end to display the live picture of the live video. When the live end recognizes that the first user has input the first control information, it can send a first content trigger request to the service end, and the service end can generate a first control instruction based on the first content trigger request and send it to the watching end, so that the watching end can provide a second area in the live interface based on the first control instruction and display the target content in the second area.
In addition, the target content may also refer to target content associated with the live video determined after the server receives the first content trigger request, and thus, in some embodiments, after identifying that the first user inputs the first control information, the method may further include:
the method comprises the steps that a first content triggering request is sent to a server side, the server side determines target content based on the first content triggering request, and a first control instruction is sent to a watching side, so that the watching side provides a second area in a live broadcast interface, and the target content is displayed in the second area;
and obtaining target content sent by the server.
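The service-end dispatch described in these steps can be sketched as below; the class, attribute, and message field names are illustrative assumptions, not defined by the application:

```python
class ContentServer:
    """On receiving a first content trigger request from the live end, the
    server determines the target content associated with the live room and
    pushes a first control instruction to every registered viewing end."""
    def __init__(self, content_by_room):
        self.content_by_room = content_by_room   # live room id -> target content
        self.viewing_ends = []                   # registered push callbacks

    def register_viewer(self, push):
        self.viewing_ends.append(push)

    def on_first_content_trigger(self, room_id):
        target = self.content_by_room.get(room_id)
        instruction = {"type": "first_control_instruction", "target_content": target}
        for push in self.viewing_ends:
            push(instruction)
        return target
```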
In some embodiments, the input information may refer to action information;
identifying the first user to enter the first control information, providing the second region in the live interface may include:
a first user is identified to perform a first predetermined action, and a second region is provided in the live interface.
In some embodiments, the input information may refer to voice information;
identifying the first user to enter first control information and providing a second region in the live interface includes:
a first user voice input of first key information is identified, and a second region is provided in the live interface.
In some embodiments, providing the second region in the live interface includes:
adjusting the display size of the first area;
a second region is provided in the live interface that does not overlap the first region.
In some embodiments, providing the second region in the live interface may include:
determining a central area corresponding to a preset range where the central position in the first area is located;
around the central region, a second region surrounding the central region is provided.
In some embodiments, the method may further comprise:
providing a first sub-region in a second region in a live interface;
displaying content prompt information of target content in the first subarea; the content prompt information is used for prompting the first user to input first control information.
Alternatively, displaying the content hint information of the target content in the first sub-area may include:
first content in the target content is displayed in the first sub-area.
In some embodiments, providing the second region in the live interface may include:
adjusting the display size of the first area;
and in the first area adjustment process, the second area is moved to an uncovered area of the first area in the live interface.
In some embodiments, adjusting the display size of the first region may include:
Adjusting the first area from the first display size to the second display size according to a predetermined adjustment speed or a predetermined adjustment acceleration;
then moving the second region to the uncovered region of the first region in the live interface during the first region adjustment may include:
and in the first area adjustment process, moving the second area to an uncovered area of the first area in the live interface according to a preset adjustment speed or preset adjustment acceleration.
As yet another embodiment, displaying the target content in the second area may include:
displaying content prompt information of the target content in the second area;
respectively adjusting the display sizes of the first area and the second area in response to the content display event;
and displaying the target content in the second area.
The content presentation event may refer to detection of a trigger operation for the content cue information.
In addition, the identification of the input information of the first user can also be performed by the server. The server can identify from the live video whether the first user has input the first control information; if so, a first control instruction can be generated and sent to the watching end or the live end, and the watching end or the live end can then provide the second area in the live interface.
Therefore, the embodiment of the application also provides an information display method, which is executed by the live broadcast end and can comprise the following steps:
the live broadcasting terminal provides a first area in a live broadcasting interface and displays live broadcasting pictures of live broadcasting video in the first area;
responding to a first control instruction sent by a server side, and providing a second area in a live broadcast interface; the first control instruction may be generated by the server identifying input information of a first user in the live video, and identifying the first user to input the first control information;
and displaying the target content in the second area.
The first control instruction may be generated when the server identifies that the first user in the live video inputs the first control information.
Fig. 6 is a flowchart of another embodiment of an information display method provided by the present application, where the technical scheme of the present embodiment is executed by a viewing end, and the method may include the following steps:
601: and providing a first area in the live interface, and displaying a live picture of the live video in the first area.
602: and responding to the first control instruction, and providing a second area in the video playing interface.
The first control instruction is generated by identifying the first user to input the first control information based on the input information of the first user.
603: and displaying the target content in the second area.
As an alternative implementation, providing the second region in the video playing interface in response to the first control instruction may include:
receiving a first control instruction sent by a server; the first control instruction is specifically generated by the server identifying input information of a first user in the live video and identifying the first user to input the first control information;
responding to the first control instruction, and providing a second area in the live interface;
and displaying the target content in the second area.
As another alternative implementation manner, the first control instruction may be generated when the server receives a first content trigger request sent by the live broadcast end, where the first content trigger request is generated by identifying, by the live broadcast end, that the first user inputs the first control information based on input information of the first user.
The target content may also be determined by the server receiving the first content trigger request.
In some embodiments, providing the second region in the live interface includes:
adjusting the display size of the first area;
a second region is provided in the live interface that does not overlap the first region.
In some embodiments, providing the second region in the live interface may include:
Determining a central area corresponding to a preset range where the central position in the first area is located;
around the central region, a second region surrounding the central region is provided.
In some embodiments, the method may further comprise:
providing a first sub-region in a second region in a live interface;
and displaying content prompt information of the target content in the first subarea.
Alternatively, the content hint information may refer to the first content in the target content.
In some embodiments, providing the second region in the live interface includes:
adjusting the display size of the first area;
and in the first area adjustment process, the second area is moved to an uncovered area of the first area in the live interface.
Optionally, adjusting the display size of the first region may include:
adjusting the first area from the first display size to the second display size according to a predetermined adjustment speed or a predetermined adjustment acceleration;
then moving the second region to the uncovered region of the first region in the live interface during the first region adjustment may include:
and in the first area adjustment process, moving the second area to an uncovered area of the first area in the live interface according to a preset adjustment speed or preset adjustment acceleration.
As yet another embodiment, displaying the target content in the second area may include:
displaying content prompt information of the target content in the second area;
respectively adjusting the display sizes of the first area and the second area in response to the content display event;
and displaying the target content in the second area.
The content presentation event may be detecting a trigger operation for the content alert information.
Fig. 7a is a flowchart of another embodiment of an information processing method provided by the present application, where the technical scheme of the present embodiment is executed by a server, and the method may include the following steps:
7011: and sending the live video to the watching end, so that the watching end provides a first area in the live interface and displays the live picture in the first area.
The live video can be collected and uploaded by a live terminal.
7012: and identifying the input information of the first user in the live video.
Wherein the input information includes voice information or motion information.
7013: first control information input by a first user is identified, and a first control instruction is generated.
7014: and sending the first control instruction to the watching end so that the watching end provides a second area in the live broadcast interface and displays the target content in the second area.
In addition, as a further embodiment, the identification of the input information of the first user may also be performed by the live broadcast end, so, as shown in fig. 7b, the present application also provides an information processing method, which is performed by the server end, and may include the following steps:
7021: and sending the live video to the watching end, so that the watching end provides a first area in the live interface, and displaying live pictures of the live video in the first area.
7022: and receiving a first content triggering request sent by the live broadcast terminal, and generating a first control instruction.
The first content triggering request is generated by identifying the acquired input information of the first user for the live broadcast terminal and identifying the first control information input by the first user; wherein the input information includes voice information or action information;
7023: and sending the first control instruction to the watching end so that the watching end provides a second area in the live broadcast interface and displays the target content in the second area.
Alternatively, the target content may be the target content associated with the video determined after the server receives the first content trigger request from the live side.
For ease of understanding, fig. 8a shows an interface schematic in which a live picture is displayed in a first area 80 of the live interface of a viewing end.
During the live broadcast, the input information of the live user in the environment space can be identified, and when the live user is identified as having input the first control information, generation of the first control instruction can be triggered.
As can be seen from the live view shown in fig. 8a, the live user may perform a predetermined limb action, such as a hand-raising operation, at which time generation of the first control instruction may be triggered. In response to the first control instruction, the viewing end may provide a second area 81 in its live interface for displaying the target content, as shown in fig. 8b.
In order not to affect the normal display of the first area, the display size of the first area 80 may be adjusted, and the second area 81 and the first area 80 may not overlap each other.
Of course, in order to let the user perceive the target content in advance and effectively improve its attractiveness, in response to the first control instruction, as shown in fig. 8c, the viewing end may first provide a first sub-area 82 within the second area of the live interface and display content prompt information of the target content in the first sub-area 82, where the content prompt information may be the first content of the target content, displayed in the first sub-area 82.
Thereafter, in response to the content presentation event, the second region may be moved into the live interface again to present the target content, i.e., as shown in fig. 8 b.
The content presentation event may be, for example, a content cue information display period exceeding a first predetermined period, or a trigger operation for the content cue information such as a click operation or the like being detected.
Fig. 8a to 8c show a live interface of a viewing end, and it is understood that the live interface of the live end is similar to the live interface of the viewing end, and will not be illustrated herein.
It should be noted that the live interface is not limited to displaying only live pictures and target content, and may naturally include other presentation information, such as a comment display area, a comment sending control, a forwarding control, a collection control, a like control, related introduction content of the host, or a red-envelope pickup control that may be provided in an e-commerce live scene; such presentation information may be overlaid on the live picture for display, and the application is not limited thereto.
Fig. 9 is a flowchart of another embodiment of an information display method according to the present application, where the method may include the following steps:
901: a first region is provided in the video playing interface, and a video picture of the video is displayed in the first region.
The technical scheme of the embodiment can be executed by the video playing end. In the network live scene, the video specifically refers to a live video, the video playing interface is a live interface, and the video playing end can refer to a live end or a watching end, so that the technical scheme of the embodiment can be executed by the live end or the watching end.
Alternatively, the first area may span the entire video playback interface.
902: in response to a content trigger event, the display size of the first region is adjusted.
As an optional way, in the network live broadcast scenario, when the technical scheme of the embodiment is executed by the live broadcast end, responding to the content triggering event, adjusting the display size of the first area may include:
collecting input information of a first user;
the first control information input by the first user is recognized, and the display size of the first area is adjusted.
That is, the content trigger event may be the first user entering first control information.
The input information may be voice information or action information, and the first control information may refer to a first predetermined action or first key information.
Of course, as another alternative, a content trigger control may be displayed in the video playback interface, and the content trigger event may be a trigger operation for the content trigger control.
Alternatively, the content trigger event may refer to the video having played for a specified playing duration or having reached specified playing content, and so on.
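A playback-progress trigger of this kind reduces to a simple threshold check. A hedged sketch, where the function name and the 120-second threshold are illustrative assumptions:

```python
def playback_trigger(played_seconds, trigger_after=120.0):
    """Fire the content trigger event once the video has played
    for at least the specified duration."""
    return played_seconds >= trigger_after

assert not playback_trigger(30.0)
assert playback_trigger(150.0)
```

Triggering on reaching specified playing content would work the same way, with the comparison made against a content timestamp or marker instead of a fixed duration.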
In addition, in the network live broadcast scenario, when the technical scheme of the embodiment is executed by the viewing end, the content triggering event may refer to receiving a first control instruction sent by the server end, where the first control instruction may be generated by the server end identifying input information of a first user in the live broadcast video, and identifying the first user to input the first control information.
903: a second area which is not overlapped with the first area is provided in the video playing interface.
The layout of the first area and the second area in the video playing interface can be, for example, as shown in fig. 2b and fig. 2c, and the present application is not limited to these two layout modes.
904: and displaying the target content in the second area.
The target content may refer to video-associated content.
In an alternative manner, the target content may refer, for example, to related information of the first user, such as user attribute information of the first user, for example the number of followers, age, gender, activity level, and the like. The video may be obtained by recording the sound and/or pictures of the scene where the first user is located, and in the network live scene the first user may be the live user.
In another alternative, the target content may refer to related content associated with the currently played content, such as product related information when the currently played content relates to a product. In a network live broadcast scene, a live user may explain different products during the live broadcast, and a viewing user obtains the explanation content through the live video; the target content may be, for example, product related information of the product the live user is currently explaining, and when the product is an e-commerce product, the product related information may be, for example, a product page provided by an e-commerce platform or link prompt information of that product page, and the like.
In yet another alternative, the target content may refer to related information for interaction with the user, such as electronic coupon retrieval prompt information, etc.
In yet another alternative, the target content may also refer to related content that needs to be promoted in connection with actual situations, and so on.
Optionally, adjusting the display size of the first area may mean reducing it, for example from a first display size to a second display size; the first display size may be a display size that fills the entire video playing interface, and the second display size is smaller than the first display size.
In addition, after adjusting the display size of the first area, the display position of the first area in the video playing interface may also be adjusted, for example, the video frame may be adjusted to an upper boundary area of the video playing interface, so that a lower boundary area in the video playing interface is a blank area for displaying the second area, and so on.
By adjusting the display size of the first area, an uncovered area of the first area exists in the video playing interface, so that a second area can be provided in the uncovered area, and target content is displayed in the second area, so that video pictures and target content can be displayed in the video playing interface at the same time, the target content does not shade the video pictures, video playing is not influenced, and video playing effect and content displaying effect are guaranteed.
In this embodiment, by adjusting the display size of the first area, the first area and the second area that do not overlap each other may be provided in the video playing interface, so that the target content may be displayed in the second area, so that the video frame and the target content are displayed in the video playing interface at the same time, and no page skip is required. And the target content can not block the video picture, so that the video playing is not influenced, and the video playing effect and the content display effect are ensured.
Because the second region has a limited display area, in some embodiments, displaying the target content in the second region may include:
the target content is displayed in a sliding manner in the second area.
I.e. at least part of the content of the target content can be displayed first in the second area; if a slide switching operation for the target content is detected, at least part of the content and the like which are not currently displayed in the target content are displayed in the second area.
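The sliding display can be modeled as paging over the target content, advancing one page per slide-switch operation. A minimal sketch with illustrative page size and content:

```python
def visible_page(items, page, page_size=3):
    """Return the slice of the target content shown in the second
    region for the given page; the region's limited display area
    fixes how many items fit per page."""
    start = page * page_size
    return items[start:start + page_size]

content = ["intro", "price", "specs", "reviews", "coupon"]
# At first, at least part of the target content is displayed.
assert visible_page(content, 0) == ["intro", "price", "specs"]
# After a slide-switch operation, the not-yet-shown part is displayed.
assert visible_page(content, 1) == ["reviews", "coupon"]
```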
In some embodiments, providing a second region in the video playback interface that is not overlapping the first region may include:
moving the first area to the center position of the video playing interface;
around the first region, a second region surrounding the first region is provided. The specific layout can be seen for example in fig. 2 c.
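Centering the first region so that the second region surrounds it is a small geometric computation. A sketch under the assumption of a fixed inner scale (the 0.6 value and the function name are illustrative):

```python
def surround_layout(screen_w, screen_h, inner_scale=0.6):
    """Center the first region; the second region is then the
    border band of the interface that surrounds it."""
    inner_w = int(screen_w * inner_scale)
    inner_h = int(screen_h * inner_scale)
    x = (screen_w - inner_w) // 2
    y = (screen_h - inner_h) // 2
    return (x, y, inner_w, inner_h)

first = surround_layout(1000, 1000)
# The first region is centered, so the surrounding band is symmetric.
assert first == (200, 200, 600, 600)
```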
In some embodiments, in response to the content trigger event, the method may further comprise, prior to adjusting the display size of the first region:
providing a first sub-region in the second region in the video playback interface;
and displaying content prompt information of the target content in the first subarea.
For video viewing users, the user is often unaware before the target content is displayed, and in order to effectively attract the user, content prompt information of the target content may be displayed in a video playing interface first.
In some embodiments, the content trigger event may refer to the content alert message being displayed for a period of time exceeding a first predetermined period of time.
The content prompt information may be generated based on the target content, for example, summary information of the target content, and the like.
In addition, the content hint information may be a first content in the target content, where the first content may refer to content in the target content that is located in the first sub-area.
Accordingly, displaying the content prompt information of the target content in the first sub-area may mean displaying the first content of the target content in the first sub-area.
To further enhance the video playback effect, in some embodiments, providing a second region in the video playback interface that is not overlapping the first region may include:
and in the first area adjusting process, the second area is moved to an uncovered area of the first area in the video playing interface.
Optionally, adjusting the display size of the first region may include:
adjusting the first area from the first display size to the second display size according to a predetermined adjustment speed or a predetermined adjustment acceleration;
then moving the second region to the uncovered region of the first region in the video playback interface during the first region adjustment may include:
and in the first area adjusting process, moving the second area to an uncovered area of the first area in the video playing interface according to a preset adjusting speed or preset adjusting acceleration.
I.e. the target content may gradually move into the uncovered area.
As yet another embodiment, displaying the target content in the second region includes:
displaying content prompt information of the target content in the second area;
Respectively adjusting the display sizes of the first area and the second area in response to the content display event;
and displaying the target content in the second area.
The content presentation event may refer to, for example, detection of a trigger operation for the content cue information, or may refer to the content cue information being displayed for longer than a first predetermined time period, or the like.
As can be seen from the foregoing description, the content trigger event may refer to the first user inputting the first control information.
As an alternative, the input information may refer to action information input by the first user.
Thus, identifying the first user to input the first control information, providing the second region in the video playback interface may include:
a first user is identified to perform a first predetermined action and a second area is provided in the video playback interface.
The first predetermined action may be a preconfigured user action, such as a bottom-up hand-up action, or the like.
As another alternative, the input information may refer to voice information input by the first user.
Thus, identifying the first user to input the first control information, providing the second region in the video playback interface may include:
recognizing that the first user has input the first key information by voice, and providing a second area in the video playing interface.
The first key information may be a pre-configured key word such as "start display content" or the like.
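Matching the first key information in recognized speech can be sketched as keyword spotting over an ASR transcript; the phrase list and the simple substring match are assumptions, since a real system would use a speech recognition engine plus proper keyword spotting:

```python
# Illustrative pre-configured key phrases; "start display content"
# comes from the example above, the second phrase is hypothetical.
FIRST_KEY_PHRASES = ("start display content", "show content")

def matches_first_key_info(transcript: str) -> bool:
    """Return True when the recognized speech contains the first
    key information, triggering provision of the second area."""
    text = transcript.lower()
    return any(phrase in text for phrase in FIRST_KEY_PHRASES)

assert matches_first_key_info("Please start display content now")
assert not matches_first_key_info("hello everyone")
```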
In addition, in the network live scene, when the video is a live video, and when the embodiment shown in fig. 9 is executed by the viewing end, as a further embodiment, in response to the content trigger event, adjusting the display size of the first area includes:
receiving a first control instruction sent by a server side, and adjusting the display size of a first area; the first control instruction is generated by identifying input information of a first user in the live video by the server and identifying the first user to input the first control information.
That is, the content trigger event may refer to receiving a first control instruction sent by the server.
In some embodiments, the method may further comprise:
responding to a content triggering event, and sending a first content triggering request to a server side so as to facilitate the server side to determine target content associated with the video;
and acquiring the target content from the server.
In some embodiments, the method may further comprise:
in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the first region to the original size.
The original size may be a first display size of the first area covering the entire video playing interface.
Wherein the content cancellation event may refer to the first user inputting the second control information in the environment space.
As an alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
performing image recognition on a first user;
and recognizing the first user to execute a second preset gesture, canceling the display of the target content in the video playing interface, and restoring the video picture to the original size.
The second predetermined gesture may be, for example, an opposite gesture to the first predetermined gesture, or a preconfigured specific gesture, etc.
As another alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
recognizing that the first user has input the second key information by voice, canceling the display of the target content in the video playing interface, and restoring the video picture to its original size.
The second key information may be a pre-configured key word or the like, such as "cancel display of XX content" or the like.
As yet another alternative, in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
and recognizing that the second user performs a second preset facial action, a second preset limb action, or a second preset expression action, canceling the display of the target content in the video playing interface, and restoring the video picture to its original size.
In addition, the content cancellation event may also refer to the target content being displayed for longer than a second predetermined time period, and thus, in some embodiments, canceling the display of the target content in the video playback interface and restoring the video frame to the original size may include:
generating a content cancellation event after the target content display duration exceeds a second predetermined duration;
in response to the content cancellation event, canceling the display of the target content in the video playback interface and restoring the video frame to the original size.
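The duration-based cancellation can be sketched as a timer check that flips the display state and restores the first region; the 30-second limit and the state fields are illustrative assumptions:

```python
def should_cancel(display_started_at, now, limit):
    """The content cancellation event fires once the target content
    has been displayed longer than the second predetermined duration."""
    return (now - display_started_at) > limit

def tick(state, started, now, limit=30.0, original=(1080, 1920)):
    """Check the timer and, on cancellation, hide the second region
    and restore the first region to its original size."""
    if state["second_region_visible"] and should_cancel(started, now, limit):
        state["second_region_visible"] = False
        state["first_region_size"] = original
    return state

state = {"second_region_visible": True, "first_region_size": (1080, 1200)}
tick(state, started=0.0, now=45.0)
assert state["second_region_visible"] is False
assert state["first_region_size"] == (1080, 1920)
```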
Fig. 10 is a flowchart of an embodiment of an information processing method provided by the present application, where the embodiment is executed by a server, and may include the following steps:
1001: and sending the video to a video playing end so that the video playing end provides a first area in a video playing interface and displays video pictures of the video in the first area.
1002: and adjusting the display size of the first area in the video playing end based on the content triggering event.
The content trigger event may be detected by the video playing end and sent to the server end.
The implementation of the content trigger event may be described in detail in the foregoing, and will not be described in detail herein.
1003: and providing, in the video playing interface of the video playing end, a second area that does not overlap the first area.
1004: and displaying the target content in the second area.
Optionally, the target content associated with the video may also be first determined based on the content trigger event.
In a practical application, the technical solution of the embodiment of the present application may be applied to a network live broadcast scenario, as shown in fig. 11, which is a flowchart of another embodiment of an information display method provided by the present application, where the technical solution of the embodiment may be executed by a live broadcast end, and the method may include the following steps:
1101: and the live broadcast terminal displays live broadcast pictures of the live broadcast video in the live broadcast interface.
1102: in response to a content trigger event, the display size of the first region is adjusted.
As an alternative, in response to the content trigger event, adjusting the display size of the first region includes:
collecting input information of a first user;
the first control information input by the first user is recognized, and the display size of the first area is adjusted.
The specific implementation manner of the first control information may be referred to in the foregoing, and a detailed description thereof will not be repeated here.
1103: a second region is provided in the live interface that does not overlap the first region.
1104: and displaying the target content in the second area.
The viewing end can synchronously play the live video, so a first area is provided in the live interface of the viewing end to display live pictures of the live video. When the live end recognizes that the first user has input the first control information, it can send a first content trigger request to the server; the server can generate a first control instruction based on the first content trigger request and send it to the viewing end, so that the viewing end provides a second area in the live interface based on the first control instruction and displays the target content in the second area.
In addition, the target content may also refer to target content associated with the live video determined after the server receives the first content trigger request, and thus, in some embodiments, after identifying that the first user inputs the first control information, the method may further include:
sending a first content trigger request to the server, so that the server sends a first control instruction to the viewing end based on the first content trigger request, whereupon the viewing end provides a second area in the live interface and displays the target content in the second area;
And obtaining target content sent by the server.
The difference between fig. 11 and fig. 10 is that the video is specifically a live video, the video playing end is a live end, and other identical or similar operations may be detailed in fig. 10, and the detailed description thereof will not be repeated here.
As shown in fig. 12, a flowchart of another embodiment of an information display method provided by the present application, where the technical solution of the present embodiment may be executed by a viewing end, the method may include the following steps:
1201: and the viewing end displays the live broadcast picture of the live broadcast video in the live broadcast interface.
1202: in response to a content trigger event, the display size of the first region is adjusted.
As an alternative, in response to the content trigger event, adjusting the display size of the first region includes:
collecting input information of a second user;
and recognizing that a second user inputs second control information, and adjusting the display size of the first area.
The second user may refer to a viewing user who views live video corresponding to the viewing end.
The input information may refer to action information or voice information, and the second control information may refer to a second predetermined action or second key information.
That is, the content trigger event may refer to the second user entering second control information.
As another alternative, adjusting the display size of the first region in response to the content trigger event includes:
receiving a first control instruction, and adjusting the display size of the first area; the first control instruction is generated by identifying the first user to input the first control information based on the input information of the first user.
Optionally, the first control instruction may specifically be generated when the server receives a first content trigger request sent by the live broadcast end, where the first content trigger request is generated by identifying, by the live broadcast end, that the first user inputs the first control information based on input information of the first user.
Of course, the first control instruction may also be generated by the service end based on the live video identifying the first user inputting the first control information.
1203: a second region is provided in the live interface that does not overlap the first region.
1204: and displaying the target content in the second area.
The difference between fig. 12 and fig. 10 is that the video is specifically a live video, the video playing end is a viewing end, and other identical or similar operations may be detailed in fig. 10, and the detailed description thereof will not be repeated here.
Fig. 13 is a flowchart of another embodiment of an information processing method provided by the present application, where the technical scheme of the present embodiment is executed by a server, and the method may include the following steps:
1301: and sending the live video to the watching end, so that the watching end provides a first area in the live interface and displays the live picture in the first area.
1302: based on the content trigger event, a display size of the first region in the viewing end is adjusted.
Wherein the content trigger event may be the first user entering first control information.
Thus, as an alternative, adjusting the display size of the first region in the viewing end based on the content trigger event may include:
and receiving a first content trigger request, sent by the live broadcast end upon detecting that the first user has input the first control information, and adjusting the display size of the first area in the viewing end.
As another alternative, adjusting the display size of the first region in the viewing end based on the content trigger event may include:
and identifying a first user in the live video to input first control information, and adjusting the display size of a first area in the watching end.
1303: providing, in the live interface of the viewing end, a second area that does not overlap the first area;
1304: and displaying the target content in the second area.
In addition, as a further embodiment, the embodiment of the present application also provides a display interface, in which a first area is provided, and a video picture of a video is displayed in the first area;
The display interface is also used for providing a second area and displaying target content in the second area under the condition that the input information of the first user is identified and the first control information is input by the first user; the input information includes motion information or voice information.
As yet another embodiment, the embodiment of the present application further provides a display interface, where a first area is provided in the display interface, and a video frame of a video is displayed in the first area;
the display interface is also used for adjusting the display size of the first area, providing a second area that does not overlap the first area, and displaying target content in the second area.
Fig. 14 is a schematic structural diagram of an embodiment of an information display device provided by the present application, where in practical application, the information display device may be configured as a video playing end, and in a network live broadcast scenario, the video playing end may be a live broadcast end.
The apparatus may include:
a first display module 1401 for providing a first area in a video playing interface and displaying a video picture of a video in the first area;
a first acquisition module 1402, configured to acquire input information of a first user; wherein the input information includes action information or voice information;
A first recognition module 1403, configured to recognize that the first user inputs the first control information, and provide a second area in the video playing interface;
a second display module 1404 for displaying the target content in the second area.
In the network live broadcast scene, the first display module may be specifically configured to display a live broadcast picture of a live broadcast video in a live broadcast interface.
In some embodiments, the input information is action information;
the first recognition module is specifically configured to recognize that the first user performs a first predetermined action, and provide a second area in the video playing interface.
In some embodiments, the input information is voice information;
the first recognition module is specifically configured to recognize that the first user inputs the first key information by voice, and provide the second area in the video playing interface.
In some embodiments, the first identification module providing the second region in the video playback interface comprises: adjusting the display size of the first area; a second area which is not overlapped with the first area is provided in the video playing interface.
In some embodiments, the first identification module providing the second region in the video playback interface comprises: determining a central area corresponding to a preset range where the central position in the first area is located; around the central region, a second region surrounding the central region is provided.
In some embodiments, the first display module is further configured to provide a first sub-region of the second region in the video playback interface; displaying content prompt information of target content in the first subarea; the content prompt information is used for prompting the first user to input first control information.
In some embodiments, the first display module displaying the content hint information of the target content in the first sub-area includes: first content in the target content is displayed in the first sub-area.
In some embodiments, the first identification module providing the second region in the video playback interface comprises: adjusting the display size of the first area; and in the first area adjusting process, the second area is moved to an uncovered area of the first area in the video playing interface.
In some embodiments, the first recognition module adjusting the display size of the first region includes: adjusting the first area from the first display size to the second display size at a predetermined adjustment speed or a predetermined adjustment acceleration;
and the first recognition module moving the second region during the adjustment of the first area includes: moving the second region, at the predetermined adjustment speed or predetermined adjustment acceleration, into the region of the video playing interface that the first area no longer covers.
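The speed- or acceleration-driven resize can be sketched as a per-frame computation. The patent specifies no formula, so the constant-speed/constant-acceleration kinematics below, the pixel units, and the function names are illustrative assumptions only.

```python
# Hypothetical sketch: shrink the first area from a first display size w0
# toward a second display size w1 at a predetermined speed (px/s) or
# acceleration (px/s^2), and place the second area in the uncovered space.

def resize_first_area(w0: float, w1: float, t: float,
                      speed: float = 0.0, accel: float = 0.0) -> float:
    """Width of the first area t seconds into the adjustment, clamped at w1."""
    travelled = speed * t + 0.5 * accel * t * t
    return max(w1, w0 - travelled)

def second_area_left_edge(interface_width: float, first_width: float) -> float:
    """Place the second area flush against the first area's right edge,
    i.e. inside the region the first area no longer covers."""
    return first_width  # uncovered region spans [first_width, interface_width)
```

Because the second area's position is derived from the first area's current width, the two areas move at the same predetermined speed or acceleration, as the embodiment describes.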
In some embodiments, the second display module is specifically configured to display content prompt information of the target content in the second area; adjust the display sizes of the first area and the second area, respectively, in response to the content display event; and display the target content in the second area.
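The two-step behavior of the second display module (prompt first, then resize and show the target content) can be captured as a tiny state holder. The class and field names are assumptions for illustration; the patent text defines only the sequence of events, not any data model.

```python
# Hypothetical sketch of the second display module's staged display:
# state "prompt" shows the content prompt information; a content display
# event resizes both areas and switches to showing the target content.

class SecondAreaDisplay:
    def __init__(self):
        self.state = "hidden"
        self.shown = None
        self.first_size = None
        self.second_size = None

    def show_prompt(self, prompt: str):
        """Display content prompt information in the second area."""
        self.state = "prompt"
        self.shown = prompt

    def on_content_display_event(self, target_content: str,
                                 first_size: tuple, second_size: tuple):
        """Adjust both areas' display sizes, then show the target content."""
        self.first_size, self.second_size = first_size, second_size
        self.state = "content"
        self.shown = target_content
```

Any real implementation would drive actual view layout; the sketch only records the state transitions the embodiment enumerates.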
In some embodiments, the apparatus may further comprise:
the content triggering module is configured to recognize that the first user inputs the first control information and send a first content triggering request to the server, so that the server determines the target content associated with the video; and to obtain the target content sent by the server.
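The request/response exchange of the content triggering module can be sketched as a pair of functions. The JSON message shape, field names, and the in-memory content index are assumptions made for illustration; the patent does not specify any wire protocol.

```python
# Hypothetical sketch: the playing end builds a first content triggering
# request; the server resolves the target content associated with the video.
import json

def build_content_trigger_request(video_id: str, user_id: str,
                                  control_info: str) -> str:
    """First content triggering request sent to the server (assumed shape)."""
    return json.dumps({
        "type": "first_content_trigger",
        "video_id": video_id,
        "user_id": user_id,
        "control_info": control_info,
    })

def handle_content_trigger(request: str, content_index: dict) -> dict:
    """Server side: determine the target content associated with the video."""
    req = json.loads(request)
    target = content_index.get(req["video_id"], {})
    return {"type": "target_content", "content": target}
```

In practice the content index would be a database or recommendation service; a plain dict stands in for it here.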
The information display apparatus shown in fig. 14 can perform the information display method described in the embodiment shown in fig. 1; its implementation principles and technical effects are not repeated here. The specific manner in which the modules and units of the information display apparatus perform their operations has been described in detail in the method embodiments and is not detailed again here.
In one possible design, the information display apparatus of the embodiment shown in fig. 14 may be implemented as an electronic device, and in practical applications, the electronic device may be, for example, a mobile phone, a tablet computer, a personal computer, or the like.
As shown in fig. 15, the electronic device may include a storage component 1501, a display component 1502, and a processing component 1503;
the storage component 1501 stores one or more computer instructions for execution by the processing component 1503 to implement the information display method illustrated in fig. 1.
Of course, the electronic device may also include other components as needed, such as input/output interfaces and communication components. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices.
The embodiment of the application also provides a computer-readable storage medium storing a computer program that, when executed by a computer, implements the information display method of the embodiment shown in fig. 1.
Fig. 16 is a schematic structural diagram of an embodiment of an information processing apparatus according to the present application, where the apparatus may be configured as a server.
The apparatus may include:
a first sending module 1601, configured to send a live video to a viewing end, so that the viewing end provides a first area in a live interface, and displays a live screen in the first area;
a second identifying module 1602, configured to identify input information of a first user in the live video, the input information including voice information or action information; identify first control information input by the first user; and generate a first control instruction;
the second sending module 1603 is configured to send the first control instruction to the viewing end, so that the viewing end provides a second area in the live interface, and displays the target content in the second area.
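The recognition step of the server pipeline above (map the first user's voice or action input to a first control instruction) can be sketched as a single dispatch function. The key phrases, gesture label, and instruction name are illustrative assumptions; the patent leaves the recognition technique and instruction format unspecified.

```python
# Hypothetical sketch of the second identifying module's decision: on
# recognizing the first key information (voice) or the first predetermined
# action, emit a first control instruction telling the viewing end to
# provide the second area.
from typing import Optional

FIRST_KEY_PHRASES = {"show details", "open card"}   # assumed key information

def recognize_control(input_info: dict) -> Optional[dict]:
    """Map recognized voice/action input to a first control instruction."""
    if input_info.get("kind") == "voice":
        if input_info.get("text", "").lower() in FIRST_KEY_PHRASES:
            return {"instruction": "provide_second_area"}
    elif input_info.get("kind") == "action":
        if input_info.get("gesture") == "first_predetermined_action":
            return {"instruction": "provide_second_area"}
    return None
```

A production system would run speech or gesture recognition models on the live video; the function only models the mapping from their outputs to the control instruction the sending module forwards.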
The information processing apparatus shown in fig. 16 can perform the information processing method described in the embodiment shown in fig. 7a; its implementation principles and technical effects are not repeated here. The specific manner in which the modules and units of the apparatus perform their operations has been described in detail in the method embodiments and is not detailed again here.
In one possible design, the information processing apparatus of the embodiment shown in fig. 16 may be implemented as a computing device, which may include a storage component 1701 and a processing component 1702 as shown in fig. 17;
the storage component 1701 may store one or more computer instructions for execution by the processing component 1702 to implement the information processing method as illustrated in fig. 7 a.
The processing component 1702 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 1701 is configured to store various types of data to support operations on the device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Of course, the computing device may also include other components as needed, such as input/output interfaces and communication components. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices.
The computing device may be a physical device or an elastic compute host provided by a cloud computing platform; in the latter case the computing device may be a cloud server, and the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
The embodiment of the present application also provides a computer readable storage medium storing a computer program, where the computer program when executed by a computer can implement the information processing method of the embodiment shown in fig. 7 a.
Fig. 18 is a schematic structural diagram of an embodiment of an information processing apparatus according to the present application, where the apparatus may be configured as a server.
The apparatus may include:
a third sending module 1801, configured to send the live video to the viewing end, so that the viewing end provides a first area in the live interface, and displays a live image in the first area;
the receiving module 1802 is configured to receive a first content triggering request sent by a live broadcast terminal and generate a first control instruction; the first content triggering request is generated by the live broadcast terminal recognizing the acquired input information of the first user and recognizing the first control information input by the first user, the input information including voice information or action information;
and a fourth sending module 1803, configured to send the first control instruction to the viewing end, so that the viewing end provides a second area in the live interface and displays the target content in the second area.
The information processing apparatus shown in fig. 18 can perform the information processing method described in the embodiment shown in fig. 7b; its implementation principles and technical effects are not repeated here. The specific manner in which the modules and units of the apparatus perform their operations has been described in detail in the method embodiments and is not detailed again here.
In one possible design, the information processing apparatus of the embodiment shown in fig. 18 may be implemented as a computing device, which may include a storage component 1901 and a processing component 1902, as shown in fig. 19;
the storage component 1901 may store one or more computer instructions for the processing component 1902 to invoke and execute, so as to implement the information processing method illustrated in fig. 7b.
Of course, the computing device may also include other components as needed, such as input/output interfaces and communication components. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices.
The computing device may be a physical device or an elastic compute host provided by a cloud computing platform; in the latter case the computing device may be a cloud server, and the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
In addition, the embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program when executed by a computer can implement the information processing method of the embodiment shown in fig. 7 b.
Fig. 20 is a schematic structural diagram of an embodiment of an information display device according to the present application, where the device may be configured as a video playing end.
The apparatus may include:
a third display module 2001 for providing a first area in the video playing interface and displaying a video picture of the video in the first area;
an adjustment module 2002 for adjusting a display size of the first region in response to the content trigger event;
a providing module 2003 for providing, in the video playing interface, a second area that does not overlap the first area;
a fourth display module 2004 for displaying the target content in the second region.
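The cooperation of the adjustment module and the providing module above (shrink the first area, then place the second area in the space it no longer covers, so the two never overlap) can be sketched in a few lines. The side-by-side split and the shrink ratio are assumptions for illustration; the patent allows any non-overlapping arrangement.

```python
# Hypothetical sketch of the fig. 20 flow: on a content trigger event,
# shrink the first area horizontally and give the uncovered strip to the
# second area, guaranteeing the two areas do not overlap.

def layout_on_trigger(interface_w: int, interface_h: int,
                      shrink_ratio: float = 0.6):
    """Return (first_area, second_area) as (x, y, w, h) tuples."""
    first_w = int(interface_w * shrink_ratio)
    first_area = (0, 0, first_w, interface_h)
    second_area = (first_w, 0, interface_w - first_w, interface_h)
    return first_area, second_area
```

Because the second area starts exactly where the shrunken first area ends, non-overlap holds for any shrink ratio in (0, 1).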
The information display apparatus shown in fig. 20 can perform the information display method described in the embodiment shown in fig. 9; its implementation principles and technical effects are not repeated here. The specific manner in which the modules and units of the information display apparatus perform their operations has been described in detail in the method embodiments and is not detailed again here.
In one possible design, the information display apparatus of the embodiment shown in fig. 20 may be implemented as an electronic device, and in practical applications, the electronic device may be, for example, a mobile phone, a tablet computer, a personal computer, or the like.
As shown in fig. 21, the electronic device may include a storage component 2101, a display component 2102, and a processing component 2103;
the storage component 2101 stores one or more computer instructions for execution by the processing component 2103 to implement the information display method illustrated in fig. 9.
Of course, the electronic device may also include other components as needed, such as input/output interfaces and communication components. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices.
In addition, the embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a computer, implements the information display method of the embodiment shown in fig. 9.
Fig. 22 is a schematic structural diagram of an embodiment of an information processing apparatus according to the present application, where the apparatus may be configured as a server.
The apparatus may include:
a fifth sending module 2201, configured to send a video to a video playing end, so that the video playing end provides a first area in a video playing interface, and displays a video picture in the first area;
an adjustment triggering module 2202, configured to adjust, based on the content trigger event, the display size of the first area at the video playing end; provide, in the video playing interface of the video playing end, a second area that does not overlap the first area; and display the target content in the second area.
The information processing apparatus shown in fig. 22 can perform the information processing method described in the embodiment shown in fig. 10; its implementation principles and technical effects are not repeated here. The specific manner in which the modules and units of the apparatus perform their operations has been described in detail in the method embodiments and is not detailed again here.
In one possible design, the information processing apparatus of the embodiment shown in fig. 22 may be implemented as a computing device, which may include a storage component 2301 and a processing component 2302 as shown in fig. 23;
The storage component 2301 may store one or more computer instructions for execution by the processing component 2302 to implement the information processing method illustrated in fig. 10.
Of course, the computing device may also include other components as needed, such as input/output interfaces and communication components. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices.
The computing device may be a physical device or an elastic compute host provided by a cloud computing platform; in the latter case the computing device may be a cloud server, and the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
In addition, an embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program when executed by a computer can implement the information processing method of the embodiment shown in fig. 10.
The processing components referred to in the foregoing embodiments may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, a processing component may also be implemented as one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component is configured to store various types of data to support operations in the respective device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The display component may be an electroluminescent (EL) element, a liquid crystal display or a microdisplay with a similar structure, or a retinal-projection or similar laser-scanning display.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the technical solution above may be embodied, in essence or in the part contributing to the prior art, in the form of a software product. The software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (49)

1. An information display method, comprising:
providing a first area in a video playing interface, and displaying a video picture of a video in the first area;
collecting input information of a first user; the input information comprises voice information or action information;
recognizing that the first user inputs first control information, and providing a second area in the video playing interface;
displaying target content in the second area; the target content comprises one or more of related information of the first user, related information of the current playing content and related information of interaction with the first user.
2. The method of claim 1, wherein the input information is action information;
the recognizing that the first user inputs first control information and the providing of a second area in the video playing interface comprise:
recognizing that the first user performs a first predetermined action, and providing a second area in the video playing interface.
3. The method of claim 1, wherein the input information is voice information;
the recognizing that the first user inputs first control information and the providing of a second area in the video playing interface comprise:
recognizing that the first user inputs first key information by voice, and providing a second area in the video playing interface.
4. The method of claim 1, wherein providing a second region in the video playback interface comprises:
adjusting the display size of the first area;
and providing, in the video playing interface, a second area that does not overlap the first area.
5. The method of claim 1, wherein providing a second region in the video playback interface comprises:
determining a central area corresponding to a preset range where the central position in the first area is located;
a second region surrounding the central region is provided around the central region.
6. The method as recited in claim 1, further comprising:
providing a first sub-region of the second region in the video playback interface;
displaying content prompt information of the target content in the first subarea; the content prompt information is used for prompting the first user to input the first control information.
7. The method of claim 6, wherein displaying content cues for the target content in the first sub-region comprises:
And displaying first content in the target content in the first subarea.
8. The method of claim 6, wherein providing a second region in the video playback interface comprises:
adjusting the display size of the first area;
and, during the adjustment of the first region, moving the second region to the region of the video playing interface that is not covered by the first region.
9. The method of claim 8, wherein the resizing the display of the first region comprises:
adjusting the first area from a first display size to a second display size according to a predetermined adjustment speed or a predetermined adjustment acceleration;
the moving the second area to the uncovered area of the first area in the video playing interface in the first area adjustment process includes:
and, during the adjustment of the first region, moving the second region, at the predetermined adjustment speed or predetermined adjustment acceleration, to the region of the video playing interface that is not covered by the first region.
10. The method of claim 1, wherein the displaying the target content in the second area comprises:
Displaying content prompt information of target content in the second area;
respectively adjusting the display sizes of the first area and the second area in response to a content display event;
and displaying the target content in the second area.
11. The method as recited in claim 1, further comprising:
recognizing that the first user inputs first control information, and sending a first content triggering request to a server side so that the server side determines the target content associated with the video;
and acquiring the target content sent by the server.
12. An information display method, comprising:
providing a first area in a live broadcast interface, and displaying live broadcast pictures of live broadcast video in the first area;
collecting input information of a first user; the input information comprises voice information or action information;
recognizing that the first user inputs first control information, and providing a second area in the live interface;
displaying target content in the second area; the target content comprises one or more of related information of the first user, related information of the current playing content and related information of interaction with the first user.
13. The method as recited in claim 12, further comprising:
and recognizing that the first user inputs first control information, and sending a first content triggering request to a server side so that the server side sends a first control instruction to a viewing side, wherein the viewing side responds to the first control instruction by providing a second area in a live interface of the viewing side and displaying the target content in the second area.
14. An information display method, comprising:
providing a first area in a live broadcast interface, and displaying live broadcast pictures of live broadcast video in the first area;
providing a second region in the live interface in response to a first control instruction; wherein the first control instruction is generated by a server recognizing input information of a first user in the live video and recognizing that the first user inputs the first control information, the input information comprising voice information or action information;
displaying target content in the second area; the target content comprises one or more of related information of the first user, related information of the current playing content and related information of interaction with the first user.
15. An information processing method, characterized by comprising:
transmitting the live video to a viewing end, so that the viewing end provides a first area in a live interface, and displaying live pictures of the live video in the first area;
identifying input information of a first user in the live video; the input information comprises voice information or action information;
recognizing that the first user inputs first control information, and generating a first control instruction;
sending the first control instruction to the viewing end so that the viewing end provides a second area in the live broadcast interface and displays target content in the second area; the target content comprises one or more of related information of the first user, related information of the current playing content and related information of interaction with the first user.
16. An information processing method, characterized by comprising:
transmitting the live video to a viewing end, so that the viewing end provides a first area in a live interface, and displaying live pictures of the live video in the first area;
receiving a first content triggering request sent by a live broadcast terminal, and generating a first control instruction; the first content triggering request is generated by the live broadcast terminal recognizing the acquired input information of a first user and recognizing the first control information input by the first user; wherein the input information comprises voice information or action information;
sending the first control instruction to the viewing end so that the viewing end provides a second area in the live broadcast interface and displays target content in the second area; the target content comprises one or more of related information of the first user, related information of the current playing content and related information of interaction with the first user.
17. An information display method, comprising:
providing a first area in a video playing interface, and displaying a video picture of a video in the first area;
adjusting a display size of the first region in response to a content trigger event;
providing, in the video playing interface, a second area that does not overlap the first area;
displaying target content in the second area; the target content comprises the associated information of the current playing content.
18. The method as recited in claim 17, further comprising:
providing a first sub-region of the second region in the video playback interface;
and displaying content prompt information of the target content in the first subarea.
19. The method of claim 18, wherein the displaying content cues for the target content in the first sub-region comprises:
And displaying first content in the target content in the first subarea.
20. The method of claim 18, wherein providing a second area in the video playback interface that does not overlap the first area comprises:
and, during the adjustment of the first region, moving the second region to the region of the video playing interface that is not covered by the first region.
21. The method of claim 18, wherein adjusting the display size of the first region in response to a content trigger event comprises:
and after the content prompt information is displayed for a first preset time, adjusting the display size of the first area.
22. The method of claim 17, wherein the displaying the target content in the second region comprises:
displaying content prompt information of target content in the second area;
respectively adjusting the display sizes of the first area and the second area in response to a content display event;
and displaying the target content in the second area.
23. The method as recited in claim 17, further comprising:
Responding to a content trigger event, and sending a first content trigger request to a server;
and acquiring the target content associated with the video from the server.
24. The method of claim 17, wherein adjusting the display size of the first region in response to a content trigger event comprises:
collecting input information of a first user; wherein the input information comprises voice information or action information;
and identifying the first control information input by the first user, and adjusting the display size of the first area.
25. The method of claim 24, wherein the input information comprises action information;
the recognizing that the first user inputs first control information and the adjusting of the display size of the first area comprise:
recognizing that the first user performs a first predetermined action, and adjusting the display size of the first area.
26. The method of claim 24, wherein the input information comprises voice information;
the recognizing that the first user inputs first control information and the adjusting of the display size of the first area comprise:
recognizing that the first user inputs the first key information by voice, and adjusting the display size of the first area.
27. The method of claim 17, wherein the video is live video; the adjusting the display size of the first region in response to the content trigger event includes:
receiving a first control instruction sent by a server side, and adjusting the display size of the first area; wherein the first control instruction is generated by the server recognizing input information of a first user in the live video and recognizing that the first user inputs the first control information.
28. The method of claim 17, wherein the resizing the display of the first region comprises:
and reducing the first area from the first display size to the second display size.
29. The method as recited in claim 17, further comprising:
and in response to a content cancellation event, canceling the provision of the second area in the video playing interface and restoring the video picture to its original size.
30. An information processing method, characterized by comprising:
transmitting the video to a video playing end, so that the video playing end provides a first area in a video playing interface and displays video pictures in the first area;
Based on a content trigger event, adjusting the display size of the first area in the video playing end;
providing, in the video playing interface of the video playing end, a second area that does not overlap the first area;
displaying target content in the second area; the target content comprises the associated information of the current playing content.
31. An information display method, comprising:
providing a first area in a live broadcast interface, and displaying live broadcast pictures of live broadcast video in the first area;
adjusting a display size of the first region in response to a content trigger event;
providing a second region in the live interface that does not overlap the first region;
displaying target content in the second area; the target content comprises the associated information of the current playing content.
32. The method as recited in claim 31, further comprising:
and in response to the content triggering event, sending a first content triggering request to a server side so that the server side sends a first control instruction to a viewing side, wherein the viewing side responds to the first control instruction by providing a second area in a live interface of the viewing side and displaying the target content in the second area.
33. The method of claim 31, wherein the adjusting of the display size of the first area in response to a content trigger event comprises:
collecting input information of a second user, wherein the input information comprises voice information or action information;
identifying that the second user inputs second control information, and adjusting the display size of the first area.
34. The method of claim 33, wherein the input information comprises action information;
the identifying that the second user inputs second control information and adjusting the display size of the first area comprises:
identifying that the second user performs a second preset action, and adjusting the display size of the first area.
35. The method of claim 33, wherein the input information comprises voice information;
the identifying that the second user inputs second control information and adjusting the display size of the first area comprises:
identifying that the second user inputs second key information by voice, and adjusting the display size of the first area.
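A toy dispatch routine can illustrate how claims 33 to 35 distinguish the two kinds of input information. The trigger values (`raise_hand`, `show info`) and the dictionary shape are purely hypothetical; the patent does not name any concrete action or keyword.

```python
# Illustrative dispatch for claims 33-35: the collected input information is
# either action information or voice information; a second preset action or
# spoken second key information triggers the resize of the first area.
PRESET_ACTIONS = {"raise_hand"}   # stand-in for the "second preset action" (claim 34)
VOICE_KEYWORDS = {"show info"}    # stand-in for the "second key information" (claim 35)


def is_resize_trigger(input_info: dict) -> bool:
    """Return True when the input information carries second control information."""
    kind = input_info.get("type")
    if kind == "action":
        return input_info.get("action") in PRESET_ACTIONS
    if kind == "voice":
        text = input_info.get("text", "").lower()
        return any(keyword in text for keyword in VOICE_KEYWORDS)
    return False
```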
36. The method of claim 31, wherein the adjusting of the display size of the first area in response to a content trigger event comprises:
receiving a first control instruction sent by a server side, and adjusting the display size of the first area; wherein the first control instruction is generated by identifying, based on input information of a first user, that the first user inputs first control information.
37. An information processing method, characterized by comprising:
transmitting a live video to a viewing end, so that the viewing end provides a first area in a live broadcast interface and displays a live picture of the live video in the first area;
based on a content trigger event, adjusting the display size of the first area in the viewing end;
providing, in the live broadcast interface of the viewing end, a second area that is not covered by the first area;
displaying target content in the second area; wherein the target content comprises associated information of the currently playing content.
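One way to satisfy the "second area not covered by the first area" requirement that recurs in the method claims is a side-by-side split of the interface. The sketch below is an illustrative geometry only; the left/right split and the 0.5 scale are assumptions, not anything the patent mandates.

```python
# Sketch of a non-overlapping layout: after the first area is shrunk to the
# left half of the interface, the second area takes the uncovered right half.
def split_layout(view_w: int, view_h: int, scale: float = 0.5):
    """Return (x, y, w, h) tuples for the first and second areas."""
    first = (0, 0, int(view_w * scale), view_h)         # shrunk video picture
    second = (first[2], 0, view_w - first[2], view_h)   # remaining strip, no overlap
    return first, second
```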
38. A display interface, characterized in that a first area is provided, and a video picture of a video is displayed in the first area;
the display interface is further used for providing a second area and displaying target content in the second area when input information of a first user is collected and it is identified that the first user inputs first control information; wherein the input information comprises action information or voice information, and the target content comprises one or more of information related to the first user, information related to the currently playing content, and information related to interaction with the first user.
39. A display interface, characterized in that a first area is provided, and a video picture of a video is displayed in the first area;
the display interface is further used for adjusting the display size of the first area, providing a second area that is not covered by the first area, and displaying target content in the second area; wherein the target content comprises associated information of the currently playing content.
40. An information display device, comprising:
a first display module, used for providing a first area in a video playing interface and displaying a video picture of a video in the first area;
a first acquisition module, used for acquiring input information of a first user, wherein the input information comprises action information or voice information;
a first identification module, used for identifying that the first user inputs first control information, and providing a second area in the video playing interface;
a second display module, used for displaying target content in the second area; wherein the target content comprises one or more of information related to the first user, information related to the currently playing content, and information related to interaction with the first user.
41. An information processing apparatus, characterized by comprising:
a first sending module, used for sending a live video to a viewing end, so that the viewing end provides a first area in a live broadcast interface and displays a live picture in the first area;
a second identification module, used for identifying input information of a first user in the live video, wherein the input information comprises voice information or action information, and for identifying that the first user inputs first control information and generating a first control instruction;
a second sending module, used for sending the first control instruction to the viewing end, so that the viewing end provides a second area in the live broadcast interface and displays target content in the second area; wherein the target content comprises one or more of information related to the first user, information related to the currently playing content, and information related to interaction with the first user.
42. An information processing apparatus, characterized by comprising:
a third sending module, used for sending a live video to a viewing end, so that the viewing end provides a first area in a live broadcast interface and displays a live picture in the first area;
a receiving module, used for receiving a first content trigger request sent by a live broadcast end and generating a first control instruction; wherein the first content trigger request is generated by identifying input information of a first user collected by the live broadcast end and identifying that the first user inputs first control information, the input information comprising voice information or action information;
a fourth sending module, used for sending the first control instruction to the viewing end, so that the viewing end provides a second area in the live broadcast interface and displays target content in the second area; wherein the target content comprises one or more of information related to the first user, information related to the currently playing content, and information related to interaction with the first user.
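The broadcast-end/server/viewing-end relay of claim 42 amounts to a small fan-out protocol. The sketch below models it with in-process objects; the class names, the `cmd` field, and the dictionary message format are hypothetical stand-ins for whatever wire protocol a real implementation would use.

```python
# Hypothetical message flow: the live broadcast end identifies the first user's
# control input and sends a first content trigger request; the server generates
# a first control instruction and forwards it to each viewing end, which then
# shows the target content in its second area.
class Viewer:
    def __init__(self):
        self.second_area_content = None  # target content, once instructed

    def receive(self, instruction: dict) -> None:
        if instruction["cmd"] == "show_second_area":
            self.second_area_content = instruction["content"]


class Server:
    def __init__(self):
        self.viewers = []

    def register(self, viewer: Viewer) -> None:
        self.viewers.append(viewer)

    def on_content_trigger_request(self, request: dict) -> None:
        # Generate the first control instruction from the trigger request
        # and fan it out to every registered viewing end.
        instruction = {"cmd": "show_second_area", "content": request["content"]}
        for viewer in self.viewers:
            viewer.receive(instruction)
```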
43. An information display device, comprising:
a third display module, used for providing a first area in a video playing interface and displaying a video picture of a video in the first area;
an adjusting module, used for adjusting the display size of the first area in response to a content trigger event;
a providing module, used for providing, in the video playing interface, a second area that does not overlap the first area;
a fourth display module, used for displaying target content in the second area; wherein the target content comprises associated information of the currently playing content.
44. An information processing apparatus, characterized by comprising:
a fifth sending module, used for sending a video to a video playing end, so that the video playing end provides a first area in a video playing interface and displays a video picture in the first area;
an adjusting and triggering module, used for adjusting the display size of the first area in the video playing end based on a content trigger event, providing, in the video playing interface of the video playing end, a second area that is not covered by the first area, and displaying target content in the second area; wherein the target content comprises associated information of the currently playing content.
45. An electronic device, characterized by comprising a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions, and the one or more computer program instructions are invoked and executed by the processing component to implement the information display method of any one of claims 1 to 11.
46. A computing device, comprising a storage component and a processing component; the storage component stores one or more computer program instructions, and the one or more computer program instructions are invoked and executed by the processing component to implement the information processing method of claim 15.
47. A computing device, comprising a storage component and a processing component; the storage component stores one or more computer program instructions, and the one or more computer program instructions are invoked and executed by the processing component to implement the information processing method of claim 16.
48. An electronic device, characterized by comprising a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions, and the one or more computer program instructions are invoked and executed by the processing component to implement the information display method of any one of claims 17 to 29.
49. A computing device, comprising a storage component and a processing component; the storage component stores one or more computer program instructions, and the one or more computer program instructions are invoked and executed by the processing component to implement the information processing method of claim 30.
CN202010394227.4A 2020-05-11 2020-05-11 Information display method and device Active CN113301413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010394227.4A CN113301413B (en) 2020-05-11 2020-05-11 Information display method and device


Publications (2)

Publication Number Publication Date
CN113301413A CN113301413A (en) 2021-08-24
CN113301413B true CN113301413B (en) 2023-09-29

Family

ID=77318085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010394227.4A Active CN113301413B (en) 2020-05-11 2020-05-11 Information display method and device

Country Status (1)

Country Link
CN (1) CN113301413B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114168250A (en) * 2021-12-30 2022-03-11 北京字跳网络技术有限公司 Page display method and device, electronic equipment and storage medium
CN117544795A (en) * 2023-11-03 2024-02-09 书行科技(北京)有限公司 Live broadcast information display method, management method, device, equipment and medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN110139161A (en) * 2018-02-02 2019-08-16 阿里巴巴集团控股有限公司 Information processing method and device in live streaming
CN110784754A (en) * 2019-10-30 2020-02-11 北京字节跳动网络技术有限公司 Video display method and device and electronic equipment
CN111131876A (en) * 2019-12-13 2020-05-08 深圳市咨聊科技有限公司 Control method, device and terminal for live video and computer readable storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107396165B * 2016-05-16 2019-11-22 Hangzhou Hikvision Digital Technology Co., Ltd. Video playing method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230728

Address after: No. 699, Wangshang Road, Binjiang District, Hangzhou, Zhejiang

Applicant after: Alibaba (China) Network Technology Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, George Town, Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.

GR01 Patent grant