CN113301414A - Interface generation processing method and device, electronic equipment and computer storage medium - Google Patents

Interface generation processing method and device, electronic equipment and computer storage medium

Info

Publication number
CN113301414A
CN113301414A (application CN202010648544.4A)
Authority
CN
China
Prior art keywords
video
image
characteristic information
video frame
color value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010648544.4A
Other languages
Chinese (zh)
Other versions
CN113301414B (en)
Inventor
李小康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010648544.4A priority Critical patent/CN113301414B/en
Publication of CN113301414A publication Critical patent/CN113301414A/en
Application granted granted Critical
Publication of CN113301414B publication Critical patent/CN113301414B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiments of the invention provide an interface generation processing method and device, electronic equipment and a computer storage medium. The interface comprises a video area and a background area, and the method comprises the following steps: acquiring image characteristic information of a video frame image of the video area; and adjusting the display content of the background area according to the image characteristic information. With the method of the invention, the display content of the background area changes along with the video frame image of the video area.

Description

Interface generation processing method and device, electronic equipment and computer storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to an interface generation processing method and device, electronic equipment and a computer storage medium.
Background
In the prior art, when a user watches a video, the proportion of the player interface occupied by the video may vary. For example, a video played in full-screen mode fills the entire player interface, occupying 100% of it, in which case the user sees no interface elements outside the video frame image.
In some cases, however, the video frame image may not fill the entire player interface, so the player interface displays a background area. If the user can see this background area and its display content is not adapted to the video frame image, the viewing experience of the user is affected.
Disclosure of Invention
In view of the above, embodiments of the present invention provide an interface generation processing scheme to at least partially solve the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a method for generating and processing an interface, where the interface includes a video area and a background area, the method including: acquiring image characteristic information of a video frame image of a video area; and adjusting the display content of the background area according to the image characteristic information.
According to a second aspect of the embodiments of the present invention, there is provided an interface generation processing apparatus, including: an acquisition module configured to acquire image characteristic information of a video frame image of a video area; and an adjustment module configured to adjust the display content of the background area according to the image characteristic information.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device, including: a display screen configured to display the video frame image and the display content of the background area; and a processor configured to acquire image characteristic information of a video frame image of the video area and adjust the display content of the background area according to the image characteristic information.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the interface generation processing method according to the first aspect.
According to the interface generation processing scheme provided by the embodiments of the present invention, the display content of the background area outside the video area is adjusted according to the image characteristic information of the video frame image, so that the display content of the background area changes along with the video frame image displayed in the video area. The display content of the background area is thus richer and no longer monotonous, and because it changes automatically, the degree of automation of display content adjustment is improved and the labor intensity of workers is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some of the embodiments described herein, and a person skilled in the art can obtain other drawings based on these drawings.
Fig. 1a is a flowchart illustrating steps of a method for generating and processing an interface according to a first embodiment of the present invention;
fig. 1b is a schematic interface diagram of a player according to a first embodiment of the present invention;
fig. 1c is a schematic interface diagram of another player according to the first embodiment of the present invention;
FIG. 1d is a schematic diagram illustrating an interface change in a usage scenario according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for generating and processing an interface according to a second embodiment of the present invention;
fig. 3a is a flowchart illustrating steps of a method for generating and processing an interface according to a third embodiment of the present invention;
FIG. 3b is a flowchart illustrating a usage scenario according to a third embodiment of the present invention;
fig. 4 is a block diagram of an interface generation processing apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, these technical solutions are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1a, a flowchart illustrating steps of a method for generating and processing an interface according to a first embodiment of the present invention is shown.
In this embodiment, the interface generation processing method is executed by the terminal device, and in other embodiments, the method may also be executed by the server.
The interface generation processing method of the embodiment includes the following steps:
step S102: image characteristic information of a video frame image of a video area is acquired.
In this embodiment, a video is displayed through an interface, where the interface includes a video area and a background area, and the video area is used to display a video frame image in the video, and may also display other content as needed. The background area is the portion of the interface outside the video area. If the video area is in a full-screen state, namely the video area completely covers the whole interface, the background area is invisible, otherwise, if the video area is not in the full-screen state, the background area is visible. The video may be a long video, a short video, a video of an electronic album, or the like, or may be plain text information or the like displayed in an image manner.
In this embodiment, a description is given taking as an example a user watching a video through a web player or an application player on a terminal device. As shown in Fig. 1b, A is the display screen of the terminal device, B is the video area occupied by the video frame image, and the area outside B is the background area. Besides the video frame image and the display content of the background area, other content may also be displayed in the interface as required, such as advertisement content, knowledge content related to the video, recommended content related to the video, or information about persons or works related to the video.
During video playback, the video frame image displayed in the video area changes. In order to keep the light-dark contrast between the display content of the background area (such as an image or a color) and the video frame image displayed in the video area small at all times, the display content of the background area needs to be automatically and dynamically adjusted according to the displayed video frame image.
In this embodiment, the light-dark contrast may be determined from the respective contrast values of the display content of the background area and of the video frame image, or from their respective contrast, saturation and brightness values, which is not limited in this embodiment.
In order to automatically and dynamically adjust the display content of the background area, image characteristic information of the displayed video frame image needs to be acquired, for example at least one of the following: a background partial image of the video frame image, a foreground partial image of the video frame image, and color value information of the video frame image. The display content of the background area can then be adjusted in the subsequent steps based on this image characteristic information, thereby reducing the light-dark contrast.
The video frame image may be the video frame image presented in the video area at the current time, for example the video frame image presented at B in Fig. 1b or at B in Fig. 1c. The background partial image of the video frame image may be the portion other than the person in the foreground, and correspondingly the foreground partial image may be the person portion of the foreground. The color value information of the video frame image may be obtained by extracting color values from the entire video frame image in any suitable manner in the prior art, or by extracting color values from the background partial image of the video frame image, or the like.
In the present embodiment, the color value information includes at least one of rgb color value information, rgba color value information, hsl color value information, hex color value information, and argb color value information.
For example, for a video frame image played in a web player based on HTML5, the color value information thereof may be at least one of rgb color value information, rgba color value information, hsl color value information, and hex color value information recognizable by the web player of HTML 5.
For another example, the color value information may be argb color value information that can be recognized by the application player for the video frame image played by the application player.
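As an illustration only (the embodiments do not prescribe a specific extraction method), the following TypeScript sketch shows one way a web player could derive rgb color value information from the currently displayed video frame; the canvas-based sampling and the name extractAverageRgb are assumptions, not part of the disclosure:

// Minimal sketch (assumption): sample the current frame of an HTML5 <video>
// into a small canvas and average its pixels to obtain rgb color value
// information that a web player can later apply to the background area.
function extractAverageRgb(video: HTMLVideoElement, sampleSize = 32): string {
  const canvas = document.createElement("canvas");
  canvas.width = sampleSize;
  canvas.height = sampleSize;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "rgb(0, 0, 0)"; // canvas unsupported, fall back to black
  // Downscale the frame; averaging a 32x32 sample keeps the cost low.
  ctx.drawImage(video, 0, 0, sampleSize, sampleSize);
  const { data } = ctx.getImageData(0, 0, sampleSize, sampleSize);
  let r = 0, g = 0, b = 0;
  const pixelCount = data.length / 4;
  for (let i = 0; i < data.length; i += 4) {
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
  }
  r = Math.round(r / pixelCount);
  g = Math.round(g / pixelCount);
  b = Math.round(b / pixelCount);
  return `rgb(${r}, ${g}, ${b})`; // could equally be formatted as hex, hsl, etc.
}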
Step S104: and adjusting the display content of the background area according to the image characteristic information.
For different image characteristic information, the display content of the background area can be configured to be corresponding content, so that the light and shade contrast of the background area and the video area is smaller.
For example, if the image characteristic information includes a background partial image of the video frame image, the display content of the background area may be adjusted to the background partial image, or to an image obtained by applying processing such as soft-light blending and color mixing to the background partial image, so that the light-dark contrast between the display content of the background area of the player interface and the video frame image is less than or equal to a first light-dark threshold. The background area and the video area are thus more consistent, and visual fatigue caused by long viewing can be avoided.
For another example, if the image characteristic information is color value information of the video frame image, the color value of the display content of the background area may be adjusted according to the color value information, so that it is the same as or similar to the color value of the background partial image of the video frame image, ensuring that the light-dark contrast between the background area and the video frame image is less than or equal to the first light-dark threshold. The first light-dark threshold may be determined as needed, which is not limited in this embodiment.
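Since the embodiments do not fix how the light-dark contrast is computed, the sketch below uses the WCAG-style contrast ratio between two rgb colors purely as one plausible measure; the helper names and the way the threshold is applied are assumptions:

// Sketch under an assumption: the light-dark contrast is measured here as the
// WCAG-style contrast ratio between the background color and the dominant
// video-frame color, and compared against the first light-dark threshold.
type Rgb = { r: number; g: number; b: number };

function relativeLuminance({ r, g, b }: Rgb): number {
  const channel = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Returns true when the contrast does not exceed the first light-dark threshold.
function contrastWithinThreshold(bg: Rgb, frame: Rgb, firstThreshold: number): boolean {
  const l1 = relativeLuminance(bg);
  const l2 = relativeLuminance(frame);
  const ratio = (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
  return ratio <= firstThreshold;
}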
Of course, in other embodiments, the display content of the background area may be adjusted in other suitable manners, which is not limited in this embodiment.
The following describes the implementation process of the method of this embodiment in detail with reference to a specific usage scenario as follows:
in this usage scenario, the user watches the video through a web player on the terminal device. Since the video frame image does not fill the entire interface (which may occupy the full display screen), the background area of the player interface is visible to the user. In the prior art this background area is usually a solid color, such as white or black, or the developer may preset some other colors or images as its display content. In that case, however, the display content of the background area is monotonous, and if its contrast with the displayed video frame image is too large, the interface looks poor and the user is likely to suffer visual fatigue and possible harm to eyesight.
To solve this problem, in the present usage scenario, the currently presented video frame image (denoted as image frame 1 for convenience of description) is processed in real time, for example, to acquire its color value information. The color value of the content shown in the background area is then adjusted according to this color value information, for example set to the color value indicated by the image characteristic information; the adjusted player interface is shown as interface 1 in Fig. 1c.
After image frame 1 is displayed, the video frame image currently displayed in the video area of the player interface changes to image frame 2. Similarly, image characteristic information, such as color value information, of image frame 2 is acquired, and the display content of the background area is adjusted according to it so that its color value matches that of the background partial image of image frame 2. In this way, the display content of the background area of the player interface is adjusted according to the image characteristic information of the displayed video frame image; the adjusted player interface is shown as interface 2 in Fig. 1c.
Through this embodiment, the display content of the background area outside the video area is adjusted according to the image characteristic information of the video frame image, so that the display content of the background area changes along with the video frame image displayed in the video area. The display content of the background area is thus richer and no longer monotonous, and because it changes automatically, the degree of automation of display content adjustment is improved and the labor intensity of workers is reduced.
Example two
Referring to fig. 2, a flowchart illustrating steps of a method for generating and processing an interface according to a second embodiment of the present invention is shown.
In the present embodiment, the method is described as an example performed by a terminal device. The interface generation processing method includes steps S102 to S104.
In order to automatically adjust the display content of the background area during video playback and reduce the contrast between the display content of the background area and the video frame image of the video area, step S102 may be implemented as: periodically acquiring image characteristic information of the video frame image of the video area according to a preset acquisition period.
The acquisition period may be determined as required. For example, since the video is a series of video frame images with a temporal order, the acquisition period may be to acquire one video frame image per set number of video frame images (the set number may be determined as required, such as 1, 3, etc.), or the acquisition period may be to acquire every video frame image.
The durations of two adjacent acquisition periods may be the same or different. For example, the duration of the acquisition period may be fixed, as in the case of one acquisition per set number of video frame images described above. Alternatively, the duration of the acquisition period may be non-fixed; for example, when the contrast between the displayed video frame image and the previous video frame image is greater than or equal to a second light-dark threshold (which may be determined as required), it is determined that the display content of the background area needs to be replaced and the image characteristic information is acquired at that time, in which case the duration of the acquisition period is not fixed.
In each acquisition period, the displayed video frame image may be processed in real time to acquire the image characteristic information. Alternatively, the video frame images may be processed in advance and the obtained image characteristic information stored, so that when the image characteristic information of a video frame image is needed, it is read directly from the storage space; this embodiment does not limit which manner is used.
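A minimal sketch of the fixed-period case, assuming a browser environment and reusing the illustrative extractAverageRgb helper sketched earlier; the period value and all names are assumptions:

// Minimal sketch (assumed names): acquire image characteristic information on
// a fixed acquisition period and adjust the background area each time.
function startPeriodicAdjustment(
  video: HTMLVideoElement,
  background: HTMLElement,
  periodMs = 500, // acquisition period; could instead be every N frames
): () => void {
  const timer = window.setInterval(() => {
    if (video.paused || video.ended) return; // nothing new to sample
    const color = extractAverageRgb(video);  // illustrative helper from above
    background.style.backgroundColor = color;
  }, periodMs);
  return () => window.clearInterval(timer); // call the returned function to stop
}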
Optionally, in order to meet the personalized requirements of different users and adjust the display content of the background area according to a user's needs, the image characteristic information of the video frame image of the video area may be acquired as follows: according to the validated adjustment style option, acquiring the image characteristic information corresponding to the validated adjustment style option from the video frame image of the video area.
An adjustment switch option and adjustment style options are configured in the settings interface of the web player or application player of the terminal device, and the user can control whether the display content is adjusted by operating the adjustment switch option, which gives the user more choices. For example, when using the web player or application player, the user may set the adjustment switch option to the on state when the display content of the background area should be adjusted, or to the off state when it should not.
The adjustment style options correspond to different processing operations on the display content and thus form different display styles. The user can select an adjustment style option according to his or her own needs, and the selected option is taken as the validated adjustment style option.
The adjustment style option may be configured as needed, for example, it may be a blurring style, a nostalgic style, a black and white style, and the like, which is not limited in this embodiment.
Thus, while the video is played through the web player or application player, if the adjustment switch option is in the on state, the image characteristic information corresponding to the validated adjustment style option is acquired when image characteristic information is obtained. For example, if the validated adjustment style option is the blurring style, the background partial image may be obtained from the video frame image.
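Purely for illustration, an assumed mapping from a validated adjustment style option to the image characteristic content that would need to be acquired might look as follows; the option names are hypothetical, not values defined by the embodiments:

// Sketch only: illustrative mapping from a validated adjustment style option
// to the content of the image characteristic information to acquire.
type AdjustmentStyle = "blur" | "nostalgic" | "blackAndWhite" | "solidColor";

function featureContentFor(style: AdjustmentStyle): Array<"backgroundImage" | "colorValue"> {
  switch (style) {
    case "blur":
    case "nostalgic":
    case "blackAndWhite":
      // image-based styles need the background partial image to process
      return ["backgroundImage"];
    case "solidColor":
      // a solid-color style only needs color value information
      return ["colorValue"];
    default:
      return ["colorValue"]; // conservative fallback
  }
}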
Corresponding to step S102, step S104 may be implemented as: and periodically adjusting the display content of the background area according to the periodically acquired image characteristic information.
In each acquisition period, after the image characteristic information of the displayed video frame image is acquired, the display content of the background area is adjusted according to it. The display content of the background area is thus automatically adjusted at intervals and becomes richer, and the light-dark contrast between the display content of the background area and the displayed video frame image remains less than or equal to the first light-dark threshold, which avoids visual fatigue and protects the user's health. In addition, the display content does not need to be preset manually in this manner, which reduces labor intensity.
Through this embodiment, the image characteristic information can be acquired periodically and the display content of the background area adjusted according to the acquired information, so that the display content of the background area automatically changes along with the played video frame images. This solves the problem of a monotonous, tedious background area and removes the need to preset the display content of the background area manually, reducing labor intensity.
EXAMPLE III
Referring to fig. 3a, a flowchart illustrating steps of a method for generating and processing an interface according to a third embodiment of the present invention is shown.
In this embodiment, the interface generation processing method includes the foregoing steps S102 to S104, which may be implemented by the first or second embodiment.
In order to improve adaptability, so that the display content of the background area can be adjusted well for different players, network delays and the like, step S102 includes sub-steps S1021 to S1022.
Substep S1021: determining an acquisition policy for the image characteristic information according to at least one of: available hardware computing resources of the playing device that plays the video, whether the video is live content, and network delay.
Taking the example that the user plays the video through the terminal device, the available hardware computing resources include available memory, available CPU, available GPU, and the like of the terminal device.
In this embodiment, different acquisition policies may be determined for different available hardware computing resources, different playing environments, different network delays and the like, so that the determined acquisition policy is well adapted to the current state of the playing device, thereby ensuring smooth video playback and preventing the playing device from stuttering or overheating.
In a specific implementation, the acquisition policy includes two sub-policies, a content policy and a mode policy.
The content policy indicates the content to be included in the acquired image characteristic information, such as only the background partial image, only the color value information, only the foreground partial image, or at least two of the background partial image, the foreground partial image and the color value information.
The mode policy indicates how the image characteristic information is acquired, such as processing the displayed video frame image locally in real time to obtain it, reading image characteristic information obtained by processing the video frame image in advance from a storage space, or having a server process the video frame image in real time and acquiring the resulting image characteristic information from the server.
In one embodiment, the sub-step S1021 may separately determine two sub-policies, in which case the sub-step S1021 includes:
step I: and determining that the image characteristic information indicated to be acquired in the acquisition strategy is at least two of a background partial image, a foreground partial image and color value information, or the background partial image, or the foreground partial image, or the color value information according to available hardware computing resources of a playing device for playing the video, a preset second resource threshold value and a preset third resource threshold value.
The second resource threshold and the third resource threshold may be determined as desired, e.g., the second resource threshold is greater than the third resource threshold.
Taking the second resource threshold as 70% and the third resource threshold as 50% as an example, if the currently available hardware computing resources are greater than or equal to the second resource threshold, indicating that ample hardware computing resources are available, it is determined that the content policy in the acquisition policy indicates that the image characteristic information is at least two of the background partial image, the foreground partial image and the color value information.
Alternatively, if the currently available hardware computing resources are less than the second resource threshold but greater than or equal to the third resource threshold, indicating that the available hardware computing resources are generally sufficient, it is determined that the content policy in the acquisition policy indicates that the image characteristic information is a background partial image or a foreground partial image.
Because obtaining the background partial image or the foreground partial image occupies more hardware computing resources, the image characteristic information includes such an image only when hardware computing resources are sufficient, so that the subsequent display content of the background area is richer than a mere change of a single color.
In addition, the background partial image or the foreground partial image may be blurred, sharpened, cropped and so on as needed. If the display screen of the playing device has a resolution greater than or equal to a first resolution, the policy may indicate obtaining the background partial image or the foreground partial image in png format, so that more image detail is retained. If the display screen has a resolution smaller than the first resolution, the policy may indicate obtaining the image in jpg format, discarding some image detail and thereby reducing the occupation of hardware computing resources.
Alternatively, if the currently available hardware computing resources are less than the third resource threshold, indicating that the available hardware computing resources are scarce, it is determined that the content policy in the acquisition policy indicates that the image characteristic information is color value information.
Since color value information occupies fewer hardware computing resources than the background partial image or the foreground partial image, only the color value information may be acquired when hardware computing resources are scarce.
Different formats of color value information may be employed for different types of players. For example, for a web player, the color value information may be rgb color value information, rgba color value information, hsl color value information, hex color value information, or the like. For an application player, the color value information may be argb color value information or the like.
Step II: and determining to locally process the displayed video frame image in real time to acquire the image characteristic information or acquiring the image characteristic information from a server side according to whether the video is live content, network delay and a preset delay threshold.
If the video is live content, the displayed video frame images are acquired in real time and the latency that must be guaranteed is relatively small, so the displayed video frame image needs to be processed in real time. If the network delay is also greater than or equal to the preset delay threshold, the network conditions are poor, so the video frame image needs to be processed locally; the mode policy in the acquisition policy is therefore determined to indicate processing the video frame image locally in real time to acquire the image characteristic information.
Alternatively, if the video is live content and the network delay is less than the delay threshold, the mode policy may be determined to indicate that the server processes the video frame image in real time, and the image characteristic information obtained after processing is acquired from the server.
Or, if the video is recorded content, the video may be processed in real time or in advance through a server or locally according to needs, which is not limited in this embodiment.
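The policy decision just described could be sketched as follows; the threshold values (70%, 50%, a 200 ms delay threshold) are taken from the examples above only for illustration, and all identifiers are assumptions rather than an authoritative implementation of the policy center:

// Sketch of the decision logic described above, with assumed names and
// illustrative thresholds; not the authoritative policy center.
interface PlaybackState {
  availableResources: number; // fraction of hardware resources available, 0..1
  isLive: boolean;
  networkDelayMs: number;
}

interface AcquisitionPolicy {
  content: "imageAndColor" | "imageOnly" | "colorOnly"; // content policy
  mode: "localRealtime" | "fromServer";                 // mode policy
}

function decideAcquisitionPolicy(
  state: PlaybackState,
  secondThreshold = 0.7, // e.g. 70%
  thirdThreshold = 0.5,  // e.g. 50%
  delayThresholdMs = 200,
): AcquisitionPolicy {
  const content =
    state.availableResources >= secondThreshold ? "imageAndColor"
    : state.availableResources >= thirdThreshold ? "imageOnly"
    : "colorOnly";
  // Live content with high network delay must be processed locally in real time;
  // live content on a fast network may fetch the features from the server instead.
  const mode =
    state.isLive && state.networkDelayMs >= delayThresholdMs
      ? "localRealtime"
      : state.isLive
        ? "fromServer"
        : "localRealtime"; // recorded content: local processing chosen here
  return { content, mode };
}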
Substep S1022: and acquiring the image characteristic information of the video frame image of the video area according to the determined acquisition strategy.
In a specific implementation, the acquisition policy indicates that the video frame image is locally processed in real time to obtain image characteristic information, and the image characteristic information includes at least one of a background partial image, a foreground partial image, and color value information.
When the background partial image or the foreground partial image is obtained, the displayed video frame image can be subjected to foreground and background segmentation, so that the independent foreground partial image and the background partial image are segmented.
In one case, the background partial image and/or the foreground partial image may be directly used as part of the image feature information. In another case, the background partial image and/or the foreground partial image may be processed appropriately, such as blurring or filling up blank portions, and the processed background partial image and/or foreground partial image may be used as a part of the image feature information.
When color value information is acquired, the color value information can be extracted from the background partial image or the foreground partial image, or the color value information can be directly extracted from the complete video frame image. And then the extracted color value information is taken as a part of the image characteristic information.
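The embodiments leave the segmentation and extraction methods open. As a crude illustrative shortcut (an assumption, not the disclosed method), sampling only the border pixels of a downscaled frame approximates the color value information of the background portion, since the foreground person usually occupies the center:

// Crude illustration only: average just the border pixels of a downscaled
// frame as a cheap approximation of the background portion's color value.
function extractBorderRgb(video: HTMLVideoElement, size = 32): string {
  const canvas = document.createElement("canvas");
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "rgb(0, 0, 0)";
  ctx.drawImage(video, 0, 0, size, size);
  const { data } = ctx.getImageData(0, 0, size, size);
  let r = 0, g = 0, b = 0, n = 0;
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const onBorder = x === 0 || y === 0 || x === size - 1 || y === size - 1;
      if (!onBorder) continue; // the center is more likely to be foreground
      const i = (y * size + x) * 4;
      r += data[i]; g += data[i + 1]; b += data[i + 2]; n++;
    }
  }
  return `rgb(${Math.round(r / n)}, ${Math.round(g / n)}, ${Math.round(b / n)})`;
}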
Of course, in other embodiments, the image feature information may be obtained in other suitable manners as needed, and this embodiment is not limited to this.
After the image feature information is obtained, the display content of the background area can be adjusted according to the state of the playing device and the content contained in the image feature information.
For example, step S104 includes sub-step S1041 or sub-step S1042.
Substep S1041: and if the available hardware computing resources of the playing equipment for playing the video are greater than or equal to a first resource threshold value, adjusting the background area to display the background partial image in the image characteristic information.
Because rendering an image consumes more hardware computing resources, the display content of the background area is adjusted to the background partial image or the foreground partial image only when the available hardware computing resources are greater than or equal to the first resource threshold. This ensures a richer display effect while preventing the playing device from stuttering due to excessive occupation of hardware computing resources.
The first resource threshold may be determined as needed, which is not limited in this embodiment, for example, the first resource threshold may be 20%.
Substep S1042: and if the available hardware computing resources of the playing equipment for playing the video are smaller than the first resource threshold value, adjusting the color value of the display content in the background area according to the color value information of the image characteristic information.
When the available hardware computing resources are less than the first resource threshold, hardware computing resources are relatively scarce. To save resources, the color value of the display content of the background area is adjusted according to the color value information in the image characteristic information, so that the color of the display content of the background area closely matches the color of the background partial image or foreground partial image of the video frame image in the video area, and excessive light-dark contrast that would cause visual fatigue is avoided.
It should be noted that, besides setting the display content of the background area through sub-step S1041 or S1042 above, if the image characteristic information contains only the background partial image or the foreground partial image, the display content of the background area may be adjusted directly according to that image; alternatively, if the image characteristic information contains only color value information, the display content of the background area may be set directly according to the color value information.
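A small sketch of the resource-based choice between sub-steps S1041 and S1042, with an assumed 20% first resource threshold and hypothetical field names:

// Sketch (assumed names): pick the background adjustment according to the
// first resource threshold, as in sub-steps S1041/S1042 above.
interface ImageFeatureInfo {
  backgroundImageUrl?: string; // background partial image, if acquired
  colorValue?: string;         // color value information, if acquired
}

function applyBackground(
  background: HTMLElement,
  info: ImageFeatureInfo,
  availableResources: number,
  firstThreshold = 0.2, // e.g. 20%
): void {
  if (info.backgroundImageUrl && availableResources >= firstThreshold) {
    // enough resources: show the background partial image in the background area
    background.style.backgroundImage = `url(${info.backgroundImageUrl})`;
    background.style.backgroundSize = "cover";
  } else if (info.colorValue) {
    // scarce resources (or no image acquired): fall back to a color adjustment
    background.style.backgroundImage = "none";
    background.style.backgroundColor = info.colorValue;
  }
}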
The following describes an implementation process of the method with reference to a specific usage scenario:
as shown in fig. 3b, it shows the adjustment process of the presentation content of the primary background area of the interface generation processing method in the present usage scenario.
In the present usage scenario, in the process of playing a video by a user through a playing device (such as a mobile phone):
process A: whether the video area is in a full screen state is detected.
It should be noted that the full-screen state here does not necessarily mean that the web player or application player itself is full screen, but whether the displayed video frame image occupies the entire display screen. For example, if the web player is full screen but the video frame image displayed in it does not occupy the whole display screen, the video area is still determined not to be in the full-screen state.
If the video area is in the full-screen state, no processing is performed. Otherwise, if it is not in the full-screen state, the background area is visible to the user, so its display content needs to be set and process B is executed.
Process B: if the video frame image is not full screen, enter the policy center and make a decision according to the available hardware computing resources of the playing device, whether the video is live content, the network delay, and the like.
In the policy center, available hardware computing resources of the playing device, whether the video is live content, network delay, whether the player is a web player or an application player, and the like are analyzed, so as to determine an acquisition policy.
For example, if the available memory of the available hardware computing resources of the playback device is greater than or equal to the second resource threshold, the player is a web player, and the resolution of the display screen is greater than the first resolution, the content policy indicates that the image information to be acquired includes a background partial image and color value information, the background partial image is in the png format, and the color value information may be at least one of rgb color value information, rgba color value information, hex color value information, and hsl color value information.
Meanwhile, if the video is live content and the network delay is greater than the delay threshold, the mode policy in the acquisition policy indicates processing the displayed video frame image locally in real time.
Of course, in other usage scenarios, the background partial image may be an image in other suitable format, such as GIF.
Process C: acquire image information from the video frame image according to the acquisition policy.
The background portion image and color value information are acquired from the video frame image by any suitable means according to an acquisition strategy. The background partial image may be referred to as a frame image.
Process D: adjust the display content of the background area according to the image characteristic information.
For a web player, the video area is a playback control drawn on the web page. The playback control and the web page do not belong to the same layer, and the playback control may occlude part of the web page; therefore, when setting the display content of the background area, the background of the entire web page may be set.
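For the web-player case just described, a minimal sketch might set the background of the whole page; treating document.body as the "entire web page" is an assumption for illustration:

// Minimal sketch for the web-player case: because the playback control sits
// on a different layer from the page, the background of the whole page
// (document.body here, as an assumption) is set rather than a single element.
function applyWebPageBackground(colorValue: string, frameImageUrl?: string): void {
  if (frameImageUrl) {
    document.body.style.backgroundImage = `url(${frameImageUrl})`;
    document.body.style.backgroundSize = "cover";
    document.body.style.backgroundPosition = "center";
  } else {
    document.body.style.backgroundImage = "none";
  }
  // the color value is kept as a fallback beneath the image
  document.body.style.backgroundColor = colorValue;
}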
For the application player, the background area may be the area outside the video area, that is, the area not used for displaying the video frame image, and its display content may be adjusted directly during setting.
If the image characteristic information only includes the background partial image or the color value information, the background partial image contained in it is displayed in the background area, or the color value of the display content of the background area is set to the color value corresponding to the color value information.
If the image characteristic information includes both the background partial image and the color value information, it can be determined whether the available hardware computing resources of the playing device are greater than or equal to the first resource threshold. If so, the display content of the background area is adjusted according to the background partial image; otherwise, if they are less than the first resource threshold, the display content of the background area is adjusted according to the color value information.
Through this adjustment, the effect presented is that the display content of the background area outside the video area changes along with the video frame image displayed in the video area. Because the atmosphere background formed by the display content of the background area is displayed synchronously with, and transitions smoothly from, the video frame image displayed in the video area, the light-dark contrast between them is reduced, providing an immersive viewing experience that is less likely to cause visual fatigue. In addition, intelligently setting the display content of the background area based on the displayed video greatly reduces the workload of staff at video websites and video applications, improves operation and maintenance efficiency, and avoids the problem, which manual configuration and preset themes may cause, of an atmosphere background unrelated to the played video, so that the display content of the background area responds to changes in the displayed video frame images.
Through this embodiment, the display content of the background area outside the video area is adjusted according to the image characteristic information of the video frame image, so that the display content of the background area changes along with the video frame image displayed in the video area. The display content of the background area is thus richer and no longer monotonous, and because it changes automatically, the degree of automation of display content adjustment is improved and the labor intensity of workers is reduced.
Example four
Referring to fig. 4, a block diagram of an interface generation processing apparatus according to a fourth embodiment of the present invention is shown.
In this embodiment, the interface generation processing device includes: an obtaining module 402, configured to obtain image feature information of a video frame image of a video region; an adjusting module 404, configured to adjust the display content of the background area according to the image feature information.
Optionally, the obtaining module 402 is configured to periodically collect image feature information of a video frame image of a video region according to a preset collection period; the adjusting module 404 is configured to periodically adjust the display content of the background area according to the periodically acquired image feature information.
Optionally, the image feature information includes at least one of: a background partial image of the video frame image, a foreground partial image of the video frame image, and color value information of the video frame image.
Optionally, the adjusting module 404 includes: a background map adjusting module 4041, configured to adjust the background area to display a background partial image in the image feature information if an available hardware computing resource of a playing device that plays the video is greater than or equal to a first resource threshold; or the color value adjusting module 4042 is configured to adjust the color value of the display content in the background region according to the color value information of the image feature information if the available hardware computing resource of the playback device that plays the video is smaller than the first resource threshold.
Optionally, the color value information comprises at least one of rgb color value information, rgba color value information, hsl color value information, hex color value information, and argb color value information.
Optionally, the obtaining module 402 includes: a policy determining module 4021, configured to determine an acquisition policy of the image feature information according to at least one of: available hardware computing resources of a playing device that plays the video, whether the video is live content, and network delay; and an information obtaining module 4022, configured to obtain the image feature information of the video frame image of the video region according to the determined acquisition policy.
Optionally, the policy determining module 4021 is configured to determine, according to available hardware computing resources of a playback device that plays the video, a preset second resource threshold and a preset third resource threshold, that the acquired image feature information indicated in the acquisition policy is at least two of a background partial image, a foreground partial image and color value information, or is the background partial image, or is the foreground partial image, or is the color value information.
Optionally, the policy determining module 4021 is configured to determine to locally process the displayed video frame image in real time to obtain the image feature information according to whether the video is live content, network delay, and a preset delay threshold, or to obtain the image feature information from a server.
Optionally, the obtaining module 402 is configured to obtain, according to the validated adjustment style option, image feature information corresponding to the validated adjustment style option from the video frame image of the video area.
The interface generation processing apparatus of this embodiment is used to implement the corresponding interface generation processing methods in the foregoing method embodiments and has the beneficial effects of those method embodiments, which are not repeated here. In addition, for the functional implementation of each module in the interface generation processing apparatus of this embodiment, reference may be made to the description of the corresponding part in the foregoing method embodiments, which is likewise not repeated here.
EXAMPLE five
Referring to fig. 5, a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention is shown, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 5, the electronic device may include: a processor (processor)502, a Communications Interface (Communications Interface)504, a memory (memory)506, and a Communications bus 508 and a display screen.
Wherein:
the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically execute relevant steps in the above-described interface generation processing method embodiment.
The display screen is used for displaying the video image and the content displayed in the background area.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 506 for storing a program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The interface includes a video area and a background area, and the program 510 may be specifically configured to cause the processor 502 to perform the following operations: acquiring image characteristic information of a video frame image of a video area; and adjusting the display content of the background area according to the image characteristic information.
In an optional implementation, the program 510 is further configured to enable the processor 502 to periodically acquire the image feature information of the video frame image of the video region according to a preset acquisition period when acquiring the image feature information of the video frame image of the video region; the adjusting the display content of the background area according to the image feature information includes: and periodically adjusting the display content of the background area according to the periodically acquired image characteristic information.
In an alternative embodiment, the image characteristic information includes at least one of: a background partial image of the video frame image, a foreground partial image of the video frame image, and color value information of the video frame image.
In an optional implementation, the program 510 is further configured to, when the content of the background area is adjusted according to the image feature information, if an available hardware computing resource of a playback device that plays the video is greater than or equal to a first resource threshold, adjust the background area to display a background partial image in the image feature information; or if the available hardware computing resources of the playing device playing the video are smaller than the first resource threshold, adjusting the color value of the display content in the background area according to the color value information of the image characteristic information.
In an alternative embodiment, the color value information comprises at least one of rgb color value information, rgba color value information, hsl color value information, hex color value information and argb color value information.
In an alternative embodiment, the program 510 is further configured to, when acquiring image feature information of a video frame image in a video area, cause the processor 502 to determine an acquisition policy of the image feature information according to at least one of available hardware computing resources of a playback device playing the video, whether the video is live content, and network latency; and acquiring the image characteristic information of the video frame image of the video area according to the determined acquisition strategy.
In an optional implementation, the program 510 is further configured to cause the processor 502, when determining the acquisition policy of the image feature information according to available hardware computing resources of the playback device that plays the video, whether the video is live content, and network latency, to determine, according to the available hardware computing resources of the playback device, a preset second resource threshold and a preset third resource threshold, that the acquisition policy indicates that the image feature information to be acquired is at least two of a background partial image, a foreground partial image and color value information, or is the background partial image, or the foreground partial image, or the color value information.
In an optional implementation manner, the program 510 is further configured to enable the processor 502, when determining the obtaining policy of the image feature information according to available hardware computing resources of a playing device that plays the video, whether the video is live content or not, and a network delay, determine to locally process the displayed video frame image in real time to obtain the image feature information according to whether the video is live content or not, the network delay, and a preset delay threshold, or obtain the image feature information from a server.
In an alternative embodiment, the program 510 is further configured to, when acquiring the image characteristic information of the video frame image of the video area, acquire, according to the validated adjustment style option, the image characteristic information corresponding to the validated adjustment style option from the video frame image of the video area.
For specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the above embodiment of the interface generation processing method, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded through a network and stored in a local recording medium, so that the method described herein can be rendered via such software stored on the recording medium and executed using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It is understood that the computer, processor, microprocessor controller or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor or hardware, implements the interface generation processing methods described herein. Further, when a general-purpose computer accesses code for implementing the interface generation processing methods shown here, execution of the code converts the general-purpose computer into a special-purpose computer for executing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are intended only to illustrate the embodiments of the present invention, not to limit them. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so all equivalent technical solutions also fall within the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention shall be defined by the claims.

Claims (13)

1. A method for generating an interface, the interface including a video area and a background area, the method comprising:
acquiring image characteristic information of a video frame image of a video area;
and adjusting the display content of the background area according to the image characteristic information.
2. The method according to claim 1, wherein the acquiring image characteristic information of the video frame image of the video area comprises:
periodically acquiring image characteristic information of a video frame image of a video area according to a preset acquisition period;
the adjusting the display content of the background area according to the image characteristic information comprises: periodically adjusting the display content of the background area according to the periodically acquired image characteristic information.
3. The method of claim 1 or 2, wherein the image characteristic information comprises at least one of: a background partial image in the video frame image, a foreground partial image in the video frame image, and color value information of the video frame image.
4. The method of claim 3, wherein the adjusting the display content of the background area according to the image characteristic information comprises:
if the available hardware computing resources of the playing device that plays the video are greater than or equal to a first resource threshold, adjusting the background area to display the background partial image in the image characteristic information; or
if the available hardware computing resources of the playing device that plays the video are less than the first resource threshold, adjusting the color value of the display content in the background area according to the color value information in the image characteristic information.
5. The method according to claim 3, wherein the color value information comprises at least one of rgb color value information, rgba color value information, hsl color value information, hex color value information, and argb color value information.
6. The method according to claim 1 or 2, wherein the acquiring image characteristic information of the video frame image of the video area comprises:
determining an acquisition strategy of the image characteristic information according to at least one of: available hardware computing resources of a playing device that plays the video, whether the video is live content, and network delay;
and acquiring the image characteristic information of the video frame image of the video area according to the determined acquisition strategy.
7. The method of claim 6, wherein the determining the acquisition strategy of the image characteristic information according to at least one of available hardware computing resources of a playing device that plays the video, whether the video is live content, and network delay comprises:
determining, according to the available hardware computing resources of the playing device that plays the video, a preset second resource threshold, and a preset third resource threshold, that the image characteristic information indicated to be acquired in the acquisition strategy is at least two of a background partial image, a foreground partial image, and color value information, or is the background partial image, or the foreground partial image, or the color value information.
8. The method of claim 6, wherein the determining the acquisition strategy of the image characteristic information according to at least one of available hardware computing resources of a playing device that plays the video, whether the video is live content, and network delay comprises:
determining, according to whether the video is live content, the network delay, and a preset delay threshold, whether to locally process the displayed video frame image in real time to acquire the image characteristic information or to acquire the image characteristic information from a server.
9. The method according to claim 1, wherein the acquiring image characteristic information of the video frame image of the video area comprises:
acquiring, according to an adjustment style option that has taken effect, image characteristic information corresponding to the adjustment style option from the video frame image of the video area.
10. An interface generation processing apparatus, comprising:
an acquisition module, configured to acquire image characteristic information of a video frame image of a video area;
an adjustment module, configured to adjust the display content of a background area according to the image characteristic information.
11. An electronic device, comprising:
a display screen, configured to display a video frame image and display content of a background area;
a processor, configured to acquire image characteristic information of a video frame image of a video area, and to adjust the display content of the background area according to the image characteristic information.
12. The electronic device of claim 11, further comprising a memory;
wherein the memory is configured to store at least the video and the acquired image characteristic information.
13. A computer storage medium on which a computer program is stored, which program, when executed by a processor, implements the interface generation processing method according to any one of claims 1 to 9.
CN202010648544.4A 2020-07-07 2020-07-07 Interface generation processing method and device, electronic equipment and computer storage medium Active CN113301414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010648544.4A CN113301414B (en) 2020-07-07 2020-07-07 Interface generation processing method and device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010648544.4A CN113301414B (en) 2020-07-07 2020-07-07 Interface generation processing method and device, electronic equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113301414A true CN113301414A (en) 2021-08-24
CN113301414B CN113301414B (en) 2023-06-02

Family

ID=77318339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010648544.4A Active CN113301414B (en) 2020-07-07 2020-07-07 Interface generation processing method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113301414B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113885830A (en) * 2021-10-25 2022-01-04 北京字跳网络技术有限公司 Sound effect display method and terminal equipment
CN115145442A (en) * 2022-06-07 2022-10-04 杭州海康汽车软件有限公司 Environment image display method and device, vehicle-mounted terminal and storage medium
CN115442657A (en) * 2021-10-15 2022-12-06 佛山欧神诺云商科技有限公司 Method, device, medium and product for dynamically adjusting image resolution

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080001971A1 (en) * 2006-06-29 2008-01-03 Scientific-Atlanta, Inc. Filling Blank Spaces of a Display Screen
CN109413352A (en) * 2018-11-08 2019-03-01 北京微播视界科技有限公司 Processing method, device, equipment and the storage medium of video data
US10257487B1 (en) * 2018-01-16 2019-04-09 Qualcomm Incorporated Power efficient video playback based on display hardware feedback
CN110852938A (en) * 2019-10-28 2020-02-28 腾讯科技(深圳)有限公司 Display picture generation method and device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080001971A1 (en) * 2006-06-29 2008-01-03 Scientific-Atlanta, Inc. Filling Blank Spaces of a Display Screen
US10257487B1 (en) * 2018-01-16 2019-04-09 Qualcomm Incorporated Power efficient video playback based on display hardware feedback
CN109413352A (en) * 2018-11-08 2019-03-01 北京微播视界科技有限公司 Processing method, device, equipment and the storage medium of video data
CN110852938A (en) * 2019-10-28 2020-02-28 腾讯科技(深圳)有限公司 Display picture generation method and device and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442657A (en) * 2021-10-15 2022-12-06 佛山欧神诺云商科技有限公司 Method, device, medium and product for dynamically adjusting image resolution
CN115442657B (en) * 2021-10-15 2023-12-26 佛山欧神诺云商科技有限公司 Method, equipment, medium and product for dynamically adjusting resolution of image picture
CN113885830A (en) * 2021-10-25 2022-01-04 北京字跳网络技术有限公司 Sound effect display method and terminal equipment
CN115145442A (en) * 2022-06-07 2022-10-04 杭州海康汽车软件有限公司 Environment image display method and device, vehicle-mounted terminal and storage medium
CN115145442B (en) * 2022-06-07 2024-06-11 杭州海康汽车软件有限公司 Method and device for displaying environment image, vehicle-mounted terminal and storage medium

Also Published As

Publication number Publication date
CN113301414B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN113301414B (en) Interface generation processing method and device, electronic equipment and computer storage medium
CN108600781B (en) Video cover generation method and server
CN110418149B (en) Video live broadcast method, device, equipment and storage medium
CN108347647B (en) Video picture displaying method, device, television set and storage medium
CN107179889B (en) Interface color adjusting method, webpage color adjusting method and webpage color adjusting device
US20130176486A1 (en) Pillarboxing Correction
WO2017016171A1 (en) Window display processing method, apparatus, device and storage medium for terminal device
US20240144976A1 (en) Video processing method, device, storage medium, and program product
US9934560B2 (en) User sliders for simplified adjustment of images
TW202011174A (en) Systems and methods for driving a display
KR20180000729A (en) Display device and control method therefor
CN112102422B (en) Image processing method and device
CN108989872B (en) Android television background fast switching method, framework, server and storage medium
CN113727166A (en) Advertisement display method and device
CN113597061A (en) Method, apparatus and computer readable storage medium for controlling a magic color light strip
CN114399437A (en) Image processing method and device, electronic equipment and storage medium
CN115641824A (en) Picture adjustment device, display device, and picture adjustment method
CN103713832A (en) Display processing method and electronic devices
CN110650352A (en) Video processing method of IPTV browser
CN104093069B (en) A kind of video broadcasting method and player device
CN113411553A (en) Image processing method, image processing device, electronic equipment and storage medium
CN106528161B (en) Terminal device, page display processing device and method
CN105120369A (en) Menu background color processing method and apparatus
CN112929682B (en) Method, device and system for transparently processing image background and electronic equipment
CN111414221B (en) Display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant