CN116996727A - Processing method, processing device, terminal equipment and medium - Google Patents

Processing method, processing device, terminal equipment and medium

Info

Publication number
CN116996727A
Authority
CN
China
Prior art keywords
surface layer
size
view
layer view
canvas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210444084.2A
Other languages
Chinese (zh)
Inventor
罗泽鑫
王辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210444084.2A
Publication of CN116996727A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a processing method, a processing apparatus, a terminal device and a medium. The method comprises the following steps: acquiring original size information of a surface layer view; adjusting the size of the surface layer view based on a screen size; and adjusting the canvas size of a target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen. According to the method, the problem of poor TextureView performance can be avoided by arranging a plurality of surface layer views for a preloading layout; the original size of each surface layer view is adjusted based on the screen size, so that no overlapping area exists among the surface layer views; and the canvas size of the target surface layer view is adjusted to adjust the display effect of the video corresponding to the target surface layer view, so that the deviation between the display effect and the original stretching effect of the target surface layer view is reduced, and the video playing effect is improved.

Description

Processing method, processing device, terminal equipment and medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a processing method, a processing device, terminal equipment and a medium.
Background
In video playback scenes (such as small-video or live playback scenes), a preloading function is generally required to keep switching between adjacent videos smooth, so when a texture view (i.e., TextureView) is used, multiple TextureViews are laid out at the same time.
However, because the performance of TextureView itself is poor, directly laying out multiple TextureViews at the same time affects the preloading effect.
Disclosure of Invention
The embodiment of the disclosure provides a processing method, a processing device, terminal equipment and a medium, so as to solve the problem of poor TextureView performance.
In a first aspect, an embodiment of the present disclosure provides a processing method, including:
acquiring original size information of a surface layer view;
adjusting the size of the surface layer view based on a screen size;
and adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen.
In a second aspect, embodiments of the present disclosure further provide a processing apparatus, including:
The acquisition module is used for acquiring the original size information of the surface layer view;
the first adjusting module is used for adjusting the size of the surface layer view based on the screen size;
the second adjusting module is used for adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen.
In a third aspect, an embodiment of the present disclosure further provides a terminal device, including:
one or more processors;
a storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the processing method provided by the embodiments of the present disclosure.
In a fourth aspect, the present disclosure also provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the processing method provided by the embodiments of the present disclosure.
The embodiments of the disclosure provide a processing method, a processing apparatus, a terminal device and a medium. The processing method first acquires original size information of a surface layer view; adjusts the size of the surface layer view based on the screen size; and adjusts the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in the screen. According to the method, the problem of poor TextureView performance can be avoided by arranging a plurality of surface layer views for a preloading layout; by adjusting the original size of the surface layer view (i.e., SurfaceView) to the screen size, it can be ensured that there is no overlapping area between the multiple surface layer views; and the canvas size of the target surface layer view is adjusted to adjust the display effect of the video corresponding to the target surface layer view, so that the deviation between the display effect and the original stretching effect of the target surface layer view is reduced, and the video playing effect is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an implementation of a SurfaceView layout provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a processing method according to a first embodiment of the disclosure;
fig. 3 is a schematic flow chart of a processing method according to a second embodiment of the disclosure;
fig. 4 is a schematic diagram of a screen coordinate system according to a second embodiment of the disclosure;
FIG. 5 is a schematic diagram illustrating an implementation of adjusting the SurfaceView width and height according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating an implementation of adjusting a SurfaceView canvas according to a second embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a processing device according to a third embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
In the following embodiments, optional features and examples are provided within each embodiment, and the features described in the embodiments may be combined to form multiple alternative solutions, so each numbered embodiment should not be regarded as only one technical solution. Furthermore, the embodiments of the present disclosure and the features in the embodiments may be combined with each other without conflict.
To achieve fluency when switching up and down between small videos or live streams, a preloading function is typically required, so 3 TextureViews are laid out at the same time when TextureView is used. However, because of the poor performance of TextureView itself, the present disclosure may consider switching the TextureView to a SurfaceView (i.e., surface layer view) control, which likewise requires 3 SurfaceViews to be laid out simultaneously.
But directly laying out 3 SurfaceViews at the same time also presents the following problems. On the one hand, some Android devices do not support two SurfaceViews participating in composition at the same time, which causes the video layer to be composed by the hardware composer (Hardware Composer, HWC) and the animation layer to be composed by the graphics processor (Graphics Processing Unit, GPU); the resulting interleaving of GPU composition and HWC composition makes video pictures that contain text jitter, and the GPU composition also causes extra power consumption. On the other hand, due to service requirements, the size of some SurfaceViews may exceed the parent control area, so that only part of the video content is displayed on the screen; while sliding the video up and down and during still playback, two adjacent SurfaceViews may overlap, for example the SurfaceView that is not currently playing may cover the SurfaceView that is currently playing, which affects the video playing effect.
Fig. 1 is a schematic implementation diagram of a SurfaceView layout provided in an embodiment of the present disclosure. As shown in Fig. 1, video A corresponds to SurfaceView1, video B corresponds to SurfaceView2, and video C corresponds to SurfaceView3. While the videos slide up and down and during still playback, the content of SurfaceView1 may cover SurfaceView2 (i.e., an overlapping area is generated); because SurfaceView2 is playing video, this affects the video playing effect.
The embodiments of the present disclosure provide a processing method that allows video to play normally, without the overlapping-display problem and without GPU composition, when 3 SurfaceViews are laid out simultaneously in a small video/live scene.
Example 1
Fig. 2 is a flow chart of a processing method provided in an embodiment of the present disclosure, where the method may be applicable to a case of processing a surface layer view in a playing scene such as a small video or live broadcast, and the method may be performed by a processing apparatus, where the apparatus may be implemented by software and/or hardware and is generally integrated on a terminal device, and in this embodiment, the terminal device includes but is not limited to: a computer, a notebook computer, a smart phone, a tablet computer, or the like.
As shown in fig. 2, a processing method provided in a first embodiment of the disclosure includes the following steps:
s110, acquiring original size information of the surface layer view.
In this embodiment, the surface layer view may be represented as a SurfaceView. SurfaceView is a subclass of View (i.e., the view class), which is a class capable of presenting an image, that is, of drawing and displaying the image on the screen of a terminal device. A surface layer (i.e., Surface) dedicated to rendering is embedded in the SurfaceView, and the format and size of this Surface can be controlled and adjusted. The original size information may be understood as the size information set when the corresponding SurfaceView is created based on the original business logic.
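As a brief illustration of the above description, the following Kotlin sketch (not part of the original disclosure; the RGBA_8888 format and the 720x1280 fixed size are arbitrary example values) shows that the Surface embedded in a SurfaceView exposes its format and size for control through the standard SurfaceHolder interface:

import android.graphics.PixelFormat
import android.view.SurfaceHolder
import android.view.SurfaceView

fun configureSurface(surfaceView: SurfaceView) {
    val holder: SurfaceHolder = surfaceView.holder
    // the pixel format of the embedded Surface can be controlled
    holder.setFormat(PixelFormat.RGBA_8888)
    holder.addCallback(object : SurfaceHolder.Callback {
        override fun surfaceCreated(h: SurfaceHolder) {
            // the buffer size of the embedded Surface can be adjusted
            // independently of the View's on-screen size
            h.setFixedSize(720, 1280)
        }
        override fun surfaceChanged(h: SurfaceHolder, format: Int, width: Int, height: Int) = Unit
        override fun surfaceDestroyed(h: SurfaceHolder) = Unit
    })
}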
In a small video or live video playing scene, a preloading function is generally adopted to keep switching up and down between videos smooth. The preloading function may be understood as preloading, in the screen, the SurfaceView of the previous video or videos and the SurfaceView of the next video or videos of the currently playing video, so that they are ready for use.
The surface layer view whose original size information is acquired may be a preloaded surface layer view. How the original size information is obtained is not limited here; for example, the original size information of the SurfaceView may be obtained by intercepting the size-setting call.
Optionally, the method is applied to a video playing scene, and the number of the surface layer views is three.
The processing method provided by the embodiment can be applied to video playing scenes. The video playing scene can be a scene of a small video or a live video of a certain application program, which is played in a sliding manner up and down.
The number of surface layer views may be three, that is, a scheme of laying out 3 SurfaceViews at the same time may be adopted: the SurfaceView containing the currently playing video (which can be understood as the SurfaceView corresponding to the video currently playing in the screen, or as the currently selected SurfaceView), the SurfaceView of the previous video of the currently playing video, and the SurfaceView of the next video of the currently playing video (that is, the SurfaceViews of currently unselected videos that are arranged adjacent to the SurfaceView of the currently playing video).
In this embodiment, the original size information of the SurfaceView of the currently playing video may be obtained, and the original size information of all the SurfaceViews of the current layout may also be obtained. It can be understood that the original size information of each SurfaceView is set according to the service logic corresponding to that SurfaceView, and may be the same or different. How the original size information of the SurfaceView is obtained is not limited here.
Optionally, obtaining the original size information of the surface layer view includes: when the size of the surface layer view is set, the original size information of the surface layer view is acquired, wherein the original size information comprises the width information of the corresponding surface layer view and the height information of the corresponding surface layer view.
The process of obtaining the original size information of the SurfaceView may be: in the process of playing small videos or live videos, when the size of the surface view is set, the original size information of the surface view can be obtained. Wherein the original size information may include width information of the corresponding surface view and height information of the corresponding surface view. The width information may be understood as the width size of the surface view and the height information may be understood as the height size of the surface view.
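A minimal Kotlin sketch of this acquisition step is given below; it shows only one possible interception approach and is not taken from the original disclosure. The class name InterceptingSurfaceView and the fields originalWidth/originalHeight are illustrative assumptions.

import android.content.Context
import android.view.SurfaceView
import android.view.ViewGroup

class InterceptingSurfaceView(context: Context) : SurfaceView(context) {
    // width/height requested by the original business logic (the "original size information")
    var originalWidth = 0
        private set
    var originalHeight = 0
        private set

    override fun setLayoutParams(params: ViewGroup.LayoutParams) {
        // record the original size information at the moment the size is set
        originalWidth = params.width
        originalHeight = params.height
        super.setLayoutParams(params)
    }
}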
S120, adjusting the size of the surface layer view based on the screen size.
In the present embodiment, the screen may be understood as the display screen of the terminal device. The screen size may be understood as the height and width of the display screen. After the original size information of the surface layer views is obtained, the size corresponding to the original size information of some surface layer views may be larger than the screen size (for example, both the height and the width of a surface layer view are larger than those of the screen, or only the height of a surface layer view is larger than that of the screen, etc.); therefore, in order to avoid overlapping areas between multiple surface layer views when the video is slid up and down, the size of the surface layer views may be adjusted based on the screen size.
How the size of the surface layer view is adjusted is not limited here, as long as it can be ensured that there is no overlapping area between the multiple surface layer views, or that the overlapping area is smaller than when the size is not adjusted. For example, the sizes of all the acquired SurfaceViews may be adjusted to the screen size; or only the size of the SurfaceView of the currently played video may be adjusted to the screen size; or the sizes of some of the SurfaceViews may be adjusted to the screen size, where these SurfaceViews include the SurfaceView of the currently played video. For example, when the size of a surface layer view is adjusted based on the screen size, the adjusted size may be smaller than the size corresponding to the original size information and greater than or equal to the screen size.
Optionally, adjusting the size of the surface layer view based on the screen size includes: adjusting the size of the surface layer view to the screen size.
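A hedged Kotlin sketch of this adjustment follows, assuming DisplayMetrics is an acceptable source for the screen size; the helper name resizeToScreen is illustrative.

import android.view.SurfaceView

fun resizeToScreen(surfaceView: SurfaceView) {
    val metrics = surfaceView.resources.displayMetrics
    val params = surfaceView.layoutParams ?: return  // the view must already be attached to a parent
    // force the SurfaceView to the screen width and height so that
    // preloaded SurfaceViews cannot overlap each other
    params.width = metrics.widthPixels
    params.height = metrics.heightPixels
    surfaceView.layoutParams = params
}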
S130, adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation of the adjusted display effect and the original stretching effect of the target surface layer view is within a set range.
In this embodiment, the target surface layer view may be understood as a surface view in the screen, that is, may be understood as a surface view corresponding to a video currently being played in the screen. The Canvas may be represented as Canvas. The canvas provides a drawing method, and can draw basic graphics to the bitmap of the bottom layer; the bitmap may serve as a surface upon which the canvas is placed. The canvas size is adjustable and is understood to be the height and width of the canvas.
The adjusted display effect can be understood as the video scale stretching effect displayed by the target SurfaceView after the canvas size of the target SurfaceView is adjusted. The original stretching effect can be understood as a video proportional stretching effect displayed by the target SurfaceView under the original business logic, and can also be understood as the original size information and the video proportional stretching effect displayed by the target SurfaceView under the original canvas size.
The set range may be a preset threshold range and may be flexibly set according to actual requirements. Based on the original size information of the target surface layer view, the canvas size of the target surface layer view can be adjusted so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within the set range. How the canvas size of the target SurfaceView is adjusted is not limited here. Illustratively, the canvas size of the target surface layer view may be adjusted to the size corresponding to the original size information of the target surface layer view; on this basis, the canvas may be shifted in the up, down, left and/or right directions so that the target surface layer view is located in the middle of the canvas or in another area of the canvas, which ensures that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within the set range.
The first embodiment of the disclosure provides a processing method, which first acquires original size information of a surface layer view; adjusts the size of the surface layer view based on the screen size; and adjusts the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in the screen. According to the method, the problem of poor TextureView performance can be avoided by arranging a plurality of surface layer views for a preloading layout; by adjusting the original size of the surface layer views to the screen size, it can be ensured that no overlapping area exists between the plurality of surface layer views; and the canvas size of the target surface layer view is adjusted to adjust the display effect of the video corresponding to the target surface layer view, so that the deviation between the display effect and the original stretching effect of the target surface layer view is reduced, and the video playing effect is improved.
Example two
Fig. 3 is a flow chart of a processing method according to a second embodiment of the disclosure. The second embodiment is a refinement based on the above embodiments. In this embodiment, a process of adjusting the canvas size of the target skin view based on the original size information of the target skin view is specifically described. For details not yet described in detail in this embodiment, refer to embodiment one.
As shown in fig. 3, a processing method provided in a second embodiment of the present disclosure may be applied to a video playing scene, where the number of surface views may be three, and the method includes the following steps:
s210, when the size of the surface layer view is set, acquiring original size information of the surface layer view.
In this embodiment, the original size information may include width information of the corresponding skin view and height information of the corresponding skin view. Raw size information for one or more skin views may be obtained.
S220, adjusting the size of the surface layer view to be the screen size.
In this embodiment, the width of the surface layer view is adjusted to the width of the screen, and the height of the surface layer view is adjusted to the height of the screen.
S230, adjusting the canvas size of the target surface layer view to the size corresponding to the original size information of the target surface layer view through a size setting interface.
In this embodiment, the size setting interface may be understood as an interface for setting the canvas size; for example, the size setting interface may be a Transaction interface. Specifically, through the Transaction.setBufferSize interface, the canvas size of the target surface layer view can be adjusted to the size corresponding to the original size information of the target surface layer view, i.e., the width and the height of the canvas of the target surface layer view are adjusted to be consistent with the width and the height corresponding to the original size information of the target surface layer view.
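The following Kotlin sketch illustrates this step under the assumption that the size setting interface is android.view.SurfaceControl.Transaction (available from API level 29); the helper name adjustCanvasSize is not from the original disclosure.

import android.view.SurfaceControl
import android.view.SurfaceView

fun adjustCanvasSize(surfaceView: SurfaceView, originalWidth: Int, originalHeight: Int) {
    // the SurfaceControl behind this SurfaceView (available since API 29)
    val control: SurfaceControl = surfaceView.surfaceControl
    SurfaceControl.Transaction()
        .setBufferSize(control, originalWidth, originalHeight) // canvas = original business size
        .apply()
}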
Fig. 4 is a schematic diagram of a screen coordinate system according to the second embodiment of the disclosure. As shown in Fig. 4, the screen may correspond to a screen coordinate system, and the surface layer view and its canvas can be located by coordinate points in this screen coordinate system. The screen can be understood as a rectangular frame, where the upper left corner of the rectangular frame can be regarded as the origin (i.e., the (0, 0) point) of the screen coordinate system; the straight line on which the upper boundary of the rectangular frame lies can be regarded as the abscissa axis (i.e., x-axis) of the screen coordinate system, with the rightward direction along the upper boundary as the positive x-axis direction; and the straight line on which the left boundary of the rectangular frame lies can be regarded as the ordinate axis (i.e., y-axis) of the screen coordinate system, with the vertically downward direction along the left boundary as the positive y-axis direction.
It should be noted that the target surface view and its canvas are coincident before adjustment, i.e., the dimensions are uniform. When the canvas size of the target surface layer view is adjusted to the size corresponding to the original size information of the target surface layer view, scaling adjustment can be performed by taking the upper left corner coordinate of the target surface layer view as a fixed datum point.
S240, offsetting the canvas through the offset setting interface so that the display effect after the offset is consistent with the original offset effect.
In this embodiment, the original offset effect may be understood as the offset effect corresponding to the original size information of the target surface layer view. The offset setting interface may be understood as an interface for setting the canvas offset; for example, the offset setting interface may be a Transaction interface. Specifically, the canvas can be shifted by a certain distance in the up, down, left and/or right directions through the Transaction interface.
Optionally, offsetting the canvas through the offset setting interface includes: determining an offset based on the size of the target surface layer view and the canvas size; and offsetting the canvas based on the offset.
Based on the screen coordinate system, the coordinates of the upper right corner and the lower left corner of the target surface layer view, and the coordinates of the upper right corner and the lower left corner of the canvas, can be determined according to the size of the target surface layer view and the canvas size. It can be understood that the upper right corner and lower left corner coordinates of the target surface layer view correspond to the maximum abscissa and the maximum ordinate of the target surface layer view, respectively; the upper right corner and lower left corner coordinates of the canvas correspond to the maximum abscissa and the maximum ordinate of the canvas, respectively.
For example, for the canvas offset in the lateral direction, the absolute value of the difference between the upper right corner coordinate of the target surface layer view and the upper right corner coordinate of the canvas may be determined first, then an offset whose magnitude is smaller than that absolute value is determined, and finally the canvas is offset to the left in the lateral direction based on the offset. Accordingly, for the canvas offset in the longitudinal direction, the absolute value of the difference between the lower left corner coordinate of the target surface layer view and the lower left corner coordinate of the canvas may be determined first, then an offset whose magnitude is smaller than that absolute value is determined, and finally the canvas is offset upward in the longitudinal direction based on the offset.
Optionally, determining the offset based on the size of the target surface layer view and the canvas size includes: determining the average of the difference obtained by subtracting the maximum abscissa of the canvas from the maximum abscissa of the target surface layer view as the offset of the abscissa; and determining the average of the difference obtained by subtracting the maximum ordinate of the canvas from the maximum ordinate of the target surface layer view as the offset of the ordinate.
For the canvas offset in the lateral direction, the average of the difference obtained by subtracting the maximum abscissa of the canvas from the maximum abscissa of the target surface layer view can be determined as the offset of the abscissa. Correspondingly, for the canvas offset in the longitudinal direction, the average of the difference obtained by subtracting the maximum ordinate of the canvas from the maximum ordinate of the target surface layer view can be determined as the offset of the ordinate. On this basis, the canvas can be offset based on the offset of the abscissa and the offset of the ordinate.
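A hedged Kotlin sketch of the offset calculation and application follows. It interprets the "average of the difference" as half of the difference (which centres the canvas under the view), and it uses SurfaceControl.Transaction.setPosition only as an illustrative offset-setting call; the embodiment itself only refers to a Transaction-based offset setting interface.

import android.view.SurfaceControl

fun offsetCanvas(
    control: SurfaceControl,
    viewMaxX: Float, viewMaxY: Float,      // maximum abscissa/ordinate of the target surface layer view
    canvasMaxX: Float, canvasMaxY: Float   // maximum abscissa/ordinate of its canvas
) {
    // half of (view maximum minus canvas maximum): negative when the canvas is larger,
    // so the canvas shifts left/up and ends up centred under the view
    val offsetX = (viewMaxX - canvasMaxX) / 2f
    val offsetY = (viewMaxY - canvasMaxY) / 2f
    SurfaceControl.Transaction()
        .setPosition(control, offsetX, offsetY)
        .apply()
}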
The second embodiment provides a processing method, which embodies a process of adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view. According to the method, the original size of the surface layer views is adjusted to be the screen size, so that no overlapping area exists among the surface layer views; on the basis, the display effect of the target surface layer view can be adjusted to be consistent with the original stretching effect by adjusting the size of the canvas of the target surface layer view and shifting the canvas, so that the video playing effect is improved.
The present disclosure is exemplarily described below:
the embodiment of the disclosure provides a processing method, which comprises the following specific implementation processes:
1. When the SurfaceView width and height are set in a small video or live broadcast scene, the setting logic is intercepted, the width and height to be set are recorded, and the width and height are forcibly set to the screen width and height.
Fig. 5 is a schematic diagram of an implementation of adjusting the SurfaceView width and height according to the second embodiment of the disclosure. As shown in Fig. 5, since the size of each SurfaceView is set to be consistent with the screen of the terminal device (such as a smart phone), there is no overlapping area between the multiple SurfaceViews; at this time, only the SurfaceView of video B is located in the screen and plays video, and only the SurfaceView of video B participates in composition. In this case there is neither a coverage problem nor a GPU composition problem.
2. Because the width and height of the SurfaceView have been modified, the video playing proportion is inconsistent with what the service expects; therefore, the drawing of the SurfaceView needs to be scaled and offset using the width and height recorded from the service setting, so that the display effect is consistent with the display effect of the SurfaceView stretching scheme originally adopted by the service.
Fig. 6 is a schematic diagram of an implementation of adjusting the SurfaceView canvas according to the second embodiment of the present disclosure. As shown in Fig. 6, the canvas of the SurfaceView is first scaled through Transaction.setBufferSize to keep it consistent with the size originally set by the service; on this basis, the canvas of the SurfaceView is offset through the Transaction interface. The dashed-line area is the canvas area of the SurfaceView and is not an area that actually participates in composition; only the content inside the solid line participates in composition, so the GPU composition problem does not exist.
According to the embodiment of the present disclosure, the original width and height setting of the SurfaceView is intercepted, and the width and height of the SurfaceView are forcibly set to be consistent with the screen, which ensures that only one SurfaceView is on the screen at any time and avoids the possible GPU composition and coverage problems. On this basis, scaling and offsetting the canvas of the SurfaceView achieves an effect consistent with directly scaling the SurfaceView.
Example III
Fig. 7 is a schematic structural diagram of a processing apparatus according to a third embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware and is generally integrated on a terminal device.
As shown in fig. 7, the apparatus includes: an acquisition module 310, a first adjustment module 320, and a second adjustment module 330;
the acquiring module 310 is configured to acquire original size information of the surface layer view;
a first adjustment module 320, configured to adjust the size of the surface layer view based on the screen size;
the second adjusting module 330 is configured to adjust a canvas size of the target surface layer view based on original size information of the target surface layer view, so that a deviation between the adjusted display effect and an original stretching effect of the target surface layer view is within a set range, where the target surface layer view is a surface layer view in a screen.
In this embodiment, the device first obtains, through the obtaining module 310, original size information of the surface layer view; then, the size of the surface view is adjusted based on the screen size through the first adjustment module 320; finally, the canvas size of the target surface layer view is adjusted by the second adjusting module 330 based on the original size information of the target surface layer view, so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in the screen. The device can ensure that no overlapping area exists among a plurality of surface layer views by adjusting the original size of the surface layer views to be the screen size; the canvas size of the target surface layer view is adjusted to adjust the display effect of the video corresponding to the target surface layer view, so that the deviation between the display effect and the original stretching effect of the target surface layer view is reduced, and the video playing effect is improved.
Optionally, the obtaining module 310 is specifically configured to:
when the size of the surface layer view is set, the original size information of the surface layer view is acquired, wherein the original size information comprises the width information of the corresponding surface layer view and the height information of the corresponding surface layer view.
Optionally, the first adjusting module 320 is specifically configured to:
and adjusting the size of the surface layer view to be the screen size.
Optionally, the second adjusting module 330 specifically includes:
the first adjusting unit is used for adjusting the canvas size of the target surface layer view to the size corresponding to the original size information of the target surface layer view through a size setting interface;
and the second adjusting unit is used for offsetting the canvas through the offset setting interface so as to enable the display effect after offset to be consistent with the original offset effect, wherein the original offset effect is the offset effect corresponding to the original size information of the target surface layer view.
Optionally, the second adjusting unit specifically includes:
a determining subunit, configured to determine an offset based on a size of the target surface layer view and a canvas size;
and the offset subunit is used for carrying out canvas offset based on the offset.
Optionally, the determining subunit is specifically configured to:
determining the average of the difference obtained by subtracting the maximum abscissa of the canvas from the maximum abscissa of the target surface layer view as the offset of the abscissa;
and determining the average of the difference obtained by subtracting the maximum ordinate of the canvas from the maximum ordinate of the target surface layer view as the offset of the ordinate.
Optionally, the method is applied to a video playing scene, and the number of the surface layer views is three.
The processing device can execute the processing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 8 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present disclosure. Fig. 8 shows a schematic structural diagram of a terminal device 400 suitable for use in implementing embodiments of the present disclosure. The terminal device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The terminal device 400 shown in fig. 8 is only one example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the terminal device 400 may include one or more processors (e.g., central processing units, graphics processors, etc.) 401, which may perform various appropriate actions and processes according to programs stored in a read-only memory (Read-Only Memory, ROM) 402 or programs loaded from a storage device 408 into a random access memory (Random Access Memory, RAM) 403. The one or more processors 401 implement the methods provided by the present disclosure. The RAM 403 also stores various programs and data necessary for the operation of the terminal device 400. The processor 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (Input/Output, I/O) interface 405 is also connected to the bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc., storage 408 being for storing one or more programs; and a communication device 409. The communication means 409 may allow the terminal device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 shows a terminal device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processor 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (EPROM or flash Memory), an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as the hypertext transfer protocol (Hyper Text Transfer Protocol, HTTP), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the terminal device 400; or may exist alone without being assembled into the terminal device 400.
The above computer-readable medium stores one or more computer programs which, when executed by a processor, implement the method described above. The computer-readable medium carries one or more programs that, when executed by the terminal device 400, cause the terminal device 400 to: acquire original size information of a surface layer view; adjust the size of the surface layer view based on the screen size; and adjust the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in the screen. Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation of the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a field programmable gate array (Field Programmable Gate Array, FPGA), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), an application specific standard product (Application Specific Standard Product, ASSP), a system on chip (System On Chip, SOC), a complex programmable logic device (Complex Programmable Logic Device, CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example 1 provides a processing method, comprising:
acquiring original size information of a surface layer view;
adjusting the size of the surface layer view based on a screen size;
and adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen.
In accordance with one or more embodiments of the present disclosure, example 2 is in accordance with the method of example 1,
the obtaining the original size information of the surface layer view includes:
when the size of the surface layer view is set, the original size information of the surface layer view is acquired, wherein the original size information comprises the width information of the corresponding surface layer view and the height information of the corresponding surface layer view.
In accordance with one or more embodiments of the present disclosure, example 3 is in accordance with the method of example 1,
the adjusting the size of the surface layer view based on the screen size includes:
and adjusting the size of the surface layer view to be the screen size.
In accordance with one or more embodiments of the present disclosure, example 4 is in accordance with the method of example 1,
The adjusting canvas size of the target surface layer view based on the original size information of the target surface layer view comprises the following steps:
the canvas size of the target surface layer view is adjusted to be the size corresponding to the original size information of the target surface layer view through a size setting interface;
and shifting the canvas through a shifting setting interface so as to enable the shifted display effect to be consistent with the original shifting effect, wherein the original shifting effect is the shifting effect corresponding to the original size information of the target surface layer view.
In accordance with one or more embodiments of the present disclosure, example 5 is a method according to example 4,
the offsetting the canvas through the offset setting interface includes:
determining an offset based on the size of the target surface layer view and the canvas size;
canvas offset is performed based on the offset.
In accordance with one or more embodiments of the present disclosure, example 6 is in accordance with the method of example 5,
the determining an offset based on the size of the target surface layer view and the canvas size includes:
determining the average of the difference obtained by subtracting the maximum abscissa of the canvas from the maximum abscissa of the target surface layer view as the offset of the abscissa;
and determining the average of the difference obtained by subtracting the maximum ordinate of the canvas from the maximum ordinate of the target surface layer view as the offset of the ordinate.
In accordance with one or more embodiments of the present disclosure, example 7 is in accordance with the methods of examples 1-6,
the method is applied to video playing scenes, and the number of the surface layer views is three.
According to one or more embodiments of the present disclosure, example 8 provides a processing apparatus comprising:
the acquisition module is used for acquiring the original size information of the surface layer view;
the first adjusting module is used for adjusting the size of the surface layer view based on the screen size;
the second adjusting module is used for adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen.
According to one or more embodiments of the present disclosure, example 9 provides a terminal device, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods of any of examples 1-7.
In accordance with one or more embodiments of the present disclosure, example 10 provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the method of any of examples 1-7.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in this disclosure is not limited to the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A processing method, comprising:
acquiring original size information of a surface layer view;
adjusting the size of the surface layer view based on a screen size;
and adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view, so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in a screen.
2. The method according to claim 1, wherein the acquiring the original size information of the surface layer view comprises:
acquiring the original size information of the surface layer view when the size of the surface layer view is set, wherein the original size information comprises width information and height information of the corresponding surface layer view.
3. The method according to claim 1, wherein the adjusting the size of the surface layer view based on the screen size comprises:
adjusting the size of the surface layer view to the screen size.
4. The method according to claim 1, wherein the adjusting the canvas size of the target surface layer view based on the original size information of the target surface layer view comprises:
adjusting, through a size setting interface, the canvas size of the target surface layer view to the size corresponding to the original size information of the target surface layer view;
and offsetting the canvas through an offset setting interface so that the offset display effect is consistent with the original offset effect, wherein the original offset effect is the offset effect corresponding to the original size information of the target surface layer view.
5. The method of claim 4, wherein the offsetting the canvas through the offset setting interface comprises:
determining an offset based on the size of the target surface layer view and the canvas size;
and offsetting the canvas based on the offset.
6. The method according to claim 5, wherein the determining an offset based on the size of the target surface layer view and the canvas size comprises:
subtracting the maximum abscissa of the canvas from the maximum abscissa of the target surface layer view, and determining half of the difference (the average) as the abscissa offset;
and subtracting the maximum ordinate of the canvas from the maximum ordinate of the target surface layer view, and determining half of the difference (the average) as the ordinate offset.
7. The method according to any of claims 1-6, wherein the method is applied to a video playing scene, and the number of surface layer views is three.
8. A processing apparatus, comprising:
an acquisition module, configured to acquire the original size information of the surface layer view;
a first adjusting module, configured to adjust the size of the surface layer view based on the screen size;
and a second adjusting module, configured to adjust the canvas size of the target surface layer view based on the original size information of the target surface layer view, so that the deviation between the adjusted display effect and the original stretching effect of the target surface layer view is within a set range, wherein the target surface layer view is a surface layer view in the screen.
9. A terminal device, comprising:
one or more processors;
a storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any of claims 1-7.
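For orientation only, the sketch below chains the claimed steps end to end for one surface layer view; the Android APIs shown (layoutParams, SurfaceHolder.setFixedSize) are assumed stand-ins for the unnamed interfaces in the claims, and the function name is illustrative.

import android.view.SurfaceView

fun processSurfaceLayer(view: SurfaceView, screenWidth: Int, screenHeight: Int): Pair<Float, Float> {
    // Claim 2: acquire the original size information when the view size is set.
    val originalWidth = view.width
    val originalHeight = view.height

    // Claim 3: adjust the surface layer view to the screen size.
    view.layoutParams = view.layoutParams.apply {
        width = screenWidth
        height = screenHeight
    }

    // Claims 4-6: keep the canvas at the original size and compute the centering
    // offset (half the difference of the maxima), to be applied via Canvas.translate()
    // before drawing.
    view.holder.setFixedSize(originalWidth, originalHeight)
    val offsetX = (screenWidth - originalWidth) / 2f
    val offsetY = (screenHeight - originalHeight) / 2f
    return offsetX to offsetY
}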
CN202210444084.2A 2022-04-25 2022-04-25 Processing method, processing device, terminal equipment and medium Pending CN116996727A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210444084.2A CN116996727A (en) 2022-04-25 2022-04-25 Processing method, processing device, terminal equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210444084.2A CN116996727A (en) 2022-04-25 2022-04-25 Processing method, processing device, terminal equipment and medium

Publications (1)

Publication Number Publication Date
CN116996727A true CN116996727A (en) 2023-11-03

Family

ID=88520094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210444084.2A Pending CN116996727A (en) 2022-04-25 2022-04-25 Processing method, processing device, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN116996727A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination