CN114845162A - Video playing method and device, electronic equipment and storage medium - Google Patents

Video playing method and device, electronic equipment and storage medium

Info

Publication number
CN114845162A
Authority
CN
China
Prior art keywords
video
texture data
frame
data
frame texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110137036.4A
Other languages
Chinese (zh)
Other versions
CN114845162B (en)
Inventor
曹俊跃
周峰
初楷博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202110137036.4A
Publication of CN114845162A
Application granted
Publication of CN114845162B
Active legal status: Current
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218 Processing of video elementary streams involving reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

An embodiment of the present disclosure provides a video playing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring image frame data of a video and sending a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser. When a video is played on the web side, the embodiment calls the GPU to hard-decode the image frame data, which reduces the occupancy of the central processing unit (CPU) of the terminal device, improves decoding efficiency, and reduces video stuttering.

Description

Video playing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video playing method and apparatus, an electronic device, and a storage medium.
Background
When videos are played on the web (browser) side, a soft-decoding scheme is usually adopted, that is, a software controller on the web side decodes and plays each video.
However, in this scheme the controller occupies the central processing unit (CPU), and an excessively high CPU load causes the video to stutter and drop frames.
Disclosure of Invention
The embodiments of the disclosure provide a video playing method and apparatus, an electronic device, and a storage medium, to overcome the problem in the prior art that web-side video playing is prone to stuttering.
In a first aspect, an embodiment of the present disclosure provides a video playing method, including: acquiring image frame data of a video, and sending a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser.
In a second aspect, an embodiment of the present disclosure provides a multi-track video playing apparatus, including: a first obtaining module, configured to acquire image frame data of a video and send a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; a second obtaining module, configured to acquire the frame texture data; and a rendering processing module, configured to render the frame texture data and display the rendered frame texture data through a browser.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video playing method according to the first aspect and its various possible designs.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the video playing method according to the first aspect and its various possible designs.
The video playing method and apparatus, the electronic device, and the storage medium provided by this embodiment include: acquiring image frame data of a video, and sending a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser. When a video is played on the web side, the embodiment calls the GPU to hard-decode the image frame data, which reduces the occupancy of the central processing unit (CPU) of the terminal device, improves decoding efficiency, and reduces video stuttering.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present disclosure, and that those skilled in the art can obtain other drawings from them without inventive effort.
fig. 1 is a schematic diagram of web-side video playing in the prior art;
fig. 2 is a schematic diagram of a multi-track video provided by an embodiment of the present disclosure;
fig. 3 is a first schematic flowchart of a video playing method according to an embodiment of the present disclosure;
fig. 4 is a second schematic flowchart of a video playing method according to an embodiment of the present disclosure;
fig. 5 is a schematic view illustrating a processing flow of multi-track video playing provided by an embodiment of the present disclosure;
fig. 6 is a block diagram of a video playing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings in the embodiments of the present disclosure. It is obvious that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
The terms involved in the present disclosure are explained first:
Graphics processing unit (GPU): also called a display core, visual processor, or display chip, a microprocessor dedicated to image- and graphics-related operations on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
When videos are played on the web side, a soft-decoding scheme is usually adopted, that is, a software controller on the web side decodes and plays each video.
Fig. 1 is a schematic diagram of web-side video playing in the prior art. As shown in fig. 1, when the web side plays a video, decoding and rendering of video frames are both performed in CPU sub-threads, and finally an off-screen OffscreenCanvas draws the picture into a canvas element in the Document Object Model (DOM). That is, in the existing scheme, decoding and rendering of video frames all occupy the CPU, and an excessively high CPU load causes the video to stutter.
This is especially problematic when the web side needs to play a multi-track video. Fig. 2 is a schematic diagram of a multi-track video provided by an embodiment of the present disclosure. As shown in fig. 2, videos such as file1, file2, and file3 are each loaded into a track of the web-side video processor, and in the multi-track playback effect the videos overlap, with the top-layer video displayed on top.
In view of the above technical problem, the technical idea of the present disclosure is that the web-side controller is only responsible for scheduling the playing and pausing of each video and no longer performs decoding; instead, decoding is handed to the graphics processing unit (GPU) for hard decoding, and the controller directly reads the texture data of the current frame to play the video.
Referring to fig. 3, fig. 3 is a first schematic flowchart of a video playing method according to an embodiment of the present disclosure. The video playing method comprises the following steps:
s101, acquiring image frame data of a video, and sending a decoding instruction to a Graphics Processing Unit (GPU).
The decoding instruction is used for indicating that frame texture data is obtained after the image frame data is decoded. And the video is played based on the multimedia video tag of the browser.
Specifically, the execution subject of this embodiment is a controller on a terminal device, where the controller has a function of editing video on a web side, and is usually a Central Processing Unit (CPU); at the same time, a graphics processor GPU is also configured on the terminal device.
As an alternative embodiment, the video is a multi-track video, and acquiring the image frame data of the video in step S101 includes: acquiring a plurality of pieces of image frame data corresponding to the multi-track video.
In this embodiment, when the multi-track video is played on the web side, the controller acquires the plurality of pieces of image frame data corresponding to the multi-track video and sends a decoding instruction to the GPU, so that the GPU decodes each piece of image frame data into frame texture data and returns the frame texture data to the controller, as illustrated in the sketch below.
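The patent does not prescribe an implementation; as a hedged sketch, the plurality of image frame data for a multi-track video can be backed by one browser video element per track, so that frame decoding is handled by the browser's (typically hardware-accelerated) media pipeline rather than by the controller itself. The file names and the trackVideos variable below are assumptions made for illustration.

```typescript
// Hypothetical sketch: one <video> element per track. The browser's media
// pipeline decodes each stream, so the web-side controller only schedules
// playback and later reads the decoded frames as textures.
const trackSources = ['file1.mp4', 'file2.mp4', 'file3.mp4']; // illustrative names

const trackVideos: HTMLVideoElement[] = trackSources.map((src) => {
  const video = document.createElement('video');
  video.src = src;
  video.muted = true;        // most browsers require muted media for scripted playback
  video.playsInline = true;  // keep playback inline on mobile browsers
  return video;
});

// Start all tracks together; each play() call hands decoding to the media pipeline.
async function playAllTracks(): Promise<void> {
  await Promise.all(trackVideos.map((v) => v.play()));
}
```

Each track's current frame can then be uploaded as a separate GPU texture and composited with the topmost track drawn last, matching the overlay effect of fig. 2.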
S102, acquiring the frame texture data.
Specifically, after decoding, the GPU returns the frame texture data to the controller; that is, the controller acquires the frame texture data.
S103, rendering the frame texture data, and displaying the rendered frame texture data through a browser.
Specifically, after the controller acquires the decoded frame texture data, it renders the frame texture data according to the rendering parameters and displays the result through the browser.
Steps S101 to S103 are repeated to play the video on the web side, as in the sketch below.
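A minimal sketch of steps S101 to S103, under the assumption that the "decoding instruction" corresponds to uploading the video element's current hardware-decoded frame into a WebGL texture and that rendering is a single textured full-screen quad; the element ids, shader sources, and render loop are illustrative, not the patent's reference implementation.

```typescript
// Minimal sketch of S101–S103 (assumed element ids; playback is assumed to have
// been started elsewhere, e.g. by the play instruction described below).
const video = document.querySelector<HTMLVideoElement>('#source-video')!;
const canvas = document.querySelector<HTMLCanvasElement>('#player-canvas')!;
const gl = canvas.getContext('webgl')!;

// Trivial pass-through shaders: draw one textured full-screen quad.
const VERT = `
attribute vec2 aPos;
varying vec2 vUv;
void main() {
  vUv = aPos * 0.5 + 0.5;   // clip space [-1,1] -> texture space [0,1]
  vUv.y = 1.0 - vUv.y;      // video rows start at the top, so flip vertically
  gl_Position = vec4(aPos, 0.0, 1.0);
}`;
const FRAG = `
precision mediump float;
varying vec2 vUv;
uniform sampler2D uFrame;
void main() { gl_FragColor = texture2D(uFrame, vUv); }`;

function compile(type: number, src: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}

const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER, VERT));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, FRAG));
gl.linkProgram(program);
gl.useProgram(program);

// Full-screen quad as two triangles.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
const aPos = gl.getAttribLocation(program, 'aPos');
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);

// The "frame texture data": a GPU texture refreshed from the decoded video frame.
gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function renderFrame(): void {
  if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
    // S101/S102: pull the current decoded frame into the bound GPU texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    // S103: render the texture; the browser composites the canvas on screen.
    gl.drawArrays(gl.TRIANGLES, 0, 6);
  }
  requestAnimationFrame(renderFrame);  // repeat S101–S103 once per display refresh
}
requestAnimationFrame(renderFrame);
```

In most browsers the texImage2D upload of a video frame stays on the GPU side, so the main thread's per-frame work is limited to issuing a few GL calls.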
In an embodiment of the present disclosure, on the basis of the embodiment of fig. 3, before sending the decoding instruction to the GPU in step S101, the method further includes: receiving a video playing instruction input by a user, and executing the step of sending the decoding instruction to the GPU according to the video playing instruction.
Specifically, in the web-side video processor, the controller sends the decoding instruction to the GPU only after a video playing instruction input by the user is received, for example a click on the "play" button of the video processor interface; decoding, rendering, and video playing then proceed.
In an embodiment of the present disclosure, on the basis of the embodiment of fig. 3, the method further includes: receiving a video pause instruction input by a user, and stopping the step of sending decoding instructions to the GPU according to the video pause instruction.
Specifically, in the web-side video processor, when the user inputs a video pause instruction, for example a click on the "pause" button of the video processor interface, the controller stops sending decoding instructions to the GPU, so decoding and rendering are no longer performed and video playing stops, as in the sketch below.
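A short sketch of this play/pause scheduling, assuming hypothetical #play-button and #pause-button elements and using drawImage as a stand-in for the texture rendering shown earlier; it is one possible wiring, not the patent's implementation.

```typescript
// Hypothetical play/pause wiring: the web-side controller only schedules
// playback; frame decoding stays inside the browser's media pipeline.
const vid = document.querySelector<HTMLVideoElement>('#source-video')!;
const ctx = document.querySelector<HTMLCanvasElement>('#player-canvas')!.getContext('2d')!;
let playing = false;

function loop(): void {
  if (!playing) return;  // pause instruction received: stop issuing per-frame work
  ctx.drawImage(vid, 0, 0, ctx.canvas.width, ctx.canvas.height);
  requestAnimationFrame(loop);
}

document.querySelector('#play-button')!.addEventListener('click', async () => {
  await vid.play();      // decoding resumes in the media pipeline
  playing = true;
  requestAnimationFrame(loop);
});

document.querySelector('#pause-button')!.addEventListener('click', () => {
  vid.pause();           // no new frames are decoded or drawn
  playing = false;
});
```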
In summary, the controller of the terminal device in this embodiment is only responsible for scheduling the playing, pausing, and so on of each video; it no longer decodes. Decoding is performed by the GPU, and the controller directly reads and renders the frame texture data to obtain and play the current video picture.
The video playing method provided by the embodiment of the present disclosure acquires image frame data of a video and sends a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; acquires the frame texture data; and renders the frame texture data and displays the rendered frame texture data through a browser. When a video is played on the web side, the embodiment calls the GPU to hard-decode the image frame data, which reduces the occupancy of the central processing unit (CPU) of the terminal device, improves decoding efficiency, and reduces video stuttering.
On the basis of the above embodiment, referring to fig. 4, fig. 4 is a second schematic flowchart of a video playing method provided by an embodiment of the present disclosure. The video playing method includes:
s201, acquiring image frame data of a video, and sending a decoding instruction to a GPU through a main thread.
The decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data, and the video is played through the video tag.
S202, acquiring the frame texture data.
S203, rendering the frame texture data through the main thread, and displaying the rendered frame texture data through a browser.
Step S202 in this embodiment is implemented similarly to step S102 in the foregoing embodiment, and details are not repeated here.
Different from the foregoing embodiment, this embodiment further defines a specific implementation of the decoding and rendering of video frames: the decoding instruction is sent to the GPU by the main thread, and the frame texture data is rendered by the main thread.
Specifically, when the web-side video processor runs, there may be one main thread and at least one worker thread. Only the main thread can operate the DOM, including the video tag; a worker thread can neither read DOM state nor operate the DOM, so many key components cannot be used in a worker thread. Therefore, in this embodiment, after the multi-track video is loaded, the decoding instruction is sent to the GPU through the main thread, the GPU returns the decoded frame texture data to the main thread, and the frame texture data is rendered in the main thread to obtain and play the current video picture.
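One possible realization of this main-thread/worker split, sketched under the assumption that frames are handed to the worker as transferable ImageBitmap objects (the patent does not name a transport); the file names, element ids, and the simple scale edit are illustrative.

```typescript
// Hypothetical main-thread / worker split (file names and element ids are assumptions).
// Only the main thread touches the DOM: the <video> element and the final draw stay
// here, while per-frame editing runs in a worker.

// ---- main.ts ----
const srcVideo = document.querySelector<HTMLVideoElement>('#source-video')!;
const outCanvas = document.querySelector<HTMLCanvasElement>('#player-canvas')!;
const outCtx = outCanvas.getContext('2d')!;
const editWorker = new Worker('edit-worker.js');

editWorker.onmessage = (e: MessageEvent<ImageBitmap>) => {
  // Edited frame returned by the worker; render it on the main thread.
  outCtx.drawImage(e.data, 0, 0, outCanvas.width, outCanvas.height);
  e.data.close();
};

async function pumpFrame(): Promise<void> {
  if (srcVideo.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
    // Snapshot the decoded frame and transfer it (ownership moves, no pixel copy).
    const frame = await createImageBitmap(srcVideo);
    editWorker.postMessage(frame, [frame]);
  }
  requestAnimationFrame(() => void pumpFrame());
}
void pumpFrame();

// ---- edit-worker.ts ----
// Runs off the main thread and never touches the DOM; OffscreenCanvas applies a
// simple example edit (a slight zoom-out) before handing the frame back.
self.onmessage = async (e: MessageEvent<ImageBitmap>) => {
  const frame = e.data;
  const off = new OffscreenCanvas(frame.width, frame.height);
  const ctx = off.getContext('2d')!;
  ctx.translate(frame.width / 2, frame.height / 2);
  ctx.scale(0.8, 0.8);                                  // example edit
  ctx.drawImage(frame, -frame.width / 2, -frame.height / 2);
  frame.close();
  const edited = off.transferToImageBitmap();
  (self as unknown as Worker).postMessage(edited, [edited]);
};
```

Transferring the ImageBitmap moves ownership rather than copying pixels, so the main thread stays free for DOM work and rendering while the worker performs the edits.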
In an embodiment of the present disclosure, on the basis of the embodiment of fig. 3, before rendering the frame texture data through the main thread, the method further includes: editing the frame texture data through a sub-thread to obtain edited frame texture data; rendering the frame texture data through the main thread then includes rendering the edited frame texture data through the main thread.
Specifically, after the main thread of the CPU sends the decoding instruction to the GPU, the CPU obtains the frame texture data decoded by the GPU, edits the frame texture data in a sub-thread, and renders the edited frame texture data on the main thread. Referring to fig. 5, fig. 5 is a schematic view of the processing flow of multi-track video playing provided by an embodiment of the present disclosure. In the main thread, frame texture data is obtained after hard decoding through the video tag and is passed to a sub-thread (another thread); in the sub-thread, custom reading of the frame texture data is performed and the data is processed by the editor components, after which the frame texture data is returned to the rendering queue in the main thread and rendered to a canvas in the DOM to obtain the current video picture. All textures are produced by the main thread and consumed by the main thread, forming a virtual data stream.
In an embodiment of the present disclosure, the editing process in the sub-thread includes at least one of: zooming, rotating, translating, and adding a special effect. Specifically, the various editing operations on the frame texture data, such as zooming, translating, rotating, and adding special effects, can be placed in the components running in the sub-thread, like the components in the other threads shown in fig. 5.
Note that when the sub-thread edits a multi-track video, it processes the video of each track individually, as in the sketch below.
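An illustrative sketch of one such editing component running in the sub-thread, assuming 2D-canvas transforms for zooming, rotating, and translating and a grayscale filter as a stand-in special effect (filter support on OffscreenCanvas varies by browser); the FrameEdit interface and the parameter values are assumptions, not part of the patent.

```typescript
// Illustrative editing component for one frame inside the sub-thread. The FrameEdit
// interface and parameter values are hypothetical.
interface FrameEdit {
  scale: number;       // 1 = original size (zooming)
  rotateRad: number;   // rotation around the frame centre, in radians
  translateX: number;  // horizontal translation, in pixels
  translateY: number;  // vertical translation, in pixels
  grayscale: boolean;  // stand-in "special effect"
}

function applyEdits(frame: ImageBitmap, edit: FrameEdit): ImageBitmap {
  const off = new OffscreenCanvas(frame.width, frame.height);
  const ctx = off.getContext('2d')!;

  if (edit.grayscale) ctx.filter = 'grayscale(100%)';   // special-effect pass

  // Compose translate -> rotate -> scale about the frame centre.
  ctx.translate(frame.width / 2 + edit.translateX, frame.height / 2 + edit.translateY);
  ctx.rotate(edit.rotateRad);
  ctx.scale(edit.scale, edit.scale);
  ctx.drawImage(frame, -frame.width / 2, -frame.height / 2);

  return off.transferToImageBitmap();
}

// Example: each track's frame is edited independently with its own parameters.
// const edited = applyEdits(frame, {
//   scale: 0.5, rotateRad: Math.PI / 6, translateX: 40, translateY: 0, grayscale: true,
// });
```

For a multi-track video, one FrameEdit per track can be kept and applyEdits called on each track's frame independently before the main thread composites the results.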
On the basis of the foregoing embodiment, the decoding instruction is sent to the GPU by the main thread, and the frame texture data is rendered by the main thread. When a multi-track video is played on the web side, the GPU is called to hard-decode the multi-track video, which reduces the occupancy of the central processing unit (CPU) of the terminal device and reduces stuttering of multi-track high-definition video.
Corresponding to the video playing method of the foregoing embodiments, fig. 6 is a block diagram of a video playing apparatus provided by an embodiment of the present disclosure. For ease of illustration, only the portions relevant to the embodiments of the present disclosure are shown. Referring to fig. 6, the apparatus includes: a first obtaining module 10, a second obtaining module 20, and a rendering processing module 30.
The first obtaining module 10 is configured to acquire image frame data of a video and send a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data; the second obtaining module 20 is configured to acquire the frame texture data; and the rendering processing module 30 is configured to render the frame texture data and display the rendered frame texture data through a browser.
In one embodiment of the present disclosure, the video is a multi-track video; the first obtaining module 10 is specifically configured to: and acquiring a plurality of image frame data corresponding to the multi-track video.
In an embodiment of the disclosure, the first obtaining module is specifically configured to: and sending a decoding instruction to the GPU through the main thread.
In an embodiment of the present disclosure, the rendering processing module 30 is specifically configured to: rendering the frame texture data by a main thread.
In an embodiment of the present disclosure, the rendering processing module 30 is further configured to: editing the frame texture data through a sub thread to obtain edited frame texture data; and rendering the edited frame texture data through the main thread.
In one embodiment of the present disclosure, the editing process includes at least one of: zooming, rotating, translating and adding special effects.
The device provided in this embodiment may be used to implement the technical solution of the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
An embodiment of the present disclosure further provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video playing method according to the first aspect and its various possible designs.
Referring to fig. 7, a schematic structural diagram of an electronic device 700 suitable for implementing an embodiment of the present disclosure is shown; the electronic device 700 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
That is, the embodiments of the present disclosure also provide a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the video playing method according to the first aspect and various possible designs of the first aspect is implemented.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description merely presents preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A video playing method, comprising:
acquiring image frame data of a video, and sending a decoding instruction to a graphics processing unit (GPU), wherein the decoding instruction is used for instructing the GPU to decode the image frame data to obtain frame texture data;
acquiring the frame texture data;
rendering the frame texture data, and displaying the rendered frame texture data through a browser.
2. The method of claim 1, wherein the video is a multi-track video; the acquiring image frame data of the video comprises:
and acquiring a plurality of image frame data corresponding to the multi-track video.
3. The method according to claim 1 or 2, wherein sending the decoding instruction to the graphics processor GPU comprises:
and sending a decoding instruction to the GPU through the main thread.
4. The method of claim 3, wherein the rendering the frame texture data comprises:
rendering the frame texture data by a main thread.
5. The method of claim 4, wherein before the rendering the frame texture data by the main thread, the method further comprises:
editing the frame texture data through a sub-thread to obtain edited frame texture data;
and wherein the rendering the frame texture data by the main thread comprises:
rendering the edited frame texture data through the main thread.
6. The method of claim 5, wherein the editing process includes at least one of:
zooming, rotating, translating and adding special effects.
7. A multi-track video playback device, comprising:
a first obtaining module, configured to acquire image frame data of a video and send a decoding instruction to a graphics processing unit (GPU), wherein the decoding instruction is used for instructing the GPU to decode the image frame data to obtain frame texture data;
a second obtaining module, configured to obtain the frame texture data;
and the rendering processing module is used for rendering the frame texture data and displaying the rendered frame texture data through a browser.
8. The apparatus of claim 7, wherein the video is a multi-track video; the first obtaining module is specifically configured to:
and acquiring a plurality of image frame data corresponding to the multi-track video.
9. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video playback method of any of claims 1-6.
10. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, implement the video playback method of any one of claims 1 to 6.
CN202110137036.4A 2021-02-01 2021-02-01 Video playing method and device, electronic equipment and storage medium Active CN114845162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110137036.4A CN114845162B (en) 2021-02-01 2021-02-01 Video playing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110137036.4A CN114845162B (en) 2021-02-01 2021-02-01 Video playing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114845162A 2022-08-02
CN114845162B 2024-04-02

Family

ID=82561170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110137036.4A Active CN114845162B (en) 2021-02-01 2021-02-01 Video playing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114845162B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430591B1 (en) * 1997-05-30 2002-08-06 Microsoft Corporation System and method for rendering electronic images
GB0216275D0 (en) * 2001-07-12 2002-08-21 Nec Corp Multi-thread executing method and parallel processing system
US20130187933A1 (en) * 2012-01-23 2013-07-25 Google Inc. Rendering content on computing systems
US8913068B1 (en) * 2011-07-12 2014-12-16 Google Inc. Displaying video on a browser
CN104853254A (en) * 2015-05-26 2015-08-19 深圳市理奥网络技术有限公司 Video playing method and mobile terminal
CN105933724A (en) * 2016-05-23 2016-09-07 福建星网视易信息***有限公司 Video producing method, device and system
CN107277616A (en) * 2017-07-21 2017-10-20 广州爱拍网络科技有限公司 Special video effect rendering intent, device and terminal
CN107948735A (en) * 2017-12-06 2018-04-20 北京金山安全软件有限公司 Video playing method and device and electronic equipment
EP3407563A1 (en) * 2017-05-26 2018-11-28 INTEL Corporation Method, apparatus and machine readable medium for accelerating network security monitoring
CN109218802A (en) * 2018-08-23 2019-01-15 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN109587559A (en) * 2018-11-27 2019-04-05 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN109862409A (en) * 2019-03-18 2019-06-07 广州市网星信息技术有限公司 Video decoding, playback method, device, system, terminal and storage medium
CN110620954A (en) * 2018-06-20 2019-12-27 北京优酷科技有限公司 Video processing method and device for hard solution
CN110704768A (en) * 2019-10-08 2020-01-17 支付宝(杭州)信息技术有限公司 Webpage rendering method and device based on graphics processor
CN111355978A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Video file processing method and device, mobile terminal and storage medium
CN111405288A (en) * 2020-03-19 2020-07-10 北京字节跳动网络技术有限公司 Video frame extraction method and device, electronic equipment and computer readable storage medium
WO2020248668A1 (en) * 2019-06-10 2020-12-17 海信视像科技股份有限公司 Display and image processing method
CN112291628A (en) * 2020-11-25 2021-01-29 杭州视洞科技有限公司 Multithreading video decoding playing method based on web browser
WO2023045973A1 (en) * 2021-09-27 2023-03-30 北京字跳网络技术有限公司 Method and apparatus for performing cloud rendering on live streaming gift, and electronic device and storage medium

Also Published As

Publication number Publication date
CN114845162B 2024-04-02

Similar Documents

Publication Publication Date Title
CN109640188B (en) Video preview method and device, electronic equipment and computer readable storage medium
CN112437345B (en) Video double-speed playing method and device, electronic equipment and storage medium
CN111436005B (en) Method and apparatus for displaying image
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
US20220310125A1 (en) Method and apparatus for video production, device and storage medium
CN110290398B (en) Video issuing method and device, storage medium and electronic equipment
US20220394333A1 (en) Video processing method and apparatus, storage medium, and electronic device
CN113507637A (en) Media file processing method, device, equipment, readable storage medium and product
CN110519645B (en) Video content playing method and device, electronic equipment and computer readable medium
CN115767181A (en) Live video stream rendering method, device, equipment, storage medium and product
CN113676769A (en) Video decoding method, apparatus, storage medium, and program product
CN113301424A (en) Play control method, device, storage medium and program product
CN110611847B (en) Video preview method and device, storage medium and electronic equipment
CN114845162B (en) Video playing method and device, electronic equipment and storage medium
CN115934227A (en) Application program operation control method, device, equipment and storage medium
CN115330916A (en) Method, device and equipment for generating drawing animation, readable storage medium and product
CN111385638B (en) Video processing method and device
CN113747226A (en) Video display method and device, electronic equipment and program product
CN111866508A (en) Video processing method, device, medium and electronic equipment
WO2024140069A1 (en) Video processing method and apparatus, and electronic device
CN115802104A (en) Video skip playing method and device, electronic equipment and storage medium
WO2023030402A1 (en) Video processing method, apparatus and system
CN115776594A (en) Video continuous playing method and device, electronic equipment and storage medium
CN113129360B (en) Method and device for positioning object in video, readable medium and electronic equipment
WO2023011557A1 (en) Image processing method and apparatus, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant