CN111327921A - Video data processing method and device - Google Patents


Info

Publication number
CN111327921A
CN111327921A (application CN201811540883.XA)
Authority
CN
China
Prior art keywords
data processing
node
video data
data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811540883.XA
Other languages
Chinese (zh)
Inventor
熊磊 (Xiong Lei)
Current Assignee
Shenzhen Weibo Technology Co ltd
Original Assignee
Shenzhen Weibo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Weibo Technology Co ltd
Priority to CN201811540883.XA
Publication of CN111327921A
Legal status: Pending

Classifications

    • H — ELECTRICITY → H04 — ELECTRIC COMMUNICATION TECHNIQUE → H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD] → H04N 21/20 — Servers specifically adapted for the distribution of content, e.g. VOD servers → H04N 21/23 — Processing of content or additional data; elementary server operations; server middleware, under which:
    • H04N 21/23103 — Content storage operation (e.g. caching movies for short-term storage, replicating data over plural servers, prioritizing data for deletion) using load-balancing strategies, e.g. by placing or distributing content on different disks, memories or servers
    • H04N 21/2343 — Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/2393 — Interfacing the upstream path of the transmission network involving handling client requests
    • H04N 21/2405 — Monitoring of the internal components or processes of the server, e.g. server load

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention is applicable to the field of computer technology and provides a video data processing method and device. The video data processing method comprises the following steps: acquiring video data to be processed and the data processing tasks of the video data; acquiring performance parameters of pre-registered operation nodes; constructing a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks; sending data processing instructions to the target operation nodes based on the computation flow graph; and acquiring the target video data output by the output node of the computation flow graph after the data processing instruction corresponding to the output node has been executed. Under this scheme, data processing tasks are allocated according to the performance analysis result of each operation node and system resources are used reasonably, so that computing resources can be utilized effectively.

Description

Video data processing method and device
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a video data processing method and video data processing equipment.
Background
With the development of science and technology, video is applied in ever more fields, such as video calls, high-definition television programmes, live broadcasts, and video editing software. In the course of video editing, a video undergoes various kinds of processing, such as decoding, compression and picture processing. A traditional video data processing architecture adopts a globally serial streaming structure or a locally parallel structure.
However, when a global serial streaming structure or a local parallel structure is adopted, a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU) are in an unsaturated or idle state for most of the time, and thus, computational resources cannot be reasonably used.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video data processing method and device, so as to solve the problem in the prior art that a CPU and a GPU are in an unsaturated or idle state most of the time, and computational resources cannot be used reasonably.
A first aspect of an embodiment of the present invention provides a video data processing method, including:
acquiring video data to be processed and a data processing task of the video data;
acquiring performance parameters of pre-registered operation nodes;
constructing a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, wherein the computation flow graph comprises node information of target operation nodes for processing each data processing task and data flow directions among the target operation nodes, and the target operation nodes comprise input nodes, intermediate nodes and output nodes;
sending a data processing instruction to the target operation node based on the computation flow graph;
and acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
A second aspect of an embodiment of the present invention provides a video data processing apparatus, including:
a first acquisition unit, configured to acquire video data to be processed and a data processing task of the video data;
the second acquisition unit is used for acquiring the performance parameters of the operation nodes which are registered in advance;
a construction unit, configured to construct a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, where the computation flow graph includes node information of target operation nodes for processing each data processing task and a data flow direction between the target operation nodes, and the target operation nodes include an input node, an intermediate node, and an output node;
a sending unit, configured to send a data processing instruction to the target operation node based on the computation flow graph;
and the third acquisition unit is used for acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
A third aspect of an embodiment of the present invention provides a video data processing apparatus, including:
a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of an embodiment of the present invention provides a computer-readable storage medium, including: a computer program which, when executed by a processor, carries out the steps of the method according to the first aspect as described above.
Compared with the prior art, the embodiments of the invention acquire the video data to be processed and the data processing tasks of the video data; acquire the performance parameters of pre-registered operation nodes; construct a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, where the computation flow graph comprises the node information of the target operation nodes that process each data processing task and the data flow directions among the target operation nodes, the target operation nodes comprising input nodes, intermediate nodes and output nodes; send data processing instructions to the target operation nodes based on the computation flow graph; and acquire the target video data output by the output node of the computation flow graph after the data processing instruction corresponding to the output node has been executed. Under this scheme, data processing tasks are allocated according to the performance analysis result of each operation node and system resources are used reasonably, so that computing resources can be utilized effectively.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a video data processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of S101 in a video data processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of an implementation of S102 in a video data processing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a computation flow graph in a video data processing method according to an embodiment of the present invention;
fig. 5 is a flowchart of an implementation of S103 in a video data processing method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a video data processing apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a video data processing method according to an embodiment of the present invention. In this embodiment, the execution subject of the video data processing method is a device; the device includes, but is not limited to, a server or a terminal, where the server may be a single server, any server in a cluster composed of a plurality of servers, or a cloud computing service centre. The video data processing method shown in fig. 1 may include:
s101: and acquiring video data to be processed and a data processing task of the video data.
When the device detects a video file to be processed, the device acquires video data to be processed contained in the video file to be processed. The video file to be processed may be obtained locally from the device executing the embodiment, or may be obtained from other non-local devices, which is not limited herein.
The video file to be processed can be designated by the user: the user may select a video to be processed from a local video file set on the device executing this embodiment or on another non-local device, may select a received video as the video to be processed, or may select a video shot with the camera function as the video file to be processed.
The format of the video to be processed may be RM, RMVB, MPEG-1 to MPEG-4, MOV, MTV, DAT, WMV, AVI, 3GP, AMV, DMV, FLV, or the like; no limitation is imposed here.
After the device obtains the video data to be processed, it splits the video data into data processing tasks according to the processing requirements and thereby determines the data processing tasks of the video data, where a data processing task is a minimum behavioural task unit required for processing the video data to be processed. For example, if video data A and B need to be overlaid, the minimum behavioural task units required are:
1. the demuxer first separates the media files A and B to obtain the compressed audio and video data A-video-compress, A-audio-compress, B-video-compress and B-audio-compress;
2. the decoder decodes A-video-compress, A-audio-compress, B-video-compress and B-audio-compress to obtain A-video-uncompress, A-audio-uncompress, B-video-uncompress and B-audio-uncompress;
3. PIP superimposes the two video streams A-video-uncompress and B-video-uncompress to obtain AB-video-uncompress;
4. the encoder encodes AB-video-uncompress and the single audio stream A-audio-uncompress to obtain AB-video-compress and A-audio-compress;
5. the muxer merges AB-video-compress and A-audio-compress to generate the required file format.
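The five task units above can be sketched as a simple data structure. The `Task` class and function below are illustrative only, not an API defined by this application:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str            # task type: demux / decode / pip / encode / mux
    inputs: list         # data items the task consumes
    outputs: list        # data items the task produces

def overlay_tasks():
    """Minimum behavioural task units for overlaying video A on video B."""
    return [
        Task("demux", ["A", "B"],
             ["A-video-compress", "A-audio-compress",
              "B-video-compress", "B-audio-compress"]),
        Task("decode", ["A-video-compress", "A-audio-compress",
                        "B-video-compress", "B-audio-compress"],
             ["A-video-uncompress", "A-audio-uncompress",
              "B-video-uncompress", "B-audio-uncompress"]),
        Task("pip", ["A-video-uncompress", "B-video-uncompress"],
             ["AB-video-uncompress"]),
        Task("encode", ["AB-video-uncompress", "A-audio-uncompress"],
             ["AB-video-compress", "A-audio-compress"]),
        Task("mux", ["AB-video-compress", "A-audio-compress"],
             ["AB"]),
    ]
```

Each task's outputs feed the inputs of a later task; this dependency information is exactly what the computation flow graph of S103 encodes.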
Further, S101 may include S1011 to S1012, as shown in fig. 2, specifically as follows:
s1011: and acquiring video data to be processed and a user instruction.
The way the video data to be processed is acquired in S1011 is the same as in S101; see S101 for details, which are not repeated here.
When the device detects that the user has triggered a user instruction, it obtains the instruction. The user instruction includes at least one processing mode for the video data to be processed. The user may trigger an instruction by clicking a "video processing" virtual button on an interactive interface and then entering the processing mode for the video data to be processed; alternatively, the interactive interface may present the user with a list of video processing modes from which to choose. For example, after the user selects the video to be processed, a list of video processing modes pops up, including splicing, clipping, portrait beautification and so on; the video processing mode the user selects from the list constitutes the user instruction. The user may also select several video processing modes at once, for example both clipping and portrait beautification for the same video.
S1012: determining a data processing task for the video data based on the video data and the user instruction.
The device analyses the video data based on the user instruction, determines the processing mode specified in the instruction, and determines the data processing tasks of the video data to be processed accordingly. For example, when the video to be processed is A and the user instructs the device to clip video A, after the video data of A is acquired it is split based on that data and the clipping instruction, so the data processing tasks of video A include: decoding, separating audio and video, clipping the video and audio, synthesizing the clipped video and audio, and compressing the result into a playable format.
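As an illustration only (the mapping below is hypothetical, not defined by this application), the step can be pictured as a lookup from the user's chosen processing mode to the task sequence it implies:

```python
def tasks_for_instruction(instruction):
    """Map a user's processing mode to an illustrative task sequence."""
    table = {
        # clip: decode, split A/V, clip, re-synthesize, re-encode
        "clip": ["decode", "split-av", "clip-video", "clip-audio",
                 "merge-av", "encode"],
        # overlay: the five-step picture-in-picture pipeline
        "overlay": ["demux", "decode", "pip", "encode", "mux"],
    }
    if instruction not in table:
        raise ValueError(f"unsupported instruction: {instruction}")
    return table[instruction]
```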
S102: and acquiring performance parameters of the pre-registered operation nodes.
The device registers operation nodes in advance; an operation node is a data processing unit located in a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). In this embodiment the CPU and GPU may be local or non-local; no limitation is imposed. The performance parameters of a pre-registered operation node are the parameters relevant to that node's data processing; different operation nodes have different performance parameters, which may comprise several individual parameters, and the node's capability to process data can be judged from them.
The device may send an instruction to an operation node to acquire its performance parameters; on receiving the instruction, the operation node sends its own performance parameters to the device, which thereby acquires the performance parameters of the pre-registered operation node.
Further, in order to obtain detailed parameters of the operation node, the performance parameters of the operation node include: the time required for the operation node to process a preset data processing task, the number of preset data processing tasks the operation node can process concurrently, the data transmission time of the operation node, and the proportion of system resources occupied by the operation node.
The time required for an operation node to process a preset data processing task is obtained as follows: the device presets data processing tasks for common video editing operations, such as clipping, merging, decoding and portrait beautification. The device sends an instruction to the operation node to process the preset data processing task and records the time the node takes to complete it. The preset data processing tasks set for different operation nodes may differ; for example, the preset tasks set for CPU operation nodes and for GPU operation nodes may be different.
The concurrency of an operation node is the number of different preset data processing tasks the same node can process simultaneously. It is obtained as follows: the device presets data processing tasks for common video editing operations and sends an instruction to the operation node to process N (N being a positive integer greater than or equal to 2) preset tasks simultaneously. If the node processes the data normally, the device next instructs it to process N + 1 preset tasks simultaneously; when the node becomes overloaded, the concurrency the node can handle normally has been found.
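The probing loop described above can be sketched as follows; `process_batch` stands in for the device instructing the node to run a batch of preset tasks and reporting whether processing was normal (all names are illustrative):

```python
def probe_concurrency(process_batch, start=2, limit=64):
    """Raise the batch size until the node overloads.

    process_batch(n) -> True if the node handled n simultaneous
    preset tasks normally, False if it was overloaded.
    Returns the largest batch size handled normally.
    """
    n = start
    if not process_batch(n):
        return n - 1                  # overloaded already at the start
    while n < limit and process_batch(n + 1):
        n += 1                        # last batch was normal; try one more
    return n
```

The `limit` cap is an added safety bound for the sketch; the application itself only describes incrementing until overload.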
The data transmission time of an operation node is the time the node needs to transmit data to another operation node.
The system resource occupation proportion of the operation nodes is the percentage of the resources of the CPU or the GPU occupied by the operation nodes at the current moment.
Further, in order to ensure the comprehensiveness and accuracy of obtaining the performance parameters of the operation node, S102 may include S1021 to S1022, as shown in fig. 3, specifically as follows:
s1021: and acquiring the initialized performance parameters and the real-time performance parameters of the pre-registered operation nodes.
The device acquires the initialization performance parameters of the pre-registered operation nodes; these are performance indicators based on the node's initial attributes and characteristics, and are not real-time. The initialization performance parameters may include: the time required for the node to process a preset data processing task, the number of preset data processing tasks the node can process concurrently, the node's data transmission time, information about the device where the node is located, and the like.
The device acquires the real-time performance parameters of the pre-registered operation nodes; these are performance indicators of the node's data processing at the current moment, and are real-time. The real-time performance parameters may include: the node's system resource occupation proportion, the real-time temperature of the CPU or GPU corresponding to the node, and the like.
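A minimal sketch of the two parameter groups and their combination (S1022); the field names are assumptions chosen for illustration, not terms defined by this application:

```python
from dataclasses import dataclass

@dataclass
class InitParams:
    """Measured once, at registration (no real-time property)."""
    task_duration_ms: float   # time to process the preset task
    concurrency: int          # max simultaneous preset tasks
    transfer_ms: float        # time to ship data to another node

@dataclass
class RealtimeParams:
    """Sampled at the current moment."""
    resource_usage: float     # fraction of CPU/GPU occupied, 0..1
    temperature_c: float      # device temperature

def merge_params(init, rt):
    """Collate both views into one flat set of performance parameters."""
    return {**vars(init), **vars(rt)}
```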
Further, in order to ensure the accuracy of the initialization performance parameters the device acquires, before S1021 the method may include: updating the initialization performance parameters of an operation node when a change in the node's information is detected. The device detects node information in real time; the node information of an operation node includes the information of the device hosting the node, the node's position information, the node's identification information, and so on. When the device detects that an operation node's information has changed, i.e. the operation node itself has changed, the possible situations include a change of hosting device, a change of CPU or GPU, a repartitioning of operation nodes, and the like. On detecting such a change, the device acquires the initialization performance parameters of the changed operation node and updates them.
S1022: and determining the performance parameters of the operation node based on the initialization performance parameters and the real-time performance parameters.
The device obtains the initialization performance parameters and the real-time performance parameters, collates and integrates them into one set of performance parameters, and determines the performance parameters of the operation node from that set.
S103: and constructing a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, wherein the computation flow graph comprises node information of target operation nodes for processing each data processing task and data flow directions among the target operation nodes, and the target operation nodes comprise input nodes, intermediate nodes and output nodes.
The device matches the performance parameters of the operation nodes with the data processing tasks to obtain operation nodes suitable for processing the data processing tasks, different data processing tasks can be processed by different operation nodes, and the same operation node can also process a plurality of different data processing tasks.
The device obtains the operation nodes suitable for processing the data processing tasks and, at the same time, constructs a computation flow graph corresponding to the video data based on the data processing tasks. As shown in fig. 4, the computation flow graph graphically depicts the flow and processing of data in the system; it reflects only the logical functions the system needs to complete, and is a functional model comprising the input data streams, output data streams, data operation nodes and data flow directions. The computation flow graph corresponding to the video data depicts the journey of the video data from input to output, and the processing applied to the data stream by each operation node along the way, including the node information of the target operation nodes that process each data processing task and the data flow directions among them, where the target operation nodes comprise input nodes, intermediate nodes and output nodes.
Further, in order to obtain the node information of the target operation nodes that process each data processing task and the data flow directions among the target operation nodes in the computation flow graph, where the target operation nodes comprise input nodes, intermediate nodes and output nodes, S103 may include S1031 to S1034, as shown in fig. 5, specifically as follows:
s1031: acquiring task information of the data processing task, wherein the task information comprises: the type of the data processing task, the data volume of the data processing task, and the processing sequence of the data processing task.
The device acquires task information of the data processing task from the data processing task, wherein the task information of the data processing task is all relevant information of the data processing task, including the type of the data processing task, the data volume of the data processing task and the processing sequence of the data processing task.
The type of the data processing task is obtained by classifying according to the content of the data processing task, and the type of the data processing task may include: a demux data processing task, a decode data processing task, an encode data processing task, a mux data processing task, etc., wherein the demux data processing task is to "separate" a video portion or an audio portion in a file; the decode data processing task is to convert the coded media file into sound or image again; the encode data processing task is a process of converting a section of sound or image into a computer digital file through a certain protocol or rule; the mux data processing task is to pack the video material and the audio material into a single file. For example, if the data processing task is to split video a into two parts, image and audio, then the type of the data processing task is a demux data processing task. Furthermore, it will be understood by those skilled in the art that the types of data processing tasks may include, but are not limited to, the above data processing tasks, and may also include video special effects processing tasks, visual algorithms processing tasks, and any other processing tasks for video data.
The data amount of the data processing task may be used to measure whether the compute node has the capability to process the data processing task, including the sum of all data amounts processed in the data processing task, for example, if the data processing task is to split 50 minutes of video a into two parts, image and audio, then the data processing task includes the image data of the whole frame number of video a in 50 minutes and the audio data of the whole video a in 50 minutes.
The processing sequence of the data processing tasks is the order of all the data processing tasks within the overall processing. For example, if video data A and B need to be overlaid, the order of processing is: first, the demuxer separates media files A and B to obtain the compressed audio and video data A-video-compress, A-audio-compress, B-video-compress and B-audio-compress; second, the decoder decodes these to obtain A-video-uncompress, A-audio-uncompress, B-video-uncompress and B-audio-uncompress; third, PIP superimposes the two video streams A-video-uncompress and B-video-uncompress to obtain AB-video-uncompress; fourth, the encoder encodes AB-video-uncompress and the single audio stream A-audio-uncompress to obtain AB-video-compress and A-audio-compress; fifth, the muxer merges AB-video-compress and A-audio-compress to generate the required file format.
S1032: and determining node information of a target operation node for processing the data processing task based on the performance parameter, the type of the data processing task and the data volume of the data processing task.
The device obtains the type of the data processing task and its data volume, matches them against the performance parameters of the operation nodes, and preferentially selects an operation node capable of processing a task of that type and volume; that node is the target operation node for the data processing task. The device then acquires the node information of the target operation node, which includes the data the node is to process, the node's processing mode, the data the node outputs, and the like.
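A minimal matching sketch, assuming each registered node advertises the task types it supports, a capacity figure, and its current usage (all hypothetical fields; the application does not prescribe a matching algorithm):

```python
def pick_node(task_type, data_mb, nodes):
    """Pick a registered node for a task.

    nodes: list of dicts with keys 'id', 'types' (set of supported
    task types), 'capacity_mb', and 'usage' (current load, 0..1).
    Returns the id of the least-loaded qualifying node, or None.
    """
    candidates = [n for n in nodes
                  if task_type in n["types"] and n["capacity_mb"] >= data_mb]
    if not candidates:
        return None
    # Prefer the least-loaded node among those that qualify.
    return min(candidates, key=lambda n: n["usage"])["id"]
```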
S1033: and determining the data flow direction between the target operation nodes based on the processing sequence of the data processing tasks.
The device determines the order of the target operation nodes that process the data processing tasks based on the processing sequence of those tasks, and thereby determines the data flow directions among the target operation nodes. There is only one final output data stream, namely the processed video data, and the device controls whether the overall data processing is synchronous or asynchronous by controlling the data flow directions. The order of the target operation nodes includes both serial and parallel processing orders.
For example, when a user wants to superimpose a small-resolution video A on a large-resolution video B to form a picture-in-picture video, the data volume of video B is large relative to video A, so decoding B takes longer than decoding A; in the end, the decoded data of both video A and video B flow to the picture-in-picture composition operation node.
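This parallel-then-join flow can be sketched with illustrative node names: tasks whose inputs are all ready may run concurrently, and both decoded streams converge on the composition node:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical picture-in-picture flow: A and B decode in parallel,
# then both decoded streams flow into the composition node.
graph = {
    "decode_A": [],
    "decode_B": [],
    "compose_pip": ["decode_A", "decode_B"],
    "encode": ["compose_pip"],
}

ts = TopologicalSorter(graph)
ts.prepare()
stages = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # every task in one batch can run in parallel
    stages.append(ready)
    ts.done(*ready)

# Stage 1 decodes A and B concurrently; composition waits for both.
assert stages == [["decode_A", "decode_B"], ["compose_pip"], ["encode"]]
```

Even though decode_B takes longer than decode_A, the composition node simply does not become ready until both upstream streams have arrived, which is how the data flow direction enforces synchronization.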
S1034: and constructing a calculation flow graph corresponding to the video data based on the node information of the target operation node of the data processing task and the data flow direction between the target operation nodes.
Based on the node information of the target operation nodes of the data processing tasks and the data flow direction between the target operation nodes, the device determines the input data streams, output data stream, data flow direction, input nodes, intermediate nodes and output nodes of the computation flow graph, and constructs the computation flow graph corresponding to the video data; the computation flow graph has only one input node and one output node.
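Classifying the target operation nodes into input, intermediate and output nodes follows directly from the data flow direction; a small sketch with illustrative edges:

```python
# Each edge (src, dst) is one data flow direction between operation nodes.
edges = [("demux", "decode"), ("decode", "overlay"),
         ("overlay", "encode"), ("encode", "mux")]

nodes = {n for e in edges for n in e}
has_in = {dst for _, dst in edges}    # nodes that receive data
has_out = {src for src, _ in edges}   # nodes that send data

input_nodes = nodes - has_in          # no incoming data flow
output_nodes = nodes - has_out        # no outgoing data flow
intermediate = nodes & has_in & has_out

assert input_nodes == {"demux"} and output_nodes == {"mux"}
```

A well-formed computation flow graph in the sense of this step would be one where both `input_nodes` and `output_nodes` contain exactly one element.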
S104: and sending a data processing instruction to the target operation node based on the computation flow graph.
Based on the computation flow graph, the device sends a data processing instruction to each target operation node; the operation node processes data according to the received instruction, which covers receiving data, processing data and sending data. Data may be transmitted by establishing communication directly between the operation nodes, in which case the operation nodes may reside on the same hardware device; alternatively, communication may be established between the device and the operation nodes, using Remote Procedure Call (RPC) or socket communication.
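A data processing instruction covering those three parts might be serialized as below before being sent over RPC or a socket. The JSON wire format and the field names are assumptions for illustration; the patent does not specify a concrete protocol:

```python
import json

def make_instruction(node_id, recv_from, operation, send_to):
    """Build one data processing instruction: receive, process, send."""
    return json.dumps({
        "node": node_id,
        "receive_from": recv_from,  # upstream node(s) to read data from
        "process": operation,       # e.g. "decode", "overlay", "encode"
        "send_to": send_to,         # downstream node to forward results to
    })

# The device would transmit this string to node-2 over RPC or a socket.
msg = make_instruction("node-2", ["demux"], "decode", "overlay")
decoded = json.loads(msg)
assert decoded["process"] == "decode" and decoded["send_to"] == "overlay"
```

One such instruction per target operation node, issued in the order given by the computation flow graph, is enough to drive the whole pipeline.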
S105: and acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
The device acquires the data output by the output node of the computation flow graph after the data processing instruction corresponding to that node has been executed; this data is the processed video data, i.e., the target video data.
Compared with the prior art, the method and the device have the advantages that the video data to be processed and the data processing task of the video data are obtained; acquiring performance parameters of pre-registered operation nodes; constructing a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, wherein the computation flow graph comprises node information of target operation nodes for processing each data processing task and data flow directions among the target operation nodes, and the target operation nodes comprise input nodes, intermediate nodes and output nodes; sending a data processing instruction to the target operation node based on the computation flow graph; and acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed. According to the scheme, the data processing tasks are reasonably distributed according to the performance analysis result of each operation node, and the system resources are reasonably used, so that the operation resources can be effectively utilized.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 6, fig. 6 is a schematic diagram of a video data processing apparatus according to an embodiment of the present invention. The units it includes are used to perform the steps in the embodiments corresponding to figs. 1 to 5; for details, please refer to the related descriptions of the embodiments of figs. 1 to 5. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 6, the video data processing apparatus 6 includes:
a first obtaining unit 610, configured to obtain video data to be processed and a data processing task of the video data;
a second obtaining unit 620, configured to obtain performance parameters of pre-registered operation nodes;
a constructing unit 630, configured to construct a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, where the computation flow graph includes node information of target operation nodes for processing each data processing task and a data flow direction between the target operation nodes, and the target operation nodes include an input node, an intermediate node, and an output node;
a sending unit 640, configured to send a data processing instruction to the target operation node based on the computation flow graph;
a third obtaining unit 650, configured to obtain target video data that is output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
Further, the building unit 630 includes:
a fourth obtaining unit, configured to obtain task information of the data processing task, where the task information includes: the type of the data processing task, the data volume of the data processing task and the processing sequence of the data processing task;
a first determining unit, configured to determine node information of a target operation node that processes the data processing task based on the performance parameter, the type of the data processing task, and a data volume of the data processing task;
a second determining unit, configured to determine a data flow direction between the target operation nodes based on a processing order of the data processing tasks;
and the first construction unit is used for constructing a computation flow graph corresponding to the video data based on the node information of the target operation node of the data processing task and the data flow direction between the target operation nodes.
Further, the second obtaining unit 620 includes:
a fifth acquiring unit, configured to acquire an initialization performance parameter and a real-time performance parameter of a pre-registered operation node;
and the third determining unit is used for determining the performance parameters of the operation node based on the initialization performance parameters and the real-time performance parameters.
Further, the second obtaining unit 620 further includes an updating unit configured to:
and when the change of the node information of the operation node is detected, updating the initialization performance parameters of the operation node.
Further, the performance parameters of the operation node include: the time required for the operation node to process a preset data processing task, the number of preset data processing tasks that the operation node can process concurrently, the data transmission time of the operation node, and the proportion of system resources occupied by the operation node.
Further, the first obtaining unit 610 is configured to:
acquiring video data to be processed and a user instruction;
determining a data processing task for the video data based on the video data and the user instruction.
Fig. 7 is a schematic diagram of a video data processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the video data processing apparatus 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a video data processing program, stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the various video data processing method embodiments described above, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the units in the above-described device embodiments, such as the functions of the units 610 to 650 shown in fig. 6.
Illustratively, the computer program 72 may be divided into one or more units, which are stored in the memory 71 and executed by the processor 70 to accomplish the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the video data processing device 7. For example, the computer program 72 may be divided into a first acquiring unit, a second acquiring unit, a constructing unit, a sending unit, and a third acquiring unit, and each unit has the following specific functions:
the device comprises a first acquisition unit, a second acquisition unit and a processing unit, wherein the first acquisition unit is used for acquiring video data to be processed and a data processing task of the video data;
the second acquisition unit is used for acquiring the performance parameters of the operation nodes which are registered in advance;
a construction unit, configured to construct a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, where the computation flow graph includes node information of target operation nodes for processing each data processing task and a data flow direction between the target operation nodes, and the target operation nodes include an input node, an intermediate node, and an output node;
a sending unit, configured to send a data processing instruction to the target operation node based on the computation flow graph;
and the third acquisition unit is used for acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
The video data processing device 7 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The video data processing device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is only an example of the video data processing device 7 and does not constitute a limitation of it; the device may comprise more or fewer components than shown, combine certain components, or use different components; for example, the video data processing device may also comprise input/output devices, network access devices, a bus, and so on.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the video data processing apparatus 7, such as a hard disk or a memory of the video data processing apparatus 7. The memory 71 may also be an external storage device of the video data processing apparatus 7, such as a plug-in hard disk provided on the video data processing apparatus 7, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 71 may also include both an internal storage unit and an external storage device of the video data processing device 7. The memory 71 is used for storing the computer programs and other programs and data required by the video data processing device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of processing video data, comprising:
acquiring video data to be processed and a data processing task of the video data;
acquiring performance parameters of pre-registered operation nodes;
constructing a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, wherein the computation flow graph comprises node information of target operation nodes for processing each data processing task and data flow directions among the target operation nodes, and the target operation nodes comprise input nodes, intermediate nodes and output nodes;
sending a data processing instruction to the target operation node based on the computation flow graph;
and acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
2. The video data processing method of claim 1, wherein said constructing a computational flow graph corresponding to said video data based on said performance parameters and said data processing tasks comprises:
acquiring task information of the data processing task, wherein the task information comprises: the type of the data processing task, the data volume of the data processing task and the processing sequence of the data processing task;
determining node information of a target operation node for processing the data processing task based on the performance parameter, the type of the data processing task and the data volume of the data processing task;
determining the data flow direction between the target operation nodes based on the processing sequence of the data processing tasks;
and constructing a calculation flow graph corresponding to the video data based on the node information of the target operation node of the data processing task and the data flow direction between the target operation nodes.
3. The video data processing method of claim 1, wherein the acquiring performance parameters of pre-registered operation nodes comprises:
acquiring an initialization performance parameter and a real-time performance parameter of a pre-registered operation node;
and determining the performance parameters of the operation node based on the initialization performance parameters and the real-time performance parameters.
4. The video data processing method of claim 3, wherein prior to said determining performance parameters of said compute node based on said initialization performance parameters and said real-time performance parameters, further comprises:
and when the change of the node information of the operation node is detected, updating the initialization performance parameters of the operation node.
5. The video data processing method of claim 1, wherein the performance parameters of the operation node comprise: the time required for the operation node to process a preset data processing task, the number of preset data processing tasks that the operation node can process concurrently, the data transmission time of the operation node, and the proportion of system resources occupied by the operation node.
6. The video data processing method according to any one of claims 1 to 5, wherein the acquiring of the video data to be processed and the data processing task of the video data comprises:
acquiring video data to be processed and a user instruction;
determining a data processing task for the video data based on the video data and the user instruction.
7. A video data processing apparatus, comprising:
the device comprises a first acquisition unit, a second acquisition unit and a processing unit, wherein the first acquisition unit is used for acquiring video data to be processed and a data processing task of the video data;
the second acquisition unit is used for acquiring the performance parameters of the operation nodes which are registered in advance;
a construction unit, configured to construct a computation flow graph corresponding to the video data based on the performance parameters and the data processing tasks, where the computation flow graph includes node information of target operation nodes for processing each data processing task and a data flow direction between the target operation nodes, and the target operation nodes include an input node, an intermediate node, and an output node;
a sending unit, configured to send a data processing instruction to the target operation node based on the computation flow graph;
and the third acquisition unit is used for acquiring target video data output by the output node in the computation flow graph after the data processing instruction corresponding to the output node is executed.
8. The video data processing apparatus of claim 7, wherein the construction unit comprises:
a fourth obtaining unit, configured to obtain task information of the data processing task, where the task information includes: the type of the data processing task, the data volume of the data processing task and the processing sequence of the data processing task;
a first determining unit, configured to determine node information of a target operation node that processes the data processing task based on the performance parameter, the type of the data processing task, and a data volume of the data processing task;
a second determining unit, configured to determine a data flow direction between the target operation nodes based on a processing order of the data processing tasks;
and the first construction unit is used for constructing a computation flow graph corresponding to the video data based on the node information of the target operation node of the data processing task and the data flow direction between the target operation nodes.
9. Video data processing device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of the method according to any of claims 1 to 6 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201811540883.XA 2018-12-17 2018-12-17 Video data processing method and device Pending CN111327921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811540883.XA CN111327921A (en) 2018-12-17 2018-12-17 Video data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811540883.XA CN111327921A (en) 2018-12-17 2018-12-17 Video data processing method and device

Publications (1)

Publication Number Publication Date
CN111327921A true CN111327921A (en) 2020-06-23

Family

ID=71170641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811540883.XA Pending CN111327921A (en) 2018-12-17 2018-12-17 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN111327921A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1193143A (en) * 1997-03-07 1998-09-16 富士通株式会社 Information processing device, multi-task controlling method and programme recording medium
US20080187053A1 (en) * 2007-02-06 2008-08-07 Microsoft Corporation Scalable multi-thread video decoding
CN101860752A (en) * 2010-05-07 2010-10-13 浙江大学 Video code stream parallelization method for embedded multi-core system
CN104469370A (en) * 2013-09-17 2015-03-25 中国普天信息产业股份有限公司 Video transcode method and device
CN105159761A (en) * 2008-06-06 2015-12-16 苹果公司 Application programming interfaces for data parallel computing on multiple processors
CN107783782A (en) * 2016-08-25 2018-03-09 萨思学会有限公司 Compiling for parallel processing of the node apparatus based on GPU
CN108924221A (en) * 2018-06-29 2018-11-30 华为技术有限公司 The method and apparatus for distributing resource


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112165649A (en) * 2020-08-14 2021-01-01 咪咕文化科技有限公司 Audio and video processing method and device, electronic equipment and storage medium
CN114697705A (en) * 2020-12-29 2022-07-01 深圳云天励飞技术股份有限公司 Video stream object processing method and device, video stream processing system and electronic equipment
CN114697705B (en) * 2020-12-29 2024-03-22 深圳云天励飞技术股份有限公司 Video stream object processing method and device, video stream processing system and electronic equipment
CN114356511A (en) * 2021-08-16 2022-04-15 中电长城网际***应用有限公司 Task allocation method and system
CN114697706A (en) * 2022-03-29 2022-07-01 深圳市恒扬数据股份有限公司 Video content processing method, device, terminal and storage medium
CN114866794A (en) * 2022-04-28 2022-08-05 深圳市商汤科技有限公司 Task management method and device, electronic equipment and storage medium
CN115098181A (en) * 2022-05-26 2022-09-23 浪潮软件集团有限公司 Video stream assembling method and device for domestic CPU and OS

Similar Documents

Publication Publication Date Title
CN111327921A (en) Video data processing method and device
TWI674790B (en) Method of picture data encoding and decoding and apparatus
CN109309842B (en) Live broadcast data processing method and device, computer equipment and storage medium
KR101770070B1 (en) Method and system for providing video stream of video conference
CN111681291A (en) Image processing method, device, equipment and computer readable storage medium
CN107295352B (en) Video compression method, device, equipment and storage medium
CN109525802A (en) A kind of video stream transmission method and device
CN113965751B (en) Screen content coding method, device, equipment and storage medium
CN111093094A (en) Video transcoding method, device and system, electronic equipment and readable storage medium
CN112035081A (en) Screen projection method and device, computer equipment and storage medium
CN109040786A (en) Transmission method, device, system and the storage medium of camera data
CN104349177A (en) Method for turning to play multimedia file under desktop cloud, virtual machine and system
CN115396645B (en) Data processing method, device and equipment for immersion medium and storage medium
CN111385576B (en) Video coding method and device, mobile terminal and storage medium
US20170134454A1 (en) System for cloud streaming service, method for still image-based cloud streaming service and apparatus therefor
CN104572964A (en) Zip file unzipping method and device
CN106937127B (en) Display method and system for intelligent search preparation
WO2023226504A1 (en) Media data processing methods and apparatuses, device, and readable storage medium
KR102417055B1 (en) Method and device for post processing of a video stream
CN116248889A (en) Image encoding and decoding method and device and electronic equipment
CN107872683B (en) Video data processing method, device, equipment and storage medium
CN110662071A (en) Video decoding method and apparatus, storage medium, and electronic apparatus
CN113766266B (en) Audio and video processing method, device, equipment and storage medium
KR102516831B1 (en) Method, computer device, and computer program for providing high-definition image of region of interest using single stream
CN114205359A (en) Video rendering coordination method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200623