CN110149518B - Method, system, device, equipment and storage medium for processing media data - Google Patents

Method, system, device, equipment and storage medium for processing media data Download PDF

Info

Publication number
CN110149518B
CN110149518B CN201810145069.1A CN201810145069A
Authority
CN
China
Prior art keywords
processing
video
audio
task
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810145069.1A
Other languages
Chinese (zh)
Other versions
CN110149518A (en)
Inventor
秦智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810145069.1A priority Critical patent/CN110149518B/en
Publication of CN110149518A publication Critical patent/CN110149518A/en
Application granted granted Critical
Publication of CN110149518B publication Critical patent/CN110149518B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a media data processing method, which comprises the following steps: separating audio data and video data in a first video file to be processed to obtain a second video file; carrying out fragment processing on the second video file to obtain a plurality of video fragments, and respectively carrying out transcoding processing on the plurality of video fragments; generating an audio file according to the audio data; carrying out audio adjustment processing on the audio file; merging the transcoded plurality of video fragments to obtain a third video file; and combining the audio file subjected to the audio adjustment processing with the third video file to obtain a fourth video file.

Description

Method, system, device, equipment and storage medium for processing media data
Technical Field
The present application relates to the field of information technology, and in particular, to a method, a system, an apparatus, a computing device, and a storage medium for processing media data.
Background
With the development of information technology, online entertainment, and in particular watching online videos, has become increasingly popular with users. However, because online videos come from a wide range of sources, their playback volumes differ: some videos play louder and some play quieter. As a result, a user watching online videos has to frequently adjust the playback volume on their own when switching between videos from different sources, which leads to a poor viewing experience.
Disclosure of Invention
The application provides a method, a system, a device, a computing device and a storage medium for processing media data, which can automatically adjust the volume of various video files, thereby improving the playing effect of videos.
The application example provides a media data processing method, which comprises the following steps: separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data; carrying out fragmentation processing on the second video file to obtain a plurality of video fragments, and respectively carrying out transcoding processing on the plurality of video fragments; generating an audio file according to the audio data obtained by separation; carrying out audio adjustment processing on the audio file; merging the transcoded plurality of video fragments to obtain a third video file; and combining the audio file subjected to the audio adjustment processing with the third video file to obtain a fourth video file.
In some examples, the method further comprises: determining playing information of the transcoded plurality of video fragments according to the attribute information of the transcoded plurality of video fragments; and according to the playing information of the plurality of video fragments, executing the step of combining the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file.
In some examples, the playback information includes: playing time; the merging the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file includes: according to the playing time of the plurality of video fragments, a plurality of audio fragments are divided from the audio file subjected to the audio adjustment processing, and the plurality of audio fragments respectively correspond to the playing time of the plurality of video fragments; merging the plurality of audio fragments to generate a target audio file; and merging the target audio file and the third video file to obtain the fourth video file.
In some examples, the step of performing audio adjustment processing on the audio file and the step of transcoding the plurality of video slices respectively are performed in parallel.
In some examples, the processing method is applied to a processing system comprising a file processing module and a transcoding module; the file processing module executes the step of separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data, the step of performing fragment processing on the second video file to obtain the plurality of video fragments, and the step of generating the audio file according to the audio data obtained by separation; the processing method further comprises the following steps: the file processing module creates a video processing task for each video fragment, wherein the video processing task comprises storage information of the video fragment, and creates an audio processing task for the audio file, and the audio processing task comprises the storage information of the audio file; the file processing module sends each video processing task and the audio processing task to the transcoding module respectively; wherein, the step of performing audio adjustment processing on the audio file and the step of performing transcoding processing on the plurality of video slices respectively are executed in parallel, and the method comprises the following steps: when a video processing task is received, the transcoding module acquires the video fragments according to the storage information of the video fragments in the video processing task and transcodes the video fragments; when an audio processing task is received, the transcoding module acquires the audio file according to the storage information of the audio file in the audio processing task, and performs audio adjustment processing on the audio file.
In some examples, the processing system further comprises a scheduling module; wherein, the file processing module sends each video processing task and the audio processing task to the transcoding module respectively, and the method comprises the following steps: the file processing module sends each video processing task and the audio processing task to the scheduling module respectively; and the transcoding module pulls each video processing task and the audio processing task from the scheduling module.
In some examples, the processing system further comprises a merge processing module; the method further comprises the following steps: after transcoding the video fragments, the transcoding module creates a video processing completion task for the transcoded video fragments, and sends the video processing completion task to the merging processing module; the video processing completion task comprises storage information of the transcoded video fragments and identifications of the video fragments; after the audio file is subjected to audio adjustment processing, the transcoding module creates an audio processing completion task for the audio file subjected to the audio adjustment processing, and sends the audio processing completion task to the merging processing module; the audio processing completion task comprises the storage information of the audio file subjected to the audio adjustment processing and the identification of the audio file; after the merging processing module receives the video processing completion task and the audio processing completion task of each video fragment, acquiring the transcoded video fragments according to the storage information of the transcoded video fragments in the video processing completion task, and merging the transcoded video fragments according to the identification of the video fragments to obtain a third video file; and acquiring the audio file subjected to audio adjustment according to the storage information of the audio file subjected to audio adjustment in the audio processing completion task, and combining the audio file subjected to audio adjustment with the third video file to obtain a fourth video file.
In some examples, the processing system further comprises a scheduling module; wherein, the file processing module sends each video processing task and the audio processing task to the transcoding module respectively, and the method comprises the following steps: the file processing module sends each video processing task and the audio processing task to the scheduling module respectively; the transcoding module pulls each video processing task and the audio processing task from the scheduling module; the sending the video processing completion task to the merge processing module and the sending the audio processing completion task to the merge processing module includes: after the transcoding module creates a video processing completion task, sending the video processing completion task to the scheduling module; after the transcoding module creates an audio processing completion task, sending the audio processing completion task to the scheduling module; when the scheduling module receives the video processing completion task of each video fragment and the audio processing completion task of the audio file, the video processing completion task of each video fragment and the audio processing completion task of the audio file are subjected to task combination to obtain a media processing task; the merging processing module pulls the media processing task from the scheduling module and analyzes the media processing task to obtain a video processing completion task of each video fragment and an audio processing completion task of the audio file.
The present application also provides a system for processing media data, where the system includes: the system comprises a file processing module, a transcoding module and a merging processing module; the file processing module is used for separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data, generating an audio file according to the audio data obtained by separation, and carrying out fragment processing on the second video file to obtain a plurality of video fragments; the transcoding module is used for respectively transcoding the plurality of video fragments and carrying out audio adjustment processing on the audio file; the merging processing module merges the transcoded plurality of video fragments to obtain a third video file, and merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file.
In some examples, the file processing module creates a video processing task for each video slice, where the video processing task includes storage information of the video slice, and creates an audio processing task for the audio file, where the audio processing task includes storage information of the audio file; and the file processing module is used for respectively sending each video processing task and the audio processing task to the transcoding module.
In some examples, when a video processing task is received, the transcoding module acquires the video fragment according to storage information of the video fragment in the video processing task and transcodes the video fragment; and the transcoding module acquires the audio file according to the storage information of the audio file in the audio processing task and performs audio adjustment processing on the audio file when receiving an audio processing task.
In some examples, the processing system further comprises a scheduling module; the file processing module is used for respectively sending each video processing task and the audio processing task to the scheduling module; and the transcoding module pulls each video processing task and the audio processing task from the scheduling module.
In some examples, after transcoding the video fragments, the transcoding module creates a video processing completion task for the transcoded video fragments, and sends the video processing completion task to the merging processing module; the video processing completion task comprises storage information of the transcoded video fragments and identifications of the video fragments; the transcoding module creates an audio processing completion task for the audio file subjected to the audio adjustment processing after the audio adjustment processing is performed on the audio file, and sends the audio processing completion task to the merging processing module; the audio processing completion task comprises the storage information of the audio file subjected to the audio adjustment processing and the identification of the audio file.
In some examples, after receiving the video processing completion task and the audio processing completion task of each video fragment, the merge processing module obtains the transcoded video fragments according to the storage information of the transcoded video fragments in the video processing completion task, and merges the transcoded video fragments according to the identifier of the video fragments to obtain a third video file; and acquiring the audio file subjected to audio processing according to the storage information of the audio file subjected to audio adjustment in the audio processing completion task, and adding the audio file subjected to audio adjustment into the third video file to obtain a fourth video file.
In some examples, the processing system further comprises a scheduling module; the transcoding module is used for sending a video processing completion task to the scheduling module after the video processing completion task is created; the transcoding module is used for sending the audio processing completion task to the scheduling module after creating the audio processing completion task; the scheduling module is used for performing task combination on the video processing completion task of each video fragment and the audio processing completion task of the audio file to obtain a media processing task when receiving the video processing completion task of each video fragment and the audio processing completion task of the audio file; and the merging processing module pulls the media processing task from the scheduling module and analyzes the media processing task to obtain the video processing completion task of each video fragment and the audio processing completion task of the audio file.
In some examples, the merge processing module determines playing information of the transcoded plurality of video slices according to attribute information of the transcoded plurality of video slices; the playing information includes: playing time; the merging processing module is used for segmenting a plurality of audio fragments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video fragments, merging the plurality of audio fragments to generate a target audio file, and merging the target audio file and the third video file to obtain the fourth video file, wherein the plurality of audio fragments correspond to the playing time of the plurality of video fragments respectively.
In some examples, the transcoding module determines the playing information of the transcoded video fragments according to the attribute information of the transcoded video fragments, creates a video processing completion task for the transcoded video fragments after transcoding the video fragments, and sends the video processing completion task to the merging processing module; the video processing completion task comprises storage information of the transcoded video fragments, the playing information and identifications of the video fragments, and the playing information comprises: playing time; the merging processing module is used for segmenting a plurality of audio fragments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video fragments, merging the plurality of audio fragments to generate a target audio file, and merging the target audio file and the third video file to obtain the fourth video file, wherein the plurality of audio fragments correspond to the playing time of the plurality of video fragments respectively.
The example of the present application further provides an apparatus for processing media data, the apparatus including: the acquisition module is used for separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data; the fragmentation module is used for carrying out fragmentation processing on the second video file to obtain a plurality of video fragments and respectively carrying out transcoding processing on the plurality of video fragments; the generating module is used for generating an audio file according to the audio data obtained by separation; the adjusting module is used for carrying out audio adjusting processing on the audio file; the video merging module is used for merging the transcoded plurality of video fragments to obtain a third video file; and the processing module is used for combining the audio file subjected to the audio adjustment processing with the third video file to obtain a fourth video file.
In some examples, the processing device further comprises: the information determining module is used for determining the playing information of the transcoded plurality of video fragments according to the attribute information of the transcoded plurality of video fragments; the processing module combines the audio file subjected to audio adjustment processing and the third video file to obtain a fourth video file according to the playing information of the plurality of video fragments.
In some examples, the playback information includes: playing time; the processing module comprises: the dividing unit is used for dividing a plurality of audio fragments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video fragments, wherein the plurality of audio fragments correspond to the playing time of the plurality of video fragments respectively; the generating unit is used for merging the plurality of audio fragments to generate a target audio file; and the processing unit is used for combining the target audio file and the third video file to obtain the fourth video file.
In some examples, the audio adjustment processing performed by the adjusting module on the audio file and the transcoding processing performed by the slicing module on the plurality of video slices are executed in parallel.
In some examples, the processing device further comprises: the first video creating module creates a video processing task for each video fragment, wherein the video processing task comprises storage information of the video fragment, and the first audio creating module creates an audio processing task for the audio file, wherein the audio processing task comprises the storage information of the audio file; the sending module is used for sending each video processing task to the slicing module and sending the audio processing task to the adjusting module; when a video processing task is received, the fragment module acquires the video fragments according to the storage information of the video fragments in the video processing task and transcodes the video fragments; and the adjusting module acquires the audio file according to the storage information of the audio file in the audio processing task and performs audio adjusting processing on the audio file when receiving an audio processing task.
In some examples, the processing device further comprises a forwarding module; the sending module is used for sending each video processing task and the audio processing task to the forwarding module respectively; the fragment module pulls each video processing task from the forwarding module; the adjusting module pulls the audio processing task from the forwarding module.
In some examples, after transcoding the video fragments, the fragment module creates a video processing completion task for the transcoded video fragments, and sends the video processing completion task to the video merge module; the video processing completion task comprises storage information of the transcoded video fragments and identifications of the video fragments; after the audio file is subjected to audio adjustment processing, the adjustment module creates an audio processing completion task for the audio file subjected to audio adjustment processing, and sends the audio processing completion task to the processing module; the audio processing completion task comprises storage information of the audio file subjected to the audio adjustment processing and an identifier of the audio file; after the video merging module receives the video processing completion task of each video fragment, acquiring the transcoded video fragments according to the storage information of the transcoded video fragments in the video processing completion task, and merging the transcoded video fragments according to the identification of the video fragments to obtain a third video file; and the processing module is used for acquiring the audio file subjected to the audio adjustment according to the storage information of the audio file subjected to the audio adjustment in the audio processing completion task, and combining the audio file subjected to the audio adjustment with the third video file to obtain a fourth video file.
In some examples, after the fragmentation module creates a video processing completion task, it sends the video processing completion task to the forwarding module; after the adjusting module creates an audio processing completion task, it sends the audio processing completion task to the forwarding module; when the forwarding module receives the video processing completion task of each video fragment and the audio processing completion task of the audio file, the forwarding module performs task combination on the video processing completion task of each video fragment and the audio processing completion task of the audio file to obtain a media processing task; the video merging module pulls the media processing task from the forwarding module and parses it to obtain the video processing completion task of each video fragment, and the processing module pulls the media processing task from the forwarding module and parses it to obtain the audio processing completion task of the audio file.
The embodiment of the present application further provides a method for processing media data, where the method includes: acquiring audio data from a first video file to be processed; generating an audio file according to the audio data; carrying out fragment processing on the first video file to obtain a plurality of video fragments, and respectively carrying out transcoding processing on the plurality of video fragments; carrying out audio adjustment processing on the audio file; merging the transcoded plurality of video fragments to obtain a second video file; and combining the audio file subjected to the audio adjustment processing with the second video file to obtain a third video file.
In some examples, the processing method is applied to a processing system comprising a file processing module and a transcoding module; the file processing module executes the step of acquiring audio data from a first video file to be processed, the step of carrying out fragment processing on the first video file to obtain a plurality of video fragments and the step of generating an audio file according to the audio data; the processing method further comprises the following steps: the file processing module creates a video processing task for each video fragment, wherein the video processing task comprises storage information of the video fragment, and creates an audio processing task for the audio file, wherein the audio processing task comprises the storage information of the audio file; the file processing module sends each video processing task and the audio processing task to the transcoding module respectively; wherein, the step of performing audio adjustment processing on the audio file and the step of performing transcoding processing on the plurality of video slices respectively are executed in parallel, and the method comprises the following steps: when a video processing task is received, the transcoding module acquires the video fragments according to the storage information of the video fragments in the video processing task and transcodes the video fragments; when an audio processing task is received, the transcoding module acquires the audio file according to the storage information of the audio file in the audio processing task, and performs audio adjustment processing on the audio file.
The present application also provides a system for processing media data, where the system includes: the system comprises a file processing module, a transcoding module and a merging processing module; the file processing module acquires audio data from a first video file to be processed, generates an audio file according to the audio data, and performs fragmentation processing on the first video file to obtain a plurality of video fragments; the transcoding module is used for respectively transcoding the plurality of video fragments and carrying out audio adjustment processing on the audio file; the merging processing module merges the transcoded plurality of video fragments to obtain a second video file, and merges the audio file subjected to the audio adjustment processing and the second video file to obtain a third video file.
The present application further provides a device for processing media data, where the device includes: the acquisition module acquires audio data from a first video file to be processed; the generating module generates an audio file according to the audio data; the fragment module is used for carrying out fragment processing on the first video file to obtain a plurality of video fragments and respectively carrying out transcoding processing on the video fragments; the adjusting module is used for carrying out audio adjusting processing on the audio file; the video merging module is used for merging the transcoded plurality of video fragments to obtain a second video file; and the processing module is used for combining the audio file subjected to the audio adjustment processing with the second video file to obtain a third video file.
In some examples, the processing device further comprises: the video creating module is used for creating a video processing task for each video fragment, the video processing task comprises storage information of the video fragment, the audio creating module is used for creating an audio processing task for the audio file, and the audio processing task comprises the storage information of the audio file; the video creating module sends each video processing task to the slicing module, and the audio creating module sends the audio processing task to the adjusting module; the adjusting module and the fragment module execute in parallel, and when a video processing task is received, the fragment module acquires the video fragment according to the storage information of the video fragment in the video processing task and transcodes the video fragment; when an audio processing task is received, the adjusting module acquires the audio file according to the storage information of the audio file in the audio processing task and performs audio adjusting processing on the audio file.
The examples of this application also provide a computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the processing method described above when executing the computer program.
The present examples also provide a storage medium storing one or more programs, the one or more programs including instructions, which when executed by a computing device, cause the computing device to perform the processing method described above.
By applying the above technical solution, the audio data of a video file is adjusted automatically while the video file is being transcoded in slices, so that the volume of the video file can be adjusted efficiently and automatically. Furthermore, when the audio-adjusted video file is played, the user no longer needs to frequently adjust the volume because of inconsistent volumes across videos.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a system architecture for an exemplary processing method of the present application;
FIG. 2 is a flow diagram of a processing method according to an example of the present application;
FIG. 3 is a flow chart of obtaining a fourth video file according to an example of the present application;
FIG. 4 is an interactive flow diagram of a processing method according to an example of the present application;
FIG. 5 is a schematic diagram of a processing system according to an example of the present application;
FIG. 6 is a schematic diagram of a processing device according to an example of the present application;
FIG. 7 is a block diagram of an exemplary process module according to the present application;
FIG. 8 is a flow chart of a processing method of an example of the present application;
FIG. 9 is a flow diagram of a processing system according to an example of the present application;
FIG. 10 is a schematic diagram of a processing device according to an example of the present application;
FIG. 11 is a block diagram of hardware of a computing device according to an example of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
For simplicity and clarity of description, the invention is described below through several representative embodiments. The numerous details in the examples are provided merely to assist in understanding the solutions of the invention. It will be apparent, however, that the invention may be practiced without these specific details. Some embodiments are not described in detail; only a framework is presented, in order to avoid unnecessarily obscuring aspects of the invention. Hereinafter, "including" means "including but not limited to", and "according to ..." means "according to at least ..., but not only according to ...". When the number of a component is not specified hereinafter, the component may be one or more, i.e., at least one.
In order to realize the automatic adjustment of the volume of a video file, the application provides a media data processing method.
Fig. 1 shows a schematic structural diagram of a system 100 to which the media data processing method of the present example is applied. The system 100 comprises at least a file processing server 101, a transcoding server 103 and a merging processing server 104, and may further comprise a scheduling server 102; the servers 101 to 104 are connected through a network 105, enabling various forms of information interaction.
The file processing server 101 may run application server software having a function of preprocessing a media data file (e.g., generating an audio file and obtaining video clips); application server software having a function of allocating processing tasks (e.g., video processing tasks, audio processing tasks, and media processing tasks) and the like may be run in the scheduling server 102; application server software having functions of processing video processing tasks, audio processing tasks and the like (for example, video fragment transcoding processing and audio file audio adjustment processing) can be run in the transcoding server 103; application server software having a function of processing a media processing task (e.g., merging a transcoded video segment with an audio file processed by an audio adjustment process) may be run in the merge processing server 104. The servers 101 to 104 belong to functional partitions, and in practical applications, different servers may be located in the same server device as different functional modules, or different servers may be located in different server devices.
The network 105 may be a wired network or a wireless network.
The present application provides a method for processing media data, which can be applied to a device or system capable of processing video files, such as the system 100 described above. As shown in fig. 2, the method 200 comprises the following steps:
step 201: and acquiring audio data from the first video file to be processed to obtain a second video file.
In some examples, the processing system extracts audio data from the first video file to be processed (e.g., an un-transcoded video file) to obtain a second video file not containing the audio data (e.g., a video image data file obtained by removing the audio data from the first video file, i.e., a sound-free video file), or separates the audio data and the video data in the first video file to be processed to obtain a second video file containing the video data.
In other examples, the processing system may obtain (or read) audio data from a first video file to be processed and generate an audio file, so that a second video file and the audio file may be obtained, where the second video file is the same as the first video file and is the original video file.
It should be noted that when the second video file is the original video file (i.e., the video data and the audio data are not separated), the video slices of the original video file are transcoded directly and no processing is performed on the audio data within the video slices.
In step 201, audio data may be extracted from the first video file by audio acquisition software.
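The patent does not name a particular tool for this separation. Purely as an illustration, the sketch below assumes the ffmpeg command-line tool is installed and uses stream copy to split the first video file into a sound-free second video file and a separate audio file (which also covers generating the audio file in step 203); the file names are hypothetical.

```python
import subprocess

def separate_audio_and_video(first_video: str, second_video: str, audio_file: str) -> None:
    # Second video file: keep the video stream only (-an drops audio), without re-encoding.
    subprocess.run(["ffmpeg", "-y", "-i", first_video, "-an", "-c:v", "copy", second_video],
                   check=True)
    # Audio file: keep the audio stream only (-vn drops video), without re-encoding.
    subprocess.run(["ffmpeg", "-y", "-i", first_video, "-vn", "-c:a", "copy", audio_file],
                   check=True)

# Example (hypothetical file names):
# separate_audio_and_video("source.mp4", "video_only.mp4", "audio.m4a")
```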
Step 202: and carrying out fragment processing on the second video file to obtain a plurality of video fragments, and respectively carrying out transcoding processing on the plurality of video fragments.
In some examples, the processing system performs a slicing process on the second video file (e.g., a video image data file) to obtain a plurality of video slices, so as to perform a transcoding process on the plurality of video slices respectively.
The processing system transcodes each video fragment according to preset transcoding information; the transcoding information at least comprises: video format, video resolution and video code rate; therefore, the original video format, the original video resolution and the original video code rate of each video fragment are converted into the preset video format, the preset video resolution and the preset video code rate.
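A non-authoritative sketch of the slicing and per-slice transcoding described above, again assuming ffmpeg; the 10-minute slice length, H.264 codec, 1280x720 resolution and 1500 kbit/s bitrate are example presets, not values taken from the patent.

```python
import subprocess

def slice_video(second_video: str, pattern: str = "slice_%03d.mp4", seconds: int = 600) -> None:
    # Split the sound-free second video file into fixed-length slices without re-encoding.
    subprocess.run(["ffmpeg", "-y", "-i", second_video, "-c", "copy",
                    "-f", "segment", "-segment_time", str(seconds),
                    "-reset_timestamps", "1", pattern], check=True)

def transcode_slice(slice_path: str, out_path: str) -> None:
    # Convert one slice to the preset video format, resolution and code rate.
    subprocess.run(["ffmpeg", "-y", "-i", slice_path,
                    "-c:v", "libx264", "-s", "1280x720", "-b:v", "1500k", "-an", out_path],
                   check=True)
```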
Step 203: and generating an audio file according to the audio data.
In some examples, the processing system generates the audio file based on the separated audio data, as described above.
Step 204: and carrying out audio adjustment processing on the audio file.
In some examples, the processing system adjusts the volume of the audio file.
The volume adjustment mode may include: adjusting the volume of the audio file to a preset threshold volume; the volume of the audio file is adjusted to a preset volume range.
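The patent does not specify how the volume is measured or changed. The sketch below illustrates the two modes using ffmpeg's volumedetect and volume filters; the -16 dB threshold volume and the -18 dB to -14 dB range are assumed example values.

```python
import re
import subprocess

def measure_mean_volume(audio_file: str) -> float:
    # volumedetect reports "mean_volume: ... dB" on ffmpeg's stderr.
    out = subprocess.run(["ffmpeg", "-i", audio_file, "-af", "volumedetect", "-f", "null", "-"],
                         capture_output=True, text=True)
    match = re.search(r"mean_volume:\s*(-?\d+(\.\d+)?) dB", out.stderr)
    if match is None:
        raise RuntimeError("could not measure volume")
    return float(match.group(1))

def adjust_to_target(audio_file: str, out_file: str, target_db: float = -16.0) -> None:
    # Mode 1: adjust the volume to a preset threshold volume.
    gain = target_db - measure_mean_volume(audio_file)
    subprocess.run(["ffmpeg", "-y", "-i", audio_file, "-af", f"volume={gain}dB", out_file],
                   check=True)

def adjust_to_range(audio_file: str, out_file: str,
                    low_db: float = -18.0, high_db: float = -14.0) -> None:
    # Mode 2: adjust the volume into a preset volume range; leave it alone if already inside.
    mean = measure_mean_volume(audio_file)
    if low_db <= mean <= high_db:
        subprocess.run(["ffmpeg", "-y", "-i", audio_file, "-c", "copy", out_file], check=True)
    else:
        adjust_to_target(audio_file, out_file, (low_db + high_db) / 2)
```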
It should be noted that step 204 (performing audio adjustment processing on the audio file) and the transcoding of the plurality of video slices in step 202 are performed in parallel. Executing these two steps in parallel both adjusts the volume of the original video file and compresses its code rate while preserving its definition. Compared with a scheme in which volume adjustment is performed only after the original video file has been transcoded, this improves processing efficiency, reduces processing delay, lessens the extra time that volume adjustment adds before the video file can be output, improves the video playing effect, and improves the user's viewing experience.
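A minimal sketch of this parallel execution using a thread pool; transcode_one() and adjust_audio() are placeholders standing in for wrappers such as those sketched above.

```python
from concurrent.futures import ThreadPoolExecutor

def transcode_one(slice_path: str) -> str:
    return slice_path.replace(".mp4", ".transcoded.mp4")   # placeholder for the real transcoding call

def adjust_audio(audio_path: str) -> str:
    return audio_path.replace(".m4a", ".adjusted.m4a")     # placeholder for the real volume adjustment

def process_media(slice_paths, audio_path):
    with ThreadPoolExecutor() as pool:
        audio_future = pool.submit(adjust_audio, audio_path)                 # step 204 starts immediately...
        video_futures = [pool.submit(transcode_one, p) for p in slice_paths]  # ...while step 202 transcodes the slices
        return [f.result() for f in video_futures], audio_future.result()

transcoded, adjusted = process_media(["slice_001.mp4", "slice_002.mp4"], "audio.m4a")
```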
Step 205: and merging the transcoded plurality of video fragments to obtain a third video file.
In some examples, the processing system performs merging processing according to a sequence of the transcoded plurality of video fragment identifiers (e.g., video fragment IDs) to obtain a third video file (e.g., a video image data file obtained through merging processing).
It should be noted that the video segment identifiers are assigned according to the playing time order of the video segments; for example, the playing time of video segment ID "001" is 00:00-00:10, the playing time of video segment ID "002" is 00:10-00:20, and so on.
Step 206: and combining the audio file subjected to the audio adjustment processing with the third video file to obtain a fourth video file.
In some examples, the processing system determines the playing information of the transcoded plurality of video segments according to the attribute information of the transcoded plurality of video segments, and combines the audio file subjected to the audio adjustment processing with the third video file according to the playing information of the plurality of video segments to obtain a fourth video file.
The attribute information may include a video code rate of the transcoded video slice and a data size (i.e., a size of an occupied storage space) of the video slice. The playing information includes: the playing time.
The above-mentioned manner of determining the play information of the transcoded video clips may include: determining the time length of the transcoded video fragment according to the video code rate of the transcoded video fragment and the data volume of the video fragment, and determining the playing time of the transcoded video fragment according to the starting time of the video fragment.
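As a worked example of this determination (illustrative numbers only): the duration of a transcoded slice is its data size in bits divided by its video code rate, and its playing time is the interval beginning at the slice's start time.

```python
def slice_duration_seconds(data_size_bytes: int, bitrate_bps: int) -> float:
    # Duration = data size in bits / video code rate in bits per second.
    return data_size_bytes * 8 / bitrate_bps

def playing_time(start_seconds: float, data_size_bytes: int, bitrate_bps: int) -> tuple:
    end = start_seconds + slice_duration_seconds(data_size_bytes, bitrate_bps)
    return (start_seconds, end)

# A 112,500,000-byte slice at 1,500,000 bit/s lasts 600 s; if it starts at 0 s it plays 0-600 s.
print(playing_time(0, 112_500_000, 1_500_000))   # (0, 600.0)
```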
Fig. 3 shows a specific processing manner 300 for combining the audio file subjected to the audio adjustment processing with the third video file to obtain a fourth video file, where the specific processing manner 300 includes the following steps:
step 301: and according to the playing time of the plurality of video fragments, segmenting a plurality of audio fragments from the audio file subjected to the audio adjustment processing, wherein the plurality of audio fragments respectively correspond to the playing time of the plurality of video fragments.
In some examples, the processing system segments a plurality of audio segments from the audio file subjected to the audio adjustment processing according to the playing times of the transcoded plurality of video segments. For example, if the playing time of video segment ID "001" is 00:00-00:09.99 and the playing time of video segment ID "002" is 00:10-00:19.99, then an audio segment ID "001" with playing time 00:00-00:09.99 corresponding to video segment ID "001" and an audio segment ID "002" with playing time 00:10-00:19.99 corresponding to video segment ID "002" are divided from the audio file subjected to the audio adjustment processing; and so on.
Because transcoding may change the duration of a video fragment, the duration of each transcoded video fragment must be determined anew; otherwise, in the fourth video file obtained by merging the transcoded video fragments with the audio-adjusted audio file, the video picture content could become inconsistent with the accompanying sound.
Step 302: and merging the plurality of audio fragments to generate a target audio file.
In some examples, the processing system merges the plurality of audio fragments according to the order of the plurality of audio fragment identifiers (e.g., audio IDs) to generate the target audio file.
It should be noted that the audio segment identifier is set according to the playing time of the audio segment, and the setting mode is the same as the setting mode of the identifier of the video segment, and is not described here again.
Step 303: and merging the target audio file and the third video file to obtain the fourth video file.
In some examples, the processing system merges the target audio file and the third video file to obtain the fourth video file, and the processing system sends the fourth video file to the target video server, so that the target server provides video playing resources to the video client.
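The following sketch ties steps 301-303 together, again assuming ffmpeg; the file names audio_list.txt and target_audio.m4a are placeholders.

```python
import subprocess

def build_fourth_video(adjusted_audio: str, playing_times: list,
                       third_video: str, fourth_video: str) -> None:
    segment_files = []
    # Step 301: cut one audio segment per video slice, matching its playing time.
    for i, (start, end) in enumerate(playing_times):
        seg = f"audio_{i:03d}.m4a"
        subprocess.run(["ffmpeg", "-y", "-i", adjusted_audio, "-ss", str(start), "-to", str(end),
                        "-c", "copy", seg], check=True)
        segment_files.append(seg)
    # Step 302: merge the audio segments into the target audio file.
    with open("audio_list.txt", "w") as f:
        f.writelines(f"file '{s}'\n" for s in segment_files)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "audio_list.txt",
                    "-c", "copy", "target_audio.m4a"], check=True)
    # Step 303: mux the target audio file with the third video file into the fourth video file.
    subprocess.run(["ffmpeg", "-y", "-i", third_video, "-i", "target_audio.m4a",
                    "-c", "copy", "-map", "0:v:0", "-map", "1:a:0", fourth_video], check=True)
```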
The processing system includes at least a file processing module 101 and a transcoding module 103. The information interaction between the modules of the processing system is described below with reference to fig. 4:
in some examples, the file processing module 101 in the processing system extracts audio data from the first video file (e.g., an un-transcoded video file) to be processed resulting in a second video file (e.g., a video image data file) that does not contain the audio data.
In some examples, the file processing module 101 in the processing system executes step S401: generating the audio file according to the audio data, and stores the audio file in a downloadable path configured by the file processing module 101.
After the file processing module 101 executes the above step 203, the file processing module 101 executes step S402 for the audio file: creating an audio processing task, which at least contains the storage information of the audio file (i.e., the downloadable path), the playing time of the audio file, and the identifier of the audio file (e.g., an audio ID, which is associated with the first video file to be processed).
In some examples, the file processing module 101 performs a slicing process on the second video file, and performs step S403: obtaining a plurality of video fragments; the plurality of video clips are stored in the downloadable path configured by the file processing module 101.
The file processing module 101 executes step S404: creating a video processing task for each video fragment, wherein each video processing task at least comprises the storage information of the video fragment (namely the downloadable path), the playing time of the video fragment and the identifier of the video fragment (for example, a video fragment ID, which is set according to the playing time and is also associated with the first video file).
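A minimal sketch of the task objects created in steps S402 and S404; the field names and example paths are illustrative, not prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class VideoProcessingTask:
    storage_info: str        # downloadable path of the video fragment
    playing_time: tuple      # (start, end) of the fragment, in seconds
    slice_id: str            # e.g. "001", assigned in playing-time order, associated with the first video file

@dataclass
class AudioProcessingTask:
    storage_info: str        # downloadable path of the audio file
    playing_time: tuple
    audio_id: str            # associated with the first video file

video_tasks = [
    VideoProcessingTask("http://files.example/slice_001.mp4", (0, 600), "001"),
    VideoProcessingTask("http://files.example/slice_002.mp4", (600, 1200), "002"),
]
audio_task = AudioProcessingTask("http://files.example/audio.m4a", (0, 1200), "A-001")
```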
After the file processing module 101 performs the above steps 202 and 203, the file processing module 101 sends each video processing task and the audio processing task to the transcoding module 103, respectively.
Wherein, the step of performing audio adjustment processing on the audio file and the step of performing transcoding processing on the plurality of video slices respectively are executed in parallel, and the method comprises the following steps: when a video processing task is received, the transcoding module 103 obtains the video fragment according to the storage information of the video fragment in the video processing task (e.g., downloads the video fragment through a downloadable address), and executes step S407: transcoding the video fragments; when an audio processing task is received, the transcoding module 103 obtains the audio file according to the storage information of the audio file in the audio processing task (e.g., downloads the audio file through a downloadable address), and performs step S408: and carrying out audio adjustment processing on the audio file.
It should be noted that, since the specific manner of the transcoding process of the video clips and the specific manner of the audio adjustment process of the audio files are both described above, no further description is given here.
In some examples, the processing system further includes a scheduling module 102.
Wherein the file processing module 101 sending each video processing task and the audio processing task to the transcoding module 103 respectively comprises: the file processing module 101 executes step S405: sending each video processing task and the audio processing task to the scheduling module 102 respectively; the transcoding module 103 pulls each video processing task and the audio processing task from the scheduling module 102 at regular intervals (e.g., every 10 minutes).
It should be noted that, in a practical deployment, a plurality of transcoding modules 103 may correspond to one scheduling module 102, and each transcoding module 103 processes the video processing tasks and the audio processing task according to the task type (for example, video processing task or audio processing task) allocated by the scheduling module 102. When a transcoding module 103 currently has no other processing task, it may execute step S406: pulling a video processing task or an audio processing task from the scheduling module 102; alternatively, the transcoding module 103 executes step S406 at regular intervals.
Wherein, the pulling mode may include: the transcoding module 103 sends a processing task pull request to the scheduling module 102, and when the scheduling module 102 receives the pull request, any video fragment processing task or audio processing task is sent to the transcoding module 103 in response to the pull request.
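A simplified in-process sketch of this pull interaction; a real deployment would use some network transport between the modules, which the patent does not specify.

```python
import queue

class SchedulingModule:
    def __init__(self):
        self._pending = queue.Queue()

    def submit(self, task):
        # The file processing module pushes tasks here (step S405).
        self._pending.put(task)

    def handle_pull_request(self):
        # Respond to a transcoding module's pull request (step S406) with any pending task.
        try:
            return self._pending.get_nowait()
        except queue.Empty:
            return None

scheduler = SchedulingModule()
scheduler.submit({"type": "video", "slice_id": "001"})
task = scheduler.handle_pull_request()   # the transcoding module would now download and process it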
In some examples, the processing system further includes a merge processing module 104.
In some examples, after transcoding the video segment, the transcoding-processed video segment is stored into the configured downloadable path, and the transcoding module 103 performs, on the transcoding-processed video segment, step S409: creating a video processing completion task, and sending the video processing completion task to the merging processing module 104; the video processing completion task at least comprises the storage information (namely a downloadable path) of the transcoded video fragment, the playing time of the transcoded video fragment and the identifier of the video fragment.
After performing audio adjustment processing on the audio file, storing the audio file subjected to the audio adjustment processing into a configured downloadable path, and the transcoding module 103 performs step S410 on the audio file subjected to the audio adjustment processing: creating an audio processing completion task, and sending the audio processing completion task to the merging processing module 104; the audio processing completion task at least comprises the storage information (namely a downloadable path) of the audio file subjected to the audio adjustment processing, the playing time of the audio file subjected to the audio adjustment processing and the identifier of the audio file.
After the merge processing module 104 receives the video processing completion task and the audio processing completion task of each video fragment, acquiring the transcoded video fragment according to the storage information (i.e., the downloadable path) of the transcoded video fragment in the video processing completion task, and merging the transcoded video fragment according to the identifier of the video fragment to obtain a third video file; the merging processing module 104 obtains the audio file subjected to the audio adjustment processing according to the storage information (i.e., the downloadable path) of the audio file subjected to the audio adjustment processing in the audio processing completion task, and merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file.
In some examples, the merge processing module 104 merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file according to the playing information of the plurality of video slices.
In some examples, the merge processing module 104 divides a plurality of audio segments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video segments, where the plurality of audio segments respectively correspond to the playing time of the plurality of video segments; merging the plurality of audio fragments to generate a target audio file; and merging the target audio file and the third video file to obtain the fourth video file.
The playing information may be determined by transcoding module 103, or may be determined by merge processing module 104.
In some examples, when the playing information is determined by the merge processing module 104, the merge processing module 104 may determine the playing information of the transcoded plurality of video slices according to the attribute information of the transcoded plurality of video slices.
The attribute information may include a video bitrate of the transcoded video slice and a data amount (i.e., an occupied storage space size) of the video slice. The playing information includes: the playing time.
The above-mentioned manner of determining the play information of the transcoded video clips may include: determining the time length of the transcoded video fragment according to the video code rate of the transcoded video fragment and the data volume of the video fragment, and determining the playing time of the transcoded video fragment according to the starting time of the video fragment.
In some examples, when the playing information is determined by the transcoding module 103, after performing transcoding processing on each video segment, the transcoding module 103 determines the playing information of the transcoded plurality of video segments according to the attribute information of the transcoded plurality of video segments.
The determination manner of the transcoding module 103 for the playing information is the same as the determination manner of the merging processing module 104 for the playing information, and is not described herein again.
After the transcoding module 103 determines the playing information (e.g., the playing time), the video processing completion task at least further includes the playing information, so that the merging processing module can obtain the playing information of the transcoded video fragment when receiving the video processing completion task.
In some examples, sending the video processing completion task to the merge processing module 104 includes: after the transcoding module 103 creates a video processing completion task, step S411 is executed: sending the video processing completion task to the scheduling module 102; when the scheduling module 102 receives the video processing completion task of each video clip and the audio processing completion task of the audio file, the scheduling module performs task merging on the video processing completion task of each video clip and the audio processing completion task of the audio file, and executes step S413: obtaining a media processing task; the merge processing module 104 periodically executes step S414: pulling the media processing task from the scheduling module 102.
The sending the audio processing completion task to the merge processing module 104 includes: after the transcoding module 103 creates an audio processing completion task, step S412 is executed: sending the audio processing completion task to the scheduling module 102; when the scheduling module 102 receives the video processing completion task of each video clip and the audio processing completion task of the audio file, the scheduling module performs task merging on the video processing completion task of each video clip and the audio processing completion task of the audio file, and executes step S413: obtaining a media processing task; the merge processing module 104 executes step S414 periodically: pulling the media processing task from the scheduling module.
It should be noted that the merge processing module 104 may also execute step S414 when there is no media processing task currently: pulling the media processing task from the scheduling module. The pulling manner of the merging processing module 104 is the same as that of the transcoding module 103, and is not described herein again.
When the scheduling module 102 detects a plurality of transcoded video fragment identifiers in a plurality of received video processing completion tasks and an audio file identifier subjected to audio adjustment processing in a plurality of received audio processing completion tasks, the scheduling module 102 performs task merging on the plurality of video processing completion tasks and the audio processing tasks. The video fragment identifiers and the audio file identifiers are set in association, and when the file processing module 101 sends a plurality of video processing tasks and audio processing tasks, the scheduling module 102 records the video fragment identifiers and the audio file identifiers belonging to the first video file, so as to facilitate the detection processing.
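A sketch of the detection and task-merging logic described above, using plain dictionaries as assumed task records: the scheduler records which slice identifiers and which audio identifier belong to the first video file, and emits a single media processing task only once a completion task has arrived for every one of them.

```python
class CompletionCollector:
    def __init__(self, expected_slice_ids, expected_audio_id):
        self.expected_slice_ids = set(expected_slice_ids)
        self.expected_audio_id = expected_audio_id
        self.video_done = {}     # slice_id -> video processing completion task
        self.audio_done = None   # audio processing completion task

    def add(self, completion_task):
        # Record one completion task; return a media processing task once everything has arrived.
        if completion_task["kind"] == "video":
            self.video_done[completion_task["slice_id"]] = completion_task
        else:
            self.audio_done = completion_task
        return self._maybe_merge()

    def _maybe_merge(self):
        if set(self.video_done) == self.expected_slice_ids and self.audio_done is not None:
            # Step S413: merge all completion tasks into one media processing task.
            return {"video_tasks": [self.video_done[i] for i in sorted(self.video_done)],
                    "audio_task": self.audio_done}
        return None
```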
After the media processing task is pulled, the merge processing module 104 parses the media processing task to obtain a video processing completion task of each video fragment and an audio processing completion task of the audio file, obtains a transcoded video fragment according to storage information (i.e., a downloadable path) of the transcoded video fragment in the video processing completion task, obtains an audio file subjected to audio adjustment processing according to storage information (i.e., a downloadable path) of the audio file subjected to audio adjustment processing in the audio processing completion task, and then performs step 205 and step 206. Since the above steps have been described in detail in the foregoing, they will not be described in detail here.
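The pull-based hand-off described above can be summarized by the following sketch. It is not part of the patent disclosure: the class and field names are assumptions, and the Python code only illustrates how a scheduling module might merge the per-fragment completion tasks with the audio completion task into one media processing task that an idle merge processing module then pulls.

```python
import queue
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MediaProcessingTask:
    video_completion_tasks: list = field(default_factory=list)  # one entry per transcoded fragment
    audio_completion_task: Optional[dict] = None                 # the volume-adjusted audio file

class SchedulingModule:
    """Collects completion tasks and merges them once every fragment and the audio file are done."""

    def __init__(self, expected_fragments: int):
        self.expected_fragments = expected_fragments
        self.pending = MediaProcessingTask()
        self.ready: "queue.Queue[MediaProcessingTask]" = queue.Queue()

    def submit_video_completion(self, task: dict) -> None:
        self.pending.video_completion_tasks.append(task)
        self._try_merge()

    def submit_audio_completion(self, task: dict) -> None:
        self.pending.audio_completion_task = task
        self._try_merge()

    def _try_merge(self) -> None:
        # Task merging (step S413): requires all fragment completions plus the audio completion.
        if (len(self.pending.video_completion_tasks) == self.expected_fragments
                and self.pending.audio_completion_task is not None):
            self.ready.put(self.pending)
            self.pending = MediaProcessingTask()

    def pull_media_processing_task(self, timeout: float = 1.0) -> Optional[MediaProcessingTask]:
        # Called periodically (or when idle) by the merge processing module (step S414).
        try:
            return self.ready.get(timeout=timeout)
        except queue.Empty:
            return None
```

A production scheduler would persist tasks and track several first video files at once; the single in-memory queue above is only meant to make the hand-off legible.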
Corresponding to the method 200 for processing media data, the present application also provides a system for processing media data. As shown in fig. 5, the system 500 includes a file processing module 501, a transcoding module 503 and a merging processing module 504; the processing system further includes a scheduling module 502. The functions of the modules are as follows:
the file processing module 501 extracts audio data from a first video file to be processed to obtain a second video file that does not include the audio data (i.e., separates the audio data from the video data in the first video file to be processed to obtain a second video file that includes the video data), generates an audio file according to the extracted audio data (i.e., generates an audio file according to the audio data obtained by separation), and performs fragment processing on the second video file to obtain a plurality of video fragments.
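As an illustration only (the patent does not prescribe any particular tool), the separation and fragmentation performed by the file processing module could be realized with ffmpeg roughly as follows; the file names, the segment length and the assumption that the source audio is already in an MP4-compatible codec are all illustrative.

```python
import subprocess

def split_source(first_video: str, segment_seconds: int = 60) -> None:
    # Separate the audio data: keep only the audio stream to generate the audio file.
    subprocess.run(
        ["ffmpeg", "-y", "-i", first_video, "-vn", "-c:a", "copy", "audio_file.m4a"],
        check=True,
    )
    # Separate the video data: drop the audio stream to obtain the second video file.
    subprocess.run(
        ["ffmpeg", "-y", "-i", first_video, "-an", "-c:v", "copy", "second_video.mp4"],
        check=True,
    )
    # Fragment the second video file into fixed-length video fragments.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "second_video.mp4", "-c", "copy", "-f", "segment",
         "-segment_time", str(segment_seconds), "-reset_timestamps", "1",
         "fragment_%03d.mp4"],
        check=True,
    )
```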
The transcoding module 503 transcodes the plurality of video segments respectively, and adjusts the audio of the audio file.
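A minimal sketch of what the transcoding module 503 might do for a single fragment and for the audio file, again assuming ffmpeg; the target codec, bitrate and gain value are illustrative and not specified by the patent.

```python
import subprocess

def transcode_fragment(src: str, dst: str) -> None:
    # Re-encode the image data of one video fragment (codec and bitrate are illustrative).
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-an", "-c:v", "libx264", "-b:v", "1500k", dst],
        check=True,
    )

def adjust_audio(src: str, dst: str, gain_db: float = 3.0) -> None:
    # Audio adjustment processing, shown here as a simple volume change toward a preset range.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter:a", f"volume={gain_db}dB", dst],
        check=True,
    )
```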
The merge processing module 504 merges the transcoded video slices to obtain a third video file, and merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file.
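The merging performed by the merge processing module 504 could then look like the following sketch (illustrative only, not the patented implementation): the transcoded fragments are concatenated into the third video file, and the adjusted audio file is muxed in to produce the fourth video file.

```python
import subprocess

def merge_outputs(fragments: list, adjusted_audio: str, fourth_video: str) -> None:
    # Concatenate the transcoded video fragments, in order, into the third video file.
    with open("fragments.txt", "w") as fp:
        for name in fragments:
            fp.write(f"file '{name}'\n")
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "fragments.txt",
         "-c", "copy", "third_video.mp4"],
        check=True,
    )
    # Merge the audio file subjected to audio adjustment with the third video file.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "third_video.mp4", "-i", adjusted_audio,
         "-map", "0:v:0", "-map", "1:a:0", "-c", "copy", fourth_video],
        check=True,
    )
```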
In some examples, the file processing module 501 creates a video processing task for each video slice, where the video processing task includes storage information of the video slice, and creates an audio processing task for the audio file, where the audio processing task includes storage information of the audio file; the file processing module 501 sends each video processing task and the audio processing task to the transcoding module, respectively.
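The video processing task and the audio processing task can be thought of as small records carrying at least the storage information; the field names below are assumptions introduced purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class VideoProcessingTask:
    fragment_id: str    # identifier of the video fragment
    storage_path: str   # downloadable path where the fragment is stored

@dataclass
class AudioProcessingTask:
    audio_id: str       # identifier of the audio file
    storage_path: str   # downloadable path where the audio file is stored

def create_tasks(fragment_paths, audio_path):
    video_tasks = [VideoProcessingTask(f"frag-{i:03d}", path)
                   for i, path in enumerate(fragment_paths)]
    audio_task = AudioProcessingTask("audio-000", audio_path)
    return video_tasks, audio_task
```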
In some examples, when a video processing task is received, the transcoding module 503 obtains the video fragment according to the storage information of the video fragment in the video processing task, and transcodes the video fragment; when an audio processing task is received, the transcoding module 503 acquires the audio file according to the storage information of the audio file in the audio processing task, and performs audio adjustment processing on the audio file.
In some examples, the processing system further includes a scheduling module 502; the file processing module 501 sends each video processing task and the audio processing task to the scheduling module 502 respectively; the transcoding module 503 pulls each video processing task and the audio processing task from the scheduling module 502.
In some examples, after transcoding the video fragments, the transcoding module 503 creates a video processing completion task for the transcoded video fragments, and sends the video processing completion task to the merge processing module 504; the video processing completion task comprises storage information of the transcoded video fragments and identifications of the video fragments; the transcoding module 503, after performing audio adjustment processing on the audio file, creates an audio processing completion task for the audio file subjected to the audio adjustment processing, and sends the audio processing completion task to the merging processing module 504; the audio processing completion task comprises the storage information of the audio file subjected to the audio adjustment processing and the identification of the audio file.
In some examples, after receiving the video processing completion task and the audio processing completion task of each video fragment, the merge processing module 504 obtains the transcoded video fragments according to the storage information of the transcoded video fragments in the video processing completion task, and merges the transcoded video fragments according to the identifier of the video fragments to obtain a third video file; and acquiring the audio file subjected to audio processing according to the storage information of the audio file subjected to audio adjustment in the audio processing completion task, and adding the audio file subjected to audio adjustment into the third video file to obtain a fourth video file.
In some examples, the transcoding module 503, after creating a video processing completion task, sends the video processing completion task to the scheduling module 502; the transcoding module 503, after creating an audio processing completion task, sends the audio processing completion task to the scheduling module 502; the scheduling module 502, when receiving the video processing completion task of each video fragment and the audio processing completion task of the audio file, performs task combination on the video processing completion task of each video fragment and the audio processing completion task of the audio file to obtain a media processing task; the merge processing module 504 pulls the media processing task from the scheduling module and parses the media processing task to obtain a video processing completion task for each video fragment and an audio processing completion task for the audio file. In some examples, the merge processing module 504 merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file according to the playing information of the plurality of video slices.
In some examples, the merge processing module 504 divides a plurality of audio segments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video segments, where the plurality of audio segments respectively correspond to the playing time of the plurality of video segments; merges the plurality of audio segments to generate a target audio file; and merges the target audio file and the third video file to obtain the fourth video file.
The playing information may be determined by the transcoding module 503, or may be determined by the merge processing module 504.
In some examples, when the playing information is determined by the merge processing module 504, the merge processing module 504 may determine the playing information of the transcoded plurality of video slices according to the attribute information of the transcoded plurality of video slices.
The attribute information may include a video bitrate of the transcoded video slice and a data amount (i.e., an occupied storage space size) of the video slice. The playing information includes: the playing time.
The above-mentioned manner of determining the play information of the transcoded video clips may include: determining the time length of the transcoded video fragment according to the video code rate of the transcoded video fragment and the data volume of the video fragment, and determining the playing time of the transcoded video fragment according to the starting time of the video fragment.
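For illustration, under the assumption that the data amount is measured in bytes and the bitrate in bits per second (the patent does not fix the units), the time length and playing time of each transcoded fragment could be derived as follows; the function names are illustrative.

```python
def fragment_duration_seconds(data_amount_bytes: int, video_bitrate_bps: int) -> float:
    # Time length ≈ data amount (converted to bits) divided by the video bitrate.
    return data_amount_bytes * 8 / video_bitrate_bps

def playing_times(fragments):
    """fragments: (data_amount_bytes, video_bitrate_bps) pairs in playback order.
    Returns (start_time, duration) pairs; each start time is the end time of the previous fragment."""
    start = 0.0
    result = []
    for data_amount, bitrate in fragments:
        duration = fragment_duration_seconds(data_amount, bitrate)
        result.append((start, duration))
        start += duration
    return result
```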
In some examples, when the playing information is determined by the transcoding module 503, after performing transcoding processing on each video fragment, the transcoding module 503 determines the playing information of the transcoded plurality of video fragments according to the attribute information of the transcoded plurality of video fragments.
The determination manner of the transcoding module 503 for the playing information is the same as the determination manner of the merge processing module 504 for the playing information, and is not described herein again.
After the transcoding module 503 determines the playing information (e.g., the playing time), the video processing completion task further includes at least the playing information, so that when the merge processing module 504 receives the video processing completion task, it can acquire the playing information of the transcoded video fragments.
In some examples, the merge processing module 504 determines the playing time of the transcoded multiple video clips, divides multiple audio clips from the audio file that has been subjected to the audio adjustment processing, where the multiple audio clips respectively correspond to the playing time of the multiple video clips, merges the multiple audio clips to generate a target audio file, and merges the target audio file and the third video file to obtain the fourth video file.
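A sketch of this division step, assuming ffmpeg once more; the file names are illustrative, and stream-copy cutting is only approximately accurate at packet boundaries, so a real implementation might re-encode the audio segments instead.

```python
import subprocess

def cut_audio_by_playing_times(adjusted_audio: str, times: list) -> list:
    """times: (start_seconds, duration_seconds) pairs, one per transcoded video fragment."""
    pieces = []
    for i, (start, duration) in enumerate(times):
        out = f"audio_piece_{i:03d}.m4a"
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-i", adjusted_audio,
             "-t", str(duration), "-c", "copy", out],
            check=True,
        )
        pieces.append(out)
    return pieces

def build_target_audio(pieces: list, target: str = "target_audio.m4a") -> str:
    # Merge the audio segments, in order, into the target audio file.
    with open("audio_pieces.txt", "w") as fp:
        for name in pieces:
            fp.write(f"file '{name}'\n")
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "audio_pieces.txt",
         "-c", "copy", target],
        check=True,
    )
    return target
```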
In some examples, the merge processing module 504 sends the fourth video file to a target video module.
It should be noted that the file processing module, the transcoding module, the merging processing module, and the scheduling module described above may be disposed in the same module device (e.g., in the processing server), may also be disposed in several different module devices (e.g., the file processing module, the transcoding module, and the merging processing module are disposed in the processing server, and the scheduling module is disposed in the scheduling server), and may also be disposed in different module devices respectively (e.g., the file processing module is disposed in the file processing server, the transcoding module is disposed in the transcoding server, the merging processing module is disposed in the merging processing server, and the scheduling module is disposed in the scheduling server). These modules may also be referred to as servers, such as file processing servers, transcoding servers, merge processing servers, and dispatch servers.
Corresponding to the method 200 for processing media data, the present application also proposes a device for processing media data, which is applied to the processing system 500. As shown in fig. 6, the processing device 600 includes an acquisition module 601, a fragmentation module 602, a generation module 603, an adjustment module 604, a video merging module 605 and a processing module 606; the functions of the modules are as follows:
the obtaining module 601 extracts audio data from the first video file to be processed to obtain a second video file that does not include the audio data (i.e., separates the audio data from the video data in the first video file to be processed to obtain the second video file that includes the video data).
The fragmentation module 602 performs fragmentation processing on the second video file to obtain a plurality of video fragments, and performs transcoding processing on the plurality of video fragments respectively.
The generating module 603 generates an audio file according to the extracted audio data (i.e., generates an audio file according to the audio data obtained by separation).
The adjusting module 604 performs audio adjustment processing on the audio file.
And the video merging module 605 merges the transcoded video slices to obtain a third video file.
The processing module 606 merges the audio file subjected to the audio adjustment processing and the third video file to obtain a fourth video file.
In some examples, the processing device 600 further comprises: the information determining module is used for determining the playing information of the transcoded plurality of video fragments according to the attribute information of the transcoded plurality of video fragments; the processing module 606, according to the playing information of the multiple video fragments, merges the audio file that has undergone audio adjustment processing with the third video file to obtain a fourth video file, where the playing information includes: the playing time.
In some examples, fig. 7 illustrates a structure 700 of the processing module 606, and as shown in fig. 7, the processing module 606 includes: a dividing unit 701, configured to divide a plurality of audio segments from the audio file subjected to the audio adjustment processing according to the playing time of the plurality of video segments, where the plurality of audio segments correspond to the playing time of the plurality of video segments respectively; a generating unit 702, which merges the plurality of audio segments to generate a target audio file; the processing unit 703 is configured to merge the target audio file and the third video file to obtain the fourth video file.
In some examples, the processing device 600 further comprises a sending module that sends the fourth video file to a target video server.
In some examples, the adjusting module 604 performs audio adjustment processing on the audio file and the fragmentation module 602 performs transcoding processing on the plurality of video fragments in parallel.
In some examples, the processing device 600 further comprises: the first video creating module creates a video processing task for each video fragment, wherein the video processing task comprises storage information of the video fragment, and the first audio creating module creates an audio processing task for the audio file, wherein the audio processing task comprises the storage information of the audio file; a sending module, configured to send each video processing task to the slicing module 602 and send the audio processing task to the adjusting module 604; when a video processing task is received, the fragment module 602 acquires the video fragment according to the storage information of the video fragment in the video processing task, and transcodes the video fragment; the adjusting module 604 obtains the audio file according to the storage information of the audio file in the audio processing task when receiving an audio processing task, and performs audio adjustment processing on the audio file.
In some examples, the processing device 600 further comprises a forwarding module; the sending module is used for sending each video processing task and the audio processing task to the forwarding module respectively; the fragmentation module 602 is configured to pull each video processing task from the forwarding module; the adjusting module 604 pulls the audio processing task from the forwarding module.
In some examples, after transcoding the video fragment, the fragment module 602 creates a video processing completion task for the transcoded video fragment, and sends the video processing completion task to the video merge module 605; the video processing completion task comprises storage information of the transcoded video fragments and identifications of the video fragments; after the audio file is subjected to the audio adjustment, the adjustment module 604 creates an audio processing completion task for the audio file subjected to the audio adjustment, and sends the audio processing completion task to the processing module 606; the audio processing completion task comprises storage information of the audio file subjected to the audio adjustment processing and an identifier of the audio file; after the video merging module 605 receives the video processing completion task of each video fragment, acquiring the transcoded video fragment according to the storage information of the transcoded video fragment in the video processing completion task, and merging the transcoded video fragment according to the identifier of the video fragment to obtain a third video file; the processing module 606 acquires the audio file subjected to the audio adjustment according to the storage information of the audio file subjected to the audio adjustment in the audio processing completion task, and combines the audio file subjected to the audio adjustment with the third video file to obtain a fourth video file.
In some examples, after the fragmentation module 602 creates a video processing completion task, the video processing completion task is sent to the forwarding module; after the adjusting module 604 creates an audio processing completion task, the audio processing completion task is sent to the forwarding module; when the forwarding module receives the video processing completion task of each video fragment and the audio processing completion task of the audio file, the forwarding module performs task combination on the video processing completion task of each video fragment and the audio processing completion task of the audio file to obtain a media processing task; the video merging module 605 pulls the media processing task from the forwarding module and obtains the video processing completion task of each video fragment from the media processing task by parsing, and the processing module 606 pulls the media processing task from the forwarding module and obtains the audio processing completion task of the audio file from the media processing task by parsing.
Based on the system 100, the present application further provides a method for processing media data, which is applied to a processing system. As shown in fig. 8, the method 800 includes the following steps:
step 801: and acquiring audio data from the first video file to be processed.
The first video file is an original video file (i.e., contains original audio data and original image data of the first video file).
Step 802: and generating an audio file according to the audio data.
Step 803: and carrying out fragment processing on the first video file to obtain a plurality of video fragments, and respectively carrying out transcoding processing on the plurality of video fragments.
It should be noted that, when the video segment in step 803 is transcoded, no processing is performed on the audio data in the video segment, and only transcoding is performed on the image data in the video segment.
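As an illustration of this point (not taken from the patent), an ffmpeg invocation for transcoding one such fragment could re-encode only the video stream while copying the audio stream untouched; the codec, bitrate and file names are assumptions.

```python
import subprocess

# Re-encode only the image data of a fragment cut from the first video file; the audio data
# contained in the fragment is copied as-is rather than processed.
subprocess.run(
    ["ffmpeg", "-y", "-i", "fragment_000.mp4",
     "-c:v", "libx264", "-b:v", "1500k",  # transcode the image data
     "-c:a", "copy",                      # leave the audio data in the fragment untouched
     "fragment_000_transcoded.mp4"],
    check=True,
)
```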
Step 804: and carrying out audio adjustment processing on the audio file.
Step 805: and merging the transcoded plurality of video fragments to obtain a second video file.
Step 806: and combining the audio file subjected to the audio adjustment processing with the second video file to obtain a third video file.
It should be noted that when the audio file is merged with the second video file in step 806, the audio file replaces the original audio data existing in the second video file.
In some examples, the processing method 800 applies to a processing system that includes a file processing module and a transcoding module; the file processing module executes the step of acquiring audio data from a first video file to be processed, the step of carrying out fragment processing on the first video file to obtain a plurality of video fragments and the step of generating an audio file according to the audio data; the processing method further comprises the following steps: the file processing module creates a video processing task for each video fragment, wherein the video processing task comprises storage information of the video fragment, and creates an audio processing task for the audio file, wherein the audio processing task comprises the storage information of the audio file; the file processing module sends each video processing task and the audio processing task to the transcoding module respectively; wherein the step of performing audio adjustment processing on the audio file and the step of performing transcoding processing on the plurality of video fragments are performed in parallel, and include: when a video processing task is received, the transcoding module acquires the video fragment according to the storage information of the video fragment in the video processing task and transcodes the video fragment; when an audio processing task is received, the transcoding module acquires the audio file according to the storage information of the audio file in the audio processing task and performs audio adjustment processing on the audio file.
Corresponding to a processing method 800 of media data, the present application also provides a processing system of media data, as shown in fig. 9, where the processing system 900 includes: a file processing module 901, a transcoding module 902 and a merging processing module 903.
The file processing module 901 obtains audio data from a first video file to be processed, generates an audio file according to the audio data, and performs fragment processing on the first video file to obtain a plurality of video fragments.
The transcoding module 902 performs transcoding processing on the plurality of video fragments respectively, and performs audio adjustment processing on the audio file.
The merge processing module 903 is configured to merge the transcoded multiple video segments to obtain a second video file, and merge the audio file subjected to the audio adjustment processing with the second video file to obtain a third video file.
It should be noted that the file processing module, the transcoding module, the merging processing module, and the scheduling module described above may be disposed in the same module device (e.g., in the processing server), may also be disposed in several different module devices (e.g., the file processing module, the transcoding module, and the merging processing module are disposed in the processing server, and the scheduling module is disposed in the scheduling server), and may also be disposed in different module devices respectively (e.g., the file processing module is disposed in the file processing server, the transcoding module is disposed in the transcoding server, the merging processing module is disposed in the merging processing server, and the scheduling module is disposed in the scheduling server). These modules may also be referred to as servers, such as file processing servers, transcoding servers, merge processing servers, and dispatch servers.
Corresponding to the method 800 for processing media data, the present application also proposes a device for processing media data. As shown in fig. 10, the device 1000 includes an acquisition module 1001, a generation module 1002, a slicing module 1003, an adjustment module 1004, a video merging module 1005 and a processing module 1006.
The obtaining module 1001 obtains audio data from a first video file to be processed.
The generating module 1002 generates an audio file according to the extracted audio data.
The fragmentation module 1003 performs fragmentation processing on the first video file to obtain a plurality of video fragments, and performs transcoding processing on the plurality of video fragments respectively.
And the adjusting module 1004 is used for performing audio adjusting processing on the audio file.
The video merging module 1005 is configured to merge the transcoded multiple video segments to obtain a second video file.
The processing module 1006 merges the audio file subjected to the audio adjustment processing and the second video file to obtain a third video file.
In some examples, the processing device 1000 further comprises: the video creating module is used for creating a video processing task for each video fragment, the video processing task comprises storage information of the video fragment, the audio creating module is used for creating an audio processing task for the audio file, and the audio processing task comprises the storage information of the audio file; the video creating module sends each video processing task to the slicing module 1003, and the audio creating module sends the audio processing task to the adjusting module 1004; the adjusting module 1004 and the fragment module 1003 execute in parallel, and when a video processing task is received, the fragment module 1003 acquires the video fragment according to the storage information of the video fragment in the video processing task and transcodes the video fragment; when an audio processing task is received, the adjusting module 1004 obtains the audio file according to the storage information of the audio file in the audio processing task, and performs audio adjustment processing on the audio file.
It should be noted that the specific implementation of the modules described in the foregoing can be implemented according to the specific implementation of the steps described in the foregoing.
Fig. 11 shows a block diagram of the components of a computing device 1100 in which the processing apparatus 600 and/or the processing apparatus 1000 are located. This computing device 1100 may be a server. As shown in fig. 11, the computing device includes one or more processors (CPUs) 1102, a communications module 1104, memory 1106, a user interface 1110, and a communication bus 1108 for interconnecting these components.
The processor 1102 may receive and transmit data via the communication module 1104 to enable network communications and/or local communications.
The user interface 1110 includes one or more output devices 1112, including one or more speakers and/or one or more visual displays. The user interface 1110 also includes one or more input devices 1114, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture-capture camera or other input buttons or controls, and the like.
Memory 1106 may be high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 1106 stores a set of instructions executable by the processor 1102, including:
an operating system 1116, including programs for handling various basic system services and for performing hardware-related tasks;
the application 1118 includes various application programs for video playing, and such application programs can implement the processing flows in the above examples. For example, the application 1118 may include some or all of the modules in the processing apparatus 600 shown in fig. 6; at least one of the modules 601 to 606 may store machine-executable instructions, and the processor 1102 can implement the functions of at least one of the modules 601 to 606 by executing the machine-executable instructions of that module in the memory 1106.
Similarly, the application 1118 may include some or all of the modules in the processing apparatus 1000 shown in fig. 10; at least one of the modules 1001 to 1006 may store machine-executable instructions, and the processor 1102 can implement the functions of at least one of the modules 1001 to 1006 by executing the machine-executable instructions of that module in the memory 1106.
It should be noted that not all steps and modules in the above flows and structures are necessary, and some steps or modules may be omitted according to actual needs. The execution sequence of the steps is not fixed and can be adjusted according to the needs. The division of each module is only for convenience of describing adopted functional division, and in actual implementation, one module may be divided into multiple modules, and the functions of multiple modules may also be implemented by the same module, and these modules may be located in the same device or in different devices.
The hardware modules in the embodiments may be implemented in hardware or a hardware platform plus software. The software includes machine-readable instructions stored on a non-volatile storage medium. Thus, embodiments may also be embodied as software products.
In various examples, the hardware may be implemented by specialized hardware or hardware executing machine-readable instructions. For example, the hardware may be specially designed permanent circuits or logic devices (e.g., special purpose processors, such as FPGAs or ASICs) for performing the specified operations. The hardware may also include programmable logic devices or circuits temporarily configured by software (e.g., including a general purpose processor or other programmable processor) to perform certain operations.
In addition, each example of the present application can be realized by a data processing program executed by a data processing apparatus such as a computer; such a data processing program itself constitutes the present application. Further, the data processing program, which is generally stored in a storage medium, is executed by reading the program directly out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or a memory) of the data processing apparatus. Such a storage medium therefore also constitutes the present application: the present application also provides a non-volatile storage medium storing a data processing program, which can be used to carry out any one of the above-mentioned method examples of the present application.
Machine-readable instructions corresponding to the modules in fig. 6 and/or fig. 10 may cause an operating system or the like operating on the computer to perform some or all of the operations described herein. The nonvolatile computer-readable storage medium may be a memory provided in an expansion board inserted into the computer or written to a memory provided in an expansion unit connected to the computer. A CPU or the like mounted on the expansion board or the expansion unit may perform part or all of the actual operations according to the instructions. In addition, the devices and modules in the examples of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more devices or modules may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. A media data processing method is applied to a processing system comprising a file processing module, a scheduling module, a transcoding module and a merging processing module, and comprises the following steps:
the file processing module separates audio data and video data in a first video file to be processed to obtain a second video file containing the video data, performs fragment processing on the second video file to obtain a plurality of video fragments, and creates a video processing task for each video fragment, wherein the video processing task at least comprises a downloadable path of the video fragment; generating an audio file according to the audio data obtained by separation, and creating an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; respectively sending each video processing task and the audio processing task to the scheduling module;
when no other processing task exists at present, the transcoding module pulls a video processing task or the audio processing task from the scheduling module, and executes the following processing:
when a video processing task is pulled, downloading a video fragment through a downloadable path, transcoding the video fragment, creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
when the audio processing task is pulled, the audio file is downloaded through a downloadable path, the audio file is subjected to volume adjustment, then a volume processing completion task is created, the volume processing completion task at least comprises the downloadable path of the audio file subjected to volume adjustment, and the volume processing completion task is sent to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
the scheduling module performs task combination on each video processing completion task and the audio processing completion task to obtain a media processing task;
and when no media processing task exists currently, the merging processing module pulls the media processing task from the scheduling module, and merges the transcoded plurality of video fragments and the audio file subjected to the volume adjustment processing through a downloadable path to obtain a fourth video file.
2. The processing method of claim 1, further comprising:
determining playing information of the transcoded plurality of video fragments according to the attribute information of the transcoded plurality of video fragments;
and according to the playing information of the video fragments, executing the step of merging the transcoded video fragments and the audio file subjected to the volume adjustment to obtain a fourth video file.
3. The processing method according to claim 2, wherein the playback information comprises: playing time;
the merging the transcoded video fragments and the audio file subjected to the volume adjustment processing to obtain a fourth video file includes:
according to the playing time of the video fragments, a plurality of audio fragments are divided from the audio file which is subjected to volume adjustment processing, and the audio fragments respectively correspond to the playing time of the video fragments;
merging the plurality of audio fragments to generate a target audio file; and
merging the transcoded plurality of video fragments to obtain a third video file, and merging the target audio file and the third video file to obtain a fourth video file.
4. The processing method according to claim 1,
wherein the step of adjusting the volume of the audio file and the step of transcoding the video fragment are performed in parallel.
5. The processing method of claim 1, further comprising:
the file processing module respectively stores the plurality of video fragments into the downloadable paths configured by the file processing module.
6. The processing method of claim 1, further comprising:
and the file processing module stores the audio file into a downloadable path configured by the file processing module.
7. The processing method according to claim 1, wherein the video processing completion task further comprises an identification of the corresponding video slice;
the audio processing completion task further comprises an identification of the audio file.
8. The processing method of claim 1, wherein after the merge processing module pulls the media processing task from the scheduling module, the method further comprises:
and the merging processing module analyzes the media processing tasks to obtain the video processing completion task and the audio processing completion task of each video fragment.
9. A media data processing method is applied to a processing system comprising a file processing module, a scheduling module, a transcoding module and a merging processing module, and comprises the following steps:
the file processing module acquires audio data from a first video file to be processed, generates an audio file according to the audio data, and creates an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; carrying out fragment processing on the first video file to obtain a plurality of video fragments, and creating a video processing task for each video fragment, wherein the video processing task at least comprises a downloadable path of the video fragment; respectively sending each video processing task and the audio processing task to the scheduling module;
when no other processing task exists at present, the transcoding module pulls a video processing task or the audio processing task from the scheduling module, and executes the following processing:
when a video processing task is pulled, downloading a video fragment through a downloadable path, transcoding the video fragment, creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
when the audio processing task is pulled, the audio file is downloaded through a downloadable path, the audio file is subjected to volume adjustment, then a volume processing completion task is created, the volume processing completion task at least comprises the downloadable path of the audio file subjected to volume adjustment, and the volume processing completion task is sent to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
the scheduling module performs task combination on each video processing completion task and the audio processing completion task to obtain a media processing task;
and when no media processing task exists currently, the merging processing module pulls the media processing task from the scheduling module, and merges the transcoded plurality of video fragments and the volume-adjusted audio file through a downloadable path to obtain a third video file.
10. The processing method according to claim 9, wherein the video processing task further comprises an identification of the corresponding video slice; the audio processing completion task further comprises an identification of the audio file.
11. A processing system for media data, the processing system comprising: a file processing module, a transcoding module, a scheduling module and a merging processing module; wherein,
the file processing module is used for separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data, carrying out fragment processing on the second video file to obtain a plurality of video fragments, and creating a video processing task for each video fragment, wherein the video processing task at least comprises a downloadable path of the video fragment; generating an audio file according to the audio data obtained by separation, and creating an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; respectively sending each video processing task and the audio processing task to the scheduling module;
the transcoding module pulls a video processing task or the audio processing task from the scheduling module when no other processing task exists currently, and executes the following processing:
when a video processing task is pulled, downloading a video fragment through a downloadable path, transcoding the video fragment, creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
when the audio processing task is pulled, the audio file is downloaded through a downloadable path, the audio file is subjected to volume adjustment, then a volume processing completion task is created, the volume processing completion task at least comprises the downloadable path of the audio file subjected to volume adjustment, and the volume processing completion task is sent to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
the scheduling module is used for merging each video processing completion task and the audio processing completion task to obtain a media processing task;
and the merging processing module is used for pulling the media processing task from the scheduling module when no media processing task exists currently, merging the plurality of transcoded video fragments and the audio file subjected to the volume adjustment processing through a downloadable path, and obtaining a fourth video file.
12. An apparatus for processing media data, the apparatus comprising:
the acquisition module is used for separating audio data and video data in a first video file to be processed to obtain a second video file containing the video data;
the fragment module is used for carrying out fragment processing on the second video file to obtain a plurality of video fragments, and for each video fragment, a video processing task is created, wherein the video processing task at least comprises a downloadable path of the video fragment; respectively sending each video processing task to a scheduling module; when no other processing task exists at present, pulling a video processing task from the scheduling module, downloading a video fragment through a downloadable path, transcoding the video fragment, and then creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
the generating module is used for generating an audio file according to the audio data obtained by separation and creating an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; sending the audio processing task to the scheduling module;
the adjustment module is configured to, when no other processing task currently exists, pull the audio processing task from the scheduling module, download the audio file through a downloadable path, perform volume adjustment on the audio file, then create a volume processing completion task, wherein the volume processing completion task at least comprises the downloadable path of the volume-adjusted audio file, and send the volume processing completion task to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
and the processing module is used for pulling the media processing task from the scheduling module when no media processing task exists currently, merging the plurality of transcoded video fragments and the audio file subjected to the volume adjustment processing through a downloadable path to obtain a fourth video file, wherein the scheduling module is used for merging each video processing completion task and the audio processing completion task to obtain the media processing task.
13. A processing system for media data, the processing system comprising: a file processing module, a scheduling module, a transcoding module and a merging processing module; wherein,
the file processing module acquires audio data from a first video file to be processed, generates an audio file according to the audio data, and creates an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; carrying out fragment processing on the first video file to obtain a plurality of video fragments, and creating a video processing task for each video fragment, wherein the video processing task at least comprises a downloadable path of the video fragment; respectively sending each video processing task and the audio processing task to the scheduling module;
the transcoding module pulls a video processing task or the audio processing task from the scheduling module when no other processing task exists currently, and executes the following processing:
when a video processing task is pulled, downloading a video fragment through a downloadable path, transcoding the video fragment, creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
when the audio processing task is pulled, the audio file is downloaded through a downloadable path, the audio file is subjected to volume adjustment, then a volume processing completion task is created, the volume processing completion task at least comprises the downloadable path of the audio file subjected to volume adjustment, and the volume processing completion task is sent to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
the scheduling module is used for merging each video processing completion task and the audio processing completion task to obtain a media processing task;
and the merging processing module is used for pulling the media processing task from the scheduling module when no media processing task exists currently, merging the transcoded plurality of video fragments and the audio file with the adjusted volume through a downloadable path, and obtaining a third video file.
14. An apparatus for processing media data, the apparatus comprising:
the acquisition module is used for acquiring audio data from a first video file to be processed;
the generating module is used for generating an audio file according to the audio data and creating an audio processing task, wherein the audio processing task at least comprises a downloadable path of the audio file; sending the audio processing task to a scheduling module;
the fragment module is used for carrying out fragment processing on the first video file to obtain a plurality of video fragments, and for each video fragment, a video processing task is created, wherein the video processing task at least comprises a downloadable path of the video fragment; respectively sending each video processing task to the scheduling module; when no other processing task exists at present, pulling a video processing task from the scheduling module, downloading a video fragment through a downloadable path, transcoding the video fragment, and then creating a video processing completion task, wherein the video processing completion task at least comprises the downloadable path of the transcoded video fragment, and sending the video processing completion task to the scheduling module;
the adjustment module is configured to, when no other processing task currently exists, pull the audio processing task from the scheduling module, download the audio file through a downloadable path, perform volume adjustment on the audio file, then create a volume processing completion task, wherein the volume processing completion task at least comprises the downloadable path of the volume-adjusted audio file, and send the volume processing completion task to the scheduling module, wherein the volume adjustment mode comprises the following steps: adjusting the volume of the audio file to a preset volume range;
and the processing module is used for pulling the media processing task from the scheduling module when no media processing task exists currently, merging the plurality of video fragments subjected to transcoding processing and the audio file subjected to volume adjustment through a downloadable path to obtain a third video file, wherein the scheduling module is used for merging each video processing completion task and the audio processing completion task to obtain the media processing task.
15. A computer readable storage medium, storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the method of any of claims 1-10.
16. A computing device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, implement the method of any of claims 1-10.
CN201810145069.1A 2018-02-12 2018-02-12 Method, system, device, equipment and storage medium for processing media data Active CN110149518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810145069.1A CN110149518B (en) 2018-02-12 2018-02-12 Method, system, device, equipment and storage medium for processing media data

Publications (2)

Publication Number Publication Date
CN110149518A CN110149518A (en) 2019-08-20
CN110149518B true CN110149518B (en) 2023-03-24

Family

ID=67589375


Country Status (1)

Country Link
CN (1) CN110149518B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111107443A (en) * 2019-12-26 2020-05-05 陕西美亚秦安信息科技有限公司 DASH fragment file merging method, terminal device and storage medium
CN111246243A (en) * 2020-01-15 2020-06-05 天脉拓道(北京)科技有限公司 File encoding and decoding method and device, terminal and storage medium
CN113709412B (en) * 2020-05-21 2023-05-19 中国电信股份有限公司 Live stream processing method, device and system and computer readable storage medium
CN112312164A (en) * 2020-10-16 2021-02-02 安擎(天津)计算机有限公司 Video transcoding system based on distributed transcoding server
CN112423024A (en) * 2020-11-18 2021-02-26 北京乐学帮网络技术有限公司 Video transcoding method and device, computer equipment and storage medium
CN115883874A (en) * 2022-01-27 2023-03-31 北京中关村科金技术有限公司 Compliance service detection method and device based on file
CN116156249A (en) * 2023-02-14 2023-05-23 北京网梯科技发展有限公司 Video stitching method, device and equipment based on multiple players
CN117768715B (en) * 2024-02-22 2024-05-28 北京搜狐新媒体信息技术有限公司 Data processing system, method, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101640057A (en) * 2009-05-31 2010-02-03 北京中星微电子有限公司 Audio and video matching method and device therefor
CN103237258A (en) * 2013-03-29 2013-08-07 天脉聚源(北京)传媒科技有限公司 System and method for automatically adjusting video volume
CN103605709A (en) * 2013-11-12 2014-02-26 天脉聚源(北京)传媒科技有限公司 Distributed audio and video processing device and distributed audio and video processing method
CN103929655A (en) * 2014-04-25 2014-07-16 网易传媒科技(北京)有限公司 Method and device for transcoding audio and video file
CN106851336A (en) * 2017-02-07 2017-06-13 上海网达软件股份有限公司 The audio-video document code-transferring method and system of a kind of Dynamic Resource Allocation for Multimedia


Also Published As

Publication number Publication date
CN110149518A (en) 2019-08-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant