CN103605710B - Distributed audio/video processing apparatus and processing method - Google Patents

Distributed audio/video processing apparatus and processing method

Info

Publication number
CN103605710B
CN103605710B (application CN201310558626.XA)
Authority
CN
China
Prior art keywords
video
audio
audio data
video data
data segments
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310558626.XA
Other languages
Chinese (zh)
Other versions
CN103605710A (en)
Inventor
Wu Yue (武悦)
Current Assignee
TVMining Beijing Media Technology Co Ltd
Original Assignee
TVMining Beijing Media Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by TVMining Beijing Media Technology Co Ltd filed Critical TVMining Beijing Media Technology Co Ltd
Priority to CN201310558626.XA priority Critical patent/CN103605710B/en
Publication of CN103605710A publication Critical patent/CN103605710A/en
Application granted granted Critical
Publication of CN103605710B publication Critical patent/CN103605710B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor, of video data
    • G06F16/60: Information retrieval; Database structures therefor; File system structures therefor, of audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention provides a distributed audio/video file processing system, comprising: an input processing unit for receiving a source video file, processing it to obtain video data and audio data, dividing the video data and the audio data in sequence into video data segments and audio data segments respectively, and distributing the divided segments to the corresponding video data processing units and audio data processing units for processing according to a certain allocation rule; several video data processing units, each for processing the divided video data segments; several audio data processing units, each for processing the divided audio data segments; an output processing unit for processing and outputting the processed video data segments and audio data segments; and a scheduling unit for coordinating the work of the input processing unit, the video and audio data processing units, and the output processing unit. The present invention also provides a distributed audio/video file processing method and a distributed audio/video file processing apparatus.

Description

Distributed audio and video processing device and processing method
Technical Field
The present invention relates to an apparatus and method for processing data using a computer or a data processing apparatus, and more particularly, to an apparatus and method for processing an audio/video file using a distributed computer or a data processing apparatus.
Background Art
With the development of networks and the cultural industry, audio and video resources have been greatly enriched, and the demand for processing audio and video files has grown rapidly.
The processing of an audio/video file roughly comprises the following flow: first, the file to be processed is decapsulated into a video frame sequence and an audio frame sequence; the video frame sequence and the audio frame sequence are then decoded into RAW-format and PCM-format data respectively; the RAW-format and PCM-format data are processed; the processed data are then encoded into a video frame sequence and an audio frame sequence in the required formats; finally, the video frame sequence and the audio frame sequence are packaged into the required file format.
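The flow above can be sketched end to end. The following is a minimal, purely illustrative Python skeleton: the function names (`demux`, `decode_video`, `mux`, and so on), the dictionary-based "container" representation, and the default formats are assumptions standing in for a real codec library, not an API from the patent.

```python
# Hypothetical sketch of the single-machine flow described above.
# All function names and data shapes are illustrative placeholders.

def demux(container):
    """Decapsulate a 'container file' into video and audio frame sequences."""
    return container["video_frames"], container["audio_frames"]

def decode_video(frames):
    """Decode video frames into RAW-format picture data (placeholder)."""
    return [("RAW", f) for f in frames]

def decode_audio(frames):
    """Decode audio frames into PCM-format sample data (placeholder)."""
    return [("PCM", f) for f in frames]

def process(data):
    """The actual editing/filtering step; identity here."""
    return data

def encode(data, fmt):
    """Re-encode processed data into the required frame format."""
    return [(fmt, payload) for _, payload in data]

def mux(video_frames, audio_frames, fmt):
    """Package the frame sequences into the required file format."""
    return {"format": fmt, "video_frames": video_frames, "audio_frames": audio_frames}

def transcode(container, vfmt="h264", afmt="aac", out_fmt="mp4"):
    v, a = demux(container)
    v = encode(process(decode_video(v)), vfmt)
    a = encode(process(decode_audio(a)), afmt)
    return mux(v, a, out_fmt)
```

Every stage here is sequential; the system described below parallelizes the middle decode-process-encode stages across machines.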
The above processing is performed by a computer, or by a data processing apparatus built from computers, using its own software and hardware resources. The problem with this approach is that the amount of computation involved in audio and video file processing is huge and consumes a great deal of computing power and storage. As high-definition audio and video files become more common and processing demands grow, the bottleneck of single-machine processing becomes increasingly prominent: processing is slow and the system is prone to crashing. Even with a highly configured computer it is difficult to guarantee processing speed and stability, and in particular it is difficult to satisfy large-batch, time-critical processing tasks.
In view of the above problems in the prior art, the present invention provides a distributed processing system. Parallel processing is realized by using a plurality of computers or processing devices, so that the time required for processing is greatly reduced, the processing pressure on any single machine is relieved, and the possibility of system crashes is lowered. Because a comprehensive monitoring mechanism is employed, processing reliability is high and quality requirements can be fully met.
Disclosure of Invention
A first aspect of the present invention provides a distributed audio/video file processing system, including: the input processing unit is used for receiving a source video file, processing the source video file to obtain video data and audio data, respectively dividing the video data and the audio data into video data segments and audio data segments in sequence, and distributing the divided video data segments and audio data segments to the corresponding video data processing unit and audio data processing unit according to a certain distribution rule for processing; the plurality of video data processing units are respectively used for processing the segmented video data fragments; the plurality of audio data processing units are respectively used for processing the segmented audio data segments; the output processing unit is used for processing and outputting the processed video data segment and the processed audio data segment; and the scheduling unit is used for coordinating the work of the input processing unit, the video data processing units, the audio data processing units and the output processing unit.
Preferably, after the input processing unit divides the video data and the audio data into video data segments and audio data segments in sequence, the sequence numbers of the video data segments and the audio data segments are subjected to modulo operation according to the number of the video data processing units and the audio data processing units, and the video data segments and the audio data segments are distributed to the corresponding video data processing units and audio data processing units for processing according to the result of the modulo operation.
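The allocation rule just described is plain modulo arithmetic over segment sequence numbers. A minimal sketch, assuming zero-based sequence numbers and a fixed unit count (both assumptions; the patent does not fix a numbering origin):

```python
# Modulo allocation rule: segment with sequence number i goes to
# processing unit i mod N, where N is the number of units of that type.

def allocate(num_segments, num_units):
    """Map each segment's sequence number to a unit index via modulo."""
    return {seq: seq % num_units for seq in range(num_segments)}

# Example: 10 video data segments spread over 3 video data processing units.
assignment = allocate(10, 3)
```

With this rule the segments are spread round-robin over the units, so each unit receives roughly the same share of the work.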
Preferably, the input processing unit has a first monitoring module, a first processing module and a first transmission module; each video data processing unit is respectively provided with a second monitoring module, a second processing module and a second transmission module; each audio data processing unit is respectively provided with a third monitoring module, a third processing module and a third transmission module; the output processing unit is provided with a fourth monitoring module, a fourth processing module and a fourth transmission module; the first monitoring module, the second monitoring module, the third monitoring module and the fourth monitoring module are respectively in communication connection with the scheduling unit, receive related instructions from the scheduling unit and respectively report the running state of the related processing unit to the scheduling unit; the first transmission module is respectively in communication connection with each second transmission module and each third transmission module, and sends corresponding video data segments and audio data segments to be processed to each second transmission module and each third transmission module respectively; the fourth transmission module is respectively in communication connection with each second transmission module and each third transmission module, and receives the processed video data segment and the processed audio data segment.
Preferably, the input processing unit decapsulates the received source file to obtain a video sequence and an audio sequence, and performs segmentation processing on the video sequence and the audio sequence, respectively.
Preferably, the output processing unit, after determining that all the processed video data segments and audio data segments are received from the video data processing units and the audio data processing units, merges the received video data segments and audio data segments, and packages the merged video data segments and audio data segments according to a predetermined format.
Preferably, the scheduling unit communicates with the first, second, third and fourth monitoring modules through address information obtained by a system configuration file.
Preferably, the first, second, third and fourth monitoring modules respectively periodically send the running state information of the corresponding processing unit to the scheduling unit; and the scheduling unit maintains a system configuration file according to the running state information and sends the updated system configuration file to the first monitoring module, the second monitoring module, the third monitoring module and the fourth monitoring module respectively.
Preferably, the content of the system configuration file includes the number of the video data processing units, the physical address, the operating status, and the working directory, and the number of the audio data processing units, the physical address, the operating status, and the working directory.
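The patent lists the configuration file's contents but not its on-disk format. The sketch below assumes JSON and invents the field names (`addr`, `status`, `workdir`); only the categories of information (unit count, physical address, operating status, working directory) come from the text above.

```python
# Illustrative layout of the system configuration file maintained by the
# scheduling unit.  JSON and all field names are assumptions.
import json

config_text = json.dumps({
    "video_units": [
        {"addr": "192.168.0.11", "status": "running", "workdir": "/srv/video/0"},
        {"addr": "192.168.0.12", "status": "idle",    "workdir": "/srv/video/1"},
    ],
    "audio_units": [
        {"addr": "192.168.0.21", "status": "running", "workdir": "/srv/audio/0"},
    ],
})

config = json.loads(config_text)
# The unit count doubles as the modulo base for segment allocation.
num_video_units = len(config["video_units"])
num_audio_units = len(config["audio_units"])
```

Because the scheduling unit redistributes this file whenever it changes, every monitoring module always sees the current unit counts and addresses.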
Preferably, when an error occurs in the processing of one of the video data processing units or audio data processing units, the corresponding second or third monitoring module reports the error to the scheduling unit; the scheduling unit then determines whether to re-execute the erroneous processing or to ignore the error.
Preferably, the input processing unit adds the erroneous video data segments and/or audio data segments to the remaining video data segments and/or audio data segments that have not yet been sent, re-sequences them, performs modulo arithmetic on the new sequence numbers according to the number of video data processing units and/or audio data processing units, and re-allocates each segment to the corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo arithmetic.
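The retry rule above (append erroneous segments to the unsent ones, renumber, redistribute by modulo) can be sketched as follows; the segment identifiers and unit count are illustrative assumptions:

```python
# Retry rule sketch: erroneous segments join the still-unsent segments,
# the combined list is renumbered from 0, and each segment is mapped to
# a unit by the same modulo rule used for the original allocation.

def redistribute(unsent, erroneous, num_units):
    """Renumber unsent + erroneous segments and map each to a unit index."""
    pending = list(unsent) + list(erroneous)
    return {seg: new_seq % num_units for new_seq, seg in enumerate(pending)}

# Segments v7 and v8 were never sent; v3 failed and must be redone.
plan = redistribute(["v7", "v8"], ["v3"], num_units=2)
```

Renumbering before the modulo keeps the load balanced even after failures, since the retried segments are folded into the normal distribution instead of all landing on one unit.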
Preferably, the input processing unit obtains the video frame number, the audio frame number, and the audio-video encoding format of the source video file after decapsulation, and sends the video frame number, the audio frame number, and the audio-video encoding format to the corresponding video data processing unit and the audio data processing unit.
Preferably, the input processing unit sends the number of divided video data segments and the number of divided audio data segments of the source file to the output processing unit.
Preferably, the input processing unit inserts into each divided video data segment and audio data segment the source file information of the segment, its sequence number among all video or audio data segments of the source file, and file type information.
Preferably, the input processing unit inserts the source file information, the sequence numbers in all the video or audio data segments of the source file, and the file type information into the file names of the divided video data segments and audio data segments.
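The file-name embedding just described can be sketched as a pair of helpers. The `<source>_<seq>.<type>` pattern is an assumption: the patent only says that the three pieces of information are placed in the name, not how they are delimited.

```python
# Hypothetical naming scheme carrying the three items the input
# processing unit embeds: source file identity, sequence number among
# the source file's segments, and file type (video or audio).

def segment_name(source_id, seq, kind):
    """Build a segment file name, e.g. 'movie01_0003.video'."""
    return f"{source_id}_{seq:04d}.{kind}"

def parse_segment_name(name):
    """Recover (source_id, seq, kind) from a segment file name."""
    stem, kind = name.rsplit(".", 1)
    source_id, seq = stem.rsplit("_", 1)
    return source_id, int(seq), kind
```

Because the identity travels in the name itself, the output processing unit can attribute any received segment to its source file and slot without extra metadata traffic.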
Preferably, the output processing unit establishes a receiving mapping table of the source file according to the number of the divided video data segments and the number of the divided audio data segments, and marks the received processed data segments as received in the receiving mapping table according to the source file information of the segments inserted into the divided video data segments and audio data segments by the input processing unit, the sequence numbers in all the video or audio data segments of the source file, and the file type information; when the output processing unit judges that all the video data segments and the audio data segments in the receiving mapping table are marked as received, the processed video data segments and the processed audio data segments are integrated to obtain processed audio and video files; and when the output processing unit judges that video data segments and audio data segments are marked as not received in the receiving mapping table after a certain time, the output processing unit sends the corresponding missing information of the segmented video data segments or/and the segmented audio data segments to the scheduling unit.
Preferably, the receiving mapping table further includes the number of received video data segments and the number of audio data segments; the output processing unit judges whether the number of the video data segments and the number of the audio data segments in the receiving mapping table are consistent with the number of the video data segments and the number of the audio data segments sent by the input processing unit, and then judges whether all the video data segments and the audio data segments in the receiving mapping table are marked as received.
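The receiving mapping table described in the two paragraphs above amounts to one received/not-received flag per expected segment plus a completeness check before merging. A minimal sketch, with class and method names as assumptions:

```python
# Sketch of the output processing unit's receiving mapping table:
# one boolean per expected (type, sequence-number) slot, filled in as
# processed segments arrive, plus completeness and missing-slot queries.

class ReceiveMap:
    def __init__(self, n_video, n_audio):
        # Build one slot per expected video and audio segment.
        self.expected = {("video", i): False for i in range(n_video)}
        self.expected.update({("audio", i): False for i in range(n_audio)})

    def mark_received(self, kind, seq):
        """Mark a processed segment as received."""
        self.expected[(kind, seq)] = True

    def complete(self):
        """True once every expected segment has arrived; merging may start."""
        return all(self.expected.values())

    def missing(self):
        """Slots still unreceived, to be reported to the scheduling unit."""
        return [key for key, got in self.expected.items() if not got]
```

After a timeout, whatever `missing()` returns is exactly the "missing information" the output processing unit sends to the scheduling unit for redistribution.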
Preferably, when the scheduling unit receives the missing information, the scheduling unit instructs the input processing unit to perform modulo operation on the sequence numbers of the missing video data segments and/or audio data segments according to the number of the current video data processing units and/or audio data processing units, reallocates each missing video data segment and/or audio data segment to a corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo operation, and sends the video data segment and/or audio data segment after being processed again to the output processing unit.
Preferably, when the scheduling unit receives the missing information of the video data segments, the scheduling unit instructs the input processing unit to number all the missing video data segments in sequence again, performs modulo operation on the numbers according to the number of the current video data processing units, reallocates each missing video data segment to a corresponding video data processing unit for processing according to the result of the modulo operation, and sends the video data segment after being processed again to the output processing unit; and when the scheduling unit receives the missing information of the audio data segments, the scheduling unit instructs the input processing unit to number all the missing audio data segments again according to the sequence, performs modular operation on the numbers according to the number of the current audio data processing units, reallocates each missing audio data segment to the corresponding audio data processing unit according to the result of the modular operation for processing, and sends the audio data segments after being processed again to the output processing unit.
Another aspect of the present invention provides an audio/video file processing method, including: an input processing step, namely receiving a source video file through an input processing unit, processing the source video file to obtain video data and audio data, respectively dividing the video data and the audio data into video data segments and audio data segments in sequence, and allocating the divided video data segments and audio data segments to corresponding video data processing units and audio data processing units according to a certain allocation rule for processing; a video data processing step, wherein a plurality of video data processing units are respectively used for processing the segmented video data segments; an audio data processing step, wherein a plurality of audio data processing units are respectively used for processing the segmented audio data segments; an output processing step of processing and outputting the processed video data segment and audio data segment by an output processing unit; and a scheduling step, in which the input processing unit, the plurality of video data processing units, the plurality of audio data processing units and the output processing unit are coordinated through a scheduling unit.
Preferably, in the input processing step, after the video data and the audio data are respectively sequentially divided into video data segments and audio data segments, modulo arithmetic is performed on the sequence number of each video data segment and audio data segment according to the number of video data processing units and audio data processing units respectively, and each segment is allocated to the corresponding video data processing unit or audio data processing unit for processing according to the result of the modulo arithmetic.
Preferably, the input processing step includes a decapsulation step of decapsulating the received source file to obtain a video sequence and an audio sequence, and a segmentation step of segmenting the video sequence and the audio sequence respectively.
Preferably, in the output processing step, after determining that all the processed video data segments and audio data segments are received from the respective video data processing units and the respective audio data processing units, the received video data segments and audio data segments are combined and packaged in a predetermined format.
Preferably, in the scheduling step, communication is performed using address information obtained from a system configuration file.
Preferably, in the scheduling step, the operation state information of the corresponding processing unit is periodically sent to the scheduling unit, and the scheduling unit maintains a system configuration file according to the operation state information and sends an updated system configuration file.
Preferably, the content of the system configuration file includes the number of the video data processing units, the physical address, the operating status, and the working directory, and the number of the audio data processing units, the physical address, the operating status, and the working directory.
Preferably, in the video data processing step, when an error occurs in the processing of one of the video data processing units, or in the audio data processing step, when an error occurs in the processing of one of the audio data processing units, the error is reported to the scheduling unit; the scheduling unit then determines whether to re-execute the erroneous processing or to ignore the error.
Preferably, the input processing unit adds the erroneous video data segments and/or audio data segments to the remaining video data segments and/or audio data segments that have not yet been sent, re-sequences them, performs modulo arithmetic on the new sequence numbers according to the number of video data processing units and/or audio data processing units, and re-allocates each segment to the corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo arithmetic.
Preferably, in the input processing step, the decapsulated video frame number, audio frame number, and audio-video encoding format of the source video file are obtained, and the video frame number, the audio frame number, and the audio-video encoding format are sent to the corresponding video data processing unit and the audio data processing unit.
Preferably, in the input processing step, the number of divided video data segments and the number of divided audio data segments of the source file are sent to the output processing unit.
Preferably, in the input processing step, the input processing unit inserts into each divided video data segment and audio data segment the source file information of the segment, its sequence number among all video or audio data segments of the source file, and file type information.
Preferably, in the output processing step, the output processing unit establishes a receiving mapping table of the source file according to the number of the divided video data segments and the number of the divided audio data segments, and marks the received processed data segments as received in the receiving mapping table according to the source file information of the segments inserted into the divided video data segments and audio data segments by the input processing unit, the sequence numbers in all the video or audio data segments of the source file, and the file type information; when the output processing unit judges that all the video data segments and the audio data segments in the receiving mapping table are marked as received, the processed video data segments and the processed audio data segments are integrated to obtain processed audio and video files; and when the output processing unit judges that video data segments and audio data segments are marked as not received in the receiving mapping table after a certain time, the output processing unit sends the corresponding missing information of the segmented video data segments or/and the segmented audio data segments to the scheduling unit.
Preferably, the receiving mapping table further includes the number of received video data segments and the number of audio data segments; the output processing unit judges whether the number of the video data segments and the number of the audio data segments in the receiving mapping table are consistent with the number of the video data segments and the number of the audio data segments sent by the input processing unit, and then judges whether all the video data segments and the audio data segments in the receiving mapping table are marked as received.
Preferably, when the scheduling unit receives the missing information, the scheduling unit instructs the input processing unit to perform modulo operation on the sequence numbers of the missing video data segments and/or audio data segments according to the number of the current video data processing units and/or audio data processing units, reallocates each missing video data segment and/or audio data segment to a corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo operation, and sends the video data segment and/or audio data segment after being processed again to the output processing unit.
Preferably, when the scheduling unit receives the missing information of the video data segments, the scheduling unit instructs the input processing unit to number all the missing video data segments in sequence again, performs modulo operation on the numbers according to the number of the current video data processing units, reallocates each missing video data segment to a corresponding video data processing unit for processing according to the result of the modulo operation, and sends the video data segment after being processed again to the output processing unit; and when the scheduling unit receives the missing information of the audio data segments, the scheduling unit instructs the input processing unit to number all the missing audio data segments again according to the sequence, performs modular operation on the numbers according to the number of the current audio data processing units, reallocates each missing audio data segment to the corresponding audio data processing unit according to the result of the modular operation for processing, and sends the audio data segments after being processed again to the output processing unit.
The invention also provides a distributed audio and video file processing apparatus, comprising: a plurality of first servers, each for processing video data segments; a plurality of second servers, each for processing audio data segments; a third server for processing and outputting the processed video data segments and audio data segments; and a fourth server for receiving a source video file, processing it to obtain video data and audio data, dividing the video data and the audio data in sequence into video data segments and audio data segments respectively, distributing the divided segments to the corresponding first servers and second servers for processing according to a certain allocation rule, and coordinating the work among the first servers, the second servers and the third server.
Preferably, after the fourth server divides the video data and the audio data into video data segments and audio data segments in sequence, modulo arithmetic is performed on sequence numbers of the video data segments and the audio data segments according to the number of the first server and the second server, and the video data segments and the audio data segments are allocated to the corresponding first server and the second server for processing according to a result of the modulo arithmetic.
Preferably, each first server only runs one video data processing process; each second server runs only one audio data processing process.
Preferably, the fourth server decapsulates the received source file to obtain a video sequence and an audio sequence, and performs segmentation processing on the video sequence and the audio sequence, respectively.
Preferably, after determining that all the processed video data segments and audio data segments are received from the first servers and the second servers, the third server merges the received video data segments and audio data segments, and packages the merged video data segments and audio data segments according to a predetermined format.
Preferably, the fourth server obtains the video frame number, the audio frame number, and the audio-video encoding format of the source video file after decapsulation, and sends the video frame number, the audio frame number, and the audio-video encoding format to the corresponding first server and the second server.
Preferably, the fourth server sends the source file splitting information of the number of split video data segments and the number of split audio data segments to the third server.
Preferably, the fourth server inserts the source file information, the sequence numbers in all the video or audio data segments of the source file, and the file type information into the file names of the divided video data segments and audio data segments.
Preferably, the third server establishes a receiving mapping table of the source file according to the number of the segmented video data segments and the number of the segmented audio data segments, and marks the received processed data segments as received in the receiving mapping table according to the source file information of the segments inserted into the segmented video data segments and audio data segments by the fourth server, the sequence numbers in all the video or audio data segments of the source file, and the file type information; when the third server judges that all the video data segments and the audio data segments in the receiving mapping table are marked as received, the processed video data segments and the processed audio data segments are integrated to obtain processed audio and video files; and when the third server judges that video data segments and audio data segments are marked as not received in the receiving mapping table after a certain time, the third server sends the corresponding missing information of the segmented video data segments and/or the segmented audio data segments to the fourth server.
Preferably, the receiving mapping table further includes the number of received video data segments and the number of received audio data segments; after the third server judges that these numbers are consistent with the number of video data segments and the number of audio data segments sent by the fourth server, it judges whether all the video data segments and audio data segments in the receiving mapping table are marked as received.
Preferably, when the fourth server receives the missing information, performing modulo operation on the sequence numbers of the missing video data segments and/or audio data segments according to the number of the current first server and/or second server, reallocating each missing video data segment and/or audio data segment to the corresponding first server and/or second server for processing according to the result of the modulo operation, and sending the video data segment and/or audio data segment after being processed again to the third server.
Preferably, when the fourth server receives missing information of video data segments, numbering all the missing video data segments again in sequence, performing modular operation on the numbers according to the number of the current first servers, reallocating each missing video data segment to a corresponding first server for processing according to the result of the modular operation, and sending the video data segments after being processed again to the third server; and when the fourth server receives the missing information of the audio data segments, numbering all the missing audio data segments again according to the sequence, performing modular operation on the numbers according to the number of the current second servers, reallocating each missing audio data segment to the corresponding second server according to the result of the modular operation for processing, and sending the audio data segments after being processed again to the third server.
The distributed processing system and processing method of the invention solve the speed bottleneck of single-machine processing and can greatly shorten the time required to process audio and video files. Because a reliable task allocation mechanism and error correction mechanism are established, the reliability of the processing result is ensured. Meanwhile, faults such as frequent crashes when a single machine is overloaded are effectively avoided. The system is highly scalable and flexibly configurable, and is particularly suitable for processing large or numerous audio and video files.
Drawings
FIG. 1 is a block diagram of a distributed processing system according to an embodiment of the present invention;
FIG. 2 is a block diagram of an input processing module of a distributed processing system according to an embodiment of the present invention;
FIG. 3 is a block diagram of an output processing module of a distributed processing system according to an embodiment of the present invention;
FIG. 4 is a process flow diagram of a distributed processing system according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a Map file structure of a distributed processing system according to an embodiment of the present invention;
FIG. 6 is a flowchart of the processing of step S5 in the distributed processing system according to the embodiment of the present invention;
FIG. 7 is a diagram illustrating an information file structure of a distributed processing system according to an embodiment of the present invention;
FIG. 8 is a block diagram illustrating a partitioned audio and video file structure of a distributed processing system according to an embodiment of the present invention;
FIG. 9 is a flowchart of the processing of a video file in step S8 of the distributed processing system according to the embodiment of the present invention;
FIG. 10 is a flowchart of the processing of an audio file in step S8 of the distributed processing system according to the embodiment of the present invention;
FIG. 11 is a flowchart of the processing of the output processing unit in step S9 of the distributed processing system according to the embodiment of the present invention;
FIG. 12 is a diagram illustrating the structure of a reception mapping table in the distributed processing system according to the embodiment of the present invention.
Detailed Description
The invention is described below on the basis of the embodiments shown in the drawings. The embodiments disclosed herein are to be considered in all respects as illustrative and not restrictive. The scope of the present invention is not limited to the description of the embodiments, but is defined only by the scope of the claims, and includes all modifications having the same meaning as and falling within the scope of the claims.
Fig. 1 is a block diagram of a distributed processing system. As shown in fig. 1, the distributed system of the present embodiment includes a scheduling module (Dispatcher module) 1, an input processing unit 2, a plurality of video processing units 3 and 4, a plurality of audio processing units 5 and 6, an output processing unit 7, a monitoring module (Watchdog module) 8, and a client module (Client module) 9. The scheduling module 1 coordinates the operation of each part of the whole system. The input processing unit 2 includes a monitoring module (Monitor module) 21, an input processing module (Ingress module) 22, and a transmission module (Offer module) 23; preferably, the monitoring module 21, the input processing module 22 and the transmission module 23 may communicate with each other using message queues. The video processing unit 3 (4) includes a monitoring module (Monitor module) 31 (41), a video processing module (VP module) 32 (42), and a transmission module (Offer module) 33 (43); preferably, the monitoring module 31 (41), the video processing module 32 (42) and the transmission module 33 (43) may communicate with each other using message queues. The audio processing unit 5 (6) includes a monitoring module (Monitor module) 51 (61), an audio processing module (AP module) 52 (62), and a transmission module (Offer module) 53 (63); preferably, the monitoring module 51 (61), the audio processing module 52 (62) and the transmission module 53 (63) may communicate with each other using message queues. The output processing unit 7 includes a monitoring module (Monitor module) 71, an output processing module (Egress module) 72, and a transmission module (Offer module) 73; preferably, the monitoring module 71, the output processing module 72 and the transmission module 73 may communicate with each other using message queues.
The monitoring module 8 shares memory with the scheduling module 1 and thus obtains all information in the scheduling module 1. The monitoring module 8 sends the acquired information to the client module 9, which displays it to the user in a graphical interface. The scheduling module 1, the input processing unit 2, and the monitoring module (Watchdog module) 8 may share one physical machine (server). Each video processing unit 3 or 4 may use one physical machine (server) separately, namely: the monitoring module 31, the video processing module 32 and the transmission module 33 may share one physical machine (server), and the monitoring module 41, the video processing module 42 and the transmission module 43 may share one physical machine (server), wherein each physical machine (server) may run only one video processing process (VP process) at a time. Each audio processing unit 5 or 6 may likewise use one physical machine (server) separately, namely: the monitoring module 51, the audio processing module 52 and the transmission module 53 may share one physical machine (server), and the monitoring module 61, the audio processing module 62 and the transmission module 63 may share one physical machine (server), wherein each physical machine (server) may run only one audio processing process (AP process) at a time. The output processing unit 7 may use one physical machine (server) alone, that is: the monitoring module 71, the output processing module 72, and the transmission module 73 may share one physical machine (server).
The scheduling module 1 is communicatively connected to the monitoring module 21 in the input processing unit 2, the monitoring modules (31 and 41) in the video processing units (3 and 4), the monitoring modules (51 and 61) in the audio processing units (5 and 6), and the monitoring module 71 in the output processing unit 7, respectively, and coordinates the operation of the input processing unit 2, the video processing units 3 and 4, the audio processing units 5 and 6, the output processing unit 7, and the other parts of the whole system. The transmission module 23 in the input processing unit 2 is communicatively connected to the transmission modules (33 and 43) in the respective video processing units (3 and 4) and the transmission modules (53 and 63) in the respective audio processing units (5 and 6), respectively, and transmits the corresponding information and data to those transmission modules.
The transmission module 73 in the output processing unit 7 is communicatively connected to the transmission modules (33 and 43) in the respective video processing units (3 and 4) and the transmission modules (53 and 63) in the respective audio processing units (5 and 6), respectively, and receives the corresponding information and data from those transmission modules.
Fig. 2 is a block diagram of the input processing module 22. As shown in fig. 2, the input processing module 22 includes a decapsulation module 221, an input data processing module 222, and a data storage module 223: the decapsulation module 221 decapsulates the source video file received by the input processing unit 2, the input data processing module 222 processes the audio/video file, and the data storage module 223 stores the audio/video file and related information. The decapsulation module 221 includes an audio/video file format determination unit 2211, a decapsulation selection unit 2212, and a plurality of decapsulation units 2213, 2214, 2215, and so on. The decapsulation units (2213, 2214, 2215, ...) support different formats, and the decapsulation selection unit 2212 selects the corresponding decapsulation unit among them according to the determination result of the audio/video file format determination unit 2211, so that the decapsulation module 221 can decapsulate source video files in different file formats. The input data processing module 222 has a segmentation module 2221 and an allocation module 2222. The segmentation module 2221 splits the decapsulated audio/video file into a plurality of audio data segments and video data segments and numbers the audio data segments and the video data segments in sequence, and the allocation module 2222 determines the audio processing unit or video processing unit corresponding to each segment by taking the segment's sequence number modulo the number of audio processing units or video processing units, respectively. The input processing module 22 can also acquire the encapsulation format information of the source video file and transmit this information to the output processing unit 7 through the transmission modules 23, 33, 43, 53, and 63.
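The format-driven selection performed by units 2211 and 2212 can be sketched as a simple dispatch table. The container format names, the suffix-based format check, and the handler functions below are illustrative assumptions; real decapsulation units would wrap actual demuxers.

```python
def decapsulate_mp4(path):
    return ("mp4", path)  # stand-in for a real MP4 demuxer

def decapsulate_ts(path):
    return ("ts", path)   # stand-in for a real MPEG-TS demuxer

# One entry per supported container format (units 2213, 2214, ...).
DECAPSULATION_UNITS = {"mp4": decapsulate_mp4, "ts": decapsulate_ts}

def determine_format(filename):
    # A trivial format determination based on the file suffix (unit 2211).
    return filename.rsplit(".", 1)[-1].lower()

def select_decapsulation_unit(filename):
    # The selection unit (2212) picks the matching decapsulation unit.
    fmt = determine_format(filename)
    if fmt not in DECAPSULATION_UNITS:
        raise ValueError("unsupported container format: " + fmt)
    return DECAPSULATION_UNITS[fmt]
```

The same table-driven shape applies on the output side, where the encapsulation format selection unit 7211 picks among encapsulation units by the received encapsulation format information.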
Fig. 3 is a block diagram of the output processing module 72. As shown in fig. 3, the output processing module 72 includes a packaging module 721, an output data processing module 722, and a storage module 723. The storage module 723 stores the audio files and the video files processed by the video processing units 3 and 4 and the audio processing units 5 and 6, and the output data processing module 722 processes the audio files and the video files processed by the video processing units 3 and 4 and the audio processing units 5 and 6 and transmits the audio files and the video files processed by the output data processing module 722 to the encapsulation module 721.
The encapsulation module 721 includes an encapsulation format selection unit 7211 and a plurality of encapsulation units 7212, 7213, 7214, and so on, with different encapsulation formats. The encapsulation format selection unit 7211 selects the corresponding encapsulation unit from the plurality of encapsulation units (7212, 7213, 7214, ...) according to the encapsulation format information received by the transmission module 73, so that the encapsulation module 721 can encapsulate according to the requirements of different encapsulation formats.
FIG. 4 is a process flow diagram of the distributed processing system of the present invention. The process flow of the distributed processing system is described below in conjunction with fig. 4. The operator turns on each physical machine (server) and starts the distributed processing system (step S1). The scheduling module 1 reads the system files (step S2). Preferably, in step S2, the scheduling module 1 reads the system configuration file (e.g., tvmccd.cfg) in the specified directory, obtains configuration items such as <Input> (input), <Server> (server), and <Port> (port) of the process, and simultaneously reads the corresponding files (e.g., the tvmccd.par file and logo files) in the directory (e.g., opt/tvmccd/ingress/Dispatcher/Input) specified by the <Input> item. A mapping table file (Map file, e.g., tvmccd.map) pairing each source video file name with a file ID is generated from the contents of the corresponding file (e.g., tvmccd.par). Fig. 5 is a schematic diagram illustrating an example of a Map file of the distributed processing system according to the present invention; the Map file can be used to query the correspondence between source video file names and file IDs. There may be a plurality of logo files or no logo file. The system configuration file records the configuration information of the whole distributed system, such as: the IP address of the computer where each module is located, the port numbers used, the location of the working directory, the number of video processing and audio processing modules, the task allocation method, and so on. The PAR file records the file name of each file to be processed, the processed file name, and parameters of the processed audio/video file such as encoding format, bit rate, frame rate, and resolution.
The LOGO file records the watermarks, subtitles and other content that the user needs to add to the video, together with parameters such as their size and position. Like the scheduling module 1, the other modules in the system also read the system configuration file from their respective designated directories and obtain information such as the communication addresses of the modules in the system from the configuration file, thereby establishing connections with the other modules they need to communicate with. The scheduling module 1 establishes communication connections with the monitoring module 21 of the input processing unit 2, the monitoring modules (31 and 41) of the video processing units (3 and 4), the monitoring modules (51 and 61) of the audio processing units (5 and 6), and the monitoring module 71 of the output processing unit 7, respectively (step S3). Preferably, in step S3, the scheduling module 1 starts Socket listening (the listening port number Dispatcher_Port is given in the system configuration file, e.g., tvmccd.cfg). Each processing unit sends a Socket connection request to the scheduling module 1 according to the address in the system configuration file; once the scheduling module 1 detects the connection request of a monitoring module, it accepts the request, establishes the Socket link, and sends the corresponding files to the corresponding monitoring module (monitoring module 21, 31, 41, 51, 61 or 71). For example, the tvmccd.par and tvmccd.map files are sent to each monitoring module, and the logo files are sent to the monitoring modules (31 and 41) of the respective video processing units (3 and 4).
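The tvmccd.map table distributed above can be sketched as a simple lookup built from a two-column text file. This assumes each line pairs a source video file name with its file ID separated by whitespace; the exact on-disk layout shown in Fig. 5 may differ.

```python
def parse_map_file(lines):
    """Build a source-file-name -> file-ID lookup from Map file lines."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        name, file_id = line.split()
        mapping[name] = file_id
    return mapping
```

Each monitoring module holding such a table can then resolve the file IDs carried in segment names (e.g., "F01") back to the original source file name.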
The transmission module 23 of the above-mentioned input processing unit 2 establishes communication connections with the transmission modules (33 and 43) of the respective video processing units (3 and 4) and the transmission modules (53 and 63) of the respective audio processing units (5 and 6), while the transmission module 73 of the above-mentioned output processing unit 7 establishes communication connections with the transmission modules (33 and 43) of the respective video processing units (3 and 4) and the transmission modules (53 and 63) of the respective audio processing units (5 and 6) (step S4). The input processing unit 2 described above starts processing for the source video file (step S5).
Fig. 6 is a flowchart of the processing of step S5. As shown in fig. 6, the input processing module 22 of the input processing unit 2 transmits information such as the start time and the name of the processing procedure of the source video file to the monitoring module 21 (step S51). The input processing module 22 reads the system configuration file (e.g., the tvmccd.cfg file) in the specified directory, and obtains the corresponding configuration items such as <Source> (source item), <Processed> (processed item), <Failed> (error item), <Output> (output item), and <Send> (sending item) (step S52). The input processing module 22 sequentially reads source video files from the <Source> directory (step S53), and the decapsulation module 221 of the input processing module 22 sequentially decapsulates the source video files (step S54). If a source video file is successfully decapsulated (step S55: Yes), the corresponding audio and video frame sequences are obtained, the successfully decapsulated source video file is moved from the <Source> directory to the <Processed> directory, and the decapsulated audio and video frame sequences are stored in the data storage module 223 (step S56). The input data processing module 222 of the input processing module 22 processes the audio and video frame sequences in the data storage module 223 to acquire their relevant information (e.g., encoding information), and writes the acquired information into an information file (info file) (step S57). Fig. 7 is a schematic diagram of the format of the information file. As shown in fig. 7, the information file contains the source file ID, the file length, the encoding type, and the like. In this embodiment, the video information file is named in the format VI-fileID.info and further includes information such as the video bit rate, video height, and video width; the audio information file is named in the format AI-fileID.info and further includes information such as the audio bit rate, sampling rate, and number of channels. In step S57, while the info file is being written it is written to the <Output> directory, and once the writing is complete and confirmed error-free, the info file is moved to the <Send> directory. The input data processing module 222 of the input processing module 22 divides the video frame sequence into a plurality of video data segments (video files) in units of GOPs (groups of pictures) and the audio frame sequence into a plurality of audio data segments (audio files) in units of GOAs (i.e., the audio data units corresponding to the GOPs) (step S58). For example, in an MPEG file each GOP begins with an I-frame, which may be followed by P-frames and B-frames to form the GOP. Since only I-frames carry the data of complete pictures, while P-frames and B-frames carry only the data that changes relative to other frames, keeping each complete GOP in one file ensures that P-frames and B-frames are never separated from the I-frame they depend on, which would otherwise make complete picture data unrecoverable. Fig. 8 is a schematic diagram of the format of a GOP or GOA file. As shown in fig. 8, each divided GOP file and GOA file includes the source file ID, the sequence number of the GOP or GOA file among all the GOP or GOA files of the source file, the number of frames included, and other information, followed by a frame data portion. The input data processing module 222 names the divided GOP and GOA files according to a fixed rule: the file name includes the file ID of the corresponding source file, the sequence number of the file among all GOP or GOA files of the source file, and the file type (GOP file or GOA file) (step S59).
For example, a file may be named F01-1.GOP, where "F01" is the file ID of the source file, "-1" indicates that the file is number 1 among the divided files, and the suffix ".GOP" indicates that it is a GOP file; from this file name it can be determined that the file is the 1st GOP file of the source file whose file ID is F01. The input data processing module 222 allocates the video files and audio files according to a fixed allocation rule: the input data processing module 222 queries the number of audio and video processing units in the system configuration file and takes the sequence number of each GOP or GOA file modulo the number of video or audio processing units, thereby determining to which video or audio processing unit the GOP or GOA file is allocated for processing (step S510). For example, suppose there are 9 GOP files with sequence numbers 1-9 and currently 2 video processing units 3 and 4, as shown in FIG. 1; the input data processing module 222 takes the sequence number of each GOP file modulo 2 (the number of video processing units) in turn, and a file whose result is 1 is allocated to video processing unit 3 while a file whose result is 0 is allocated to video processing unit 4. Thus, files 1, 3, 5, 7, 9 are assigned to video processing unit 3, and files 2, 4, 6, 8 are assigned to video processing unit 4. If there were instead 4 video processing units VP1-VP4, a modulo-4 operation would be performed on the sequence number of each GOP file, with a result of 1 assigned to VP1, 2 to VP2, 3 to VP3, and 0 to VP4; thus, files 1, 5, 9 would be assigned to VP1, files 2, 6 to VP2, files 3, 7 to VP3, and files 4, 8 to VP4. Likewise, each GOA file is assigned to an audio processing unit according to the same rule.
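The naming and allocation rules above can be sketched in a few lines. The function names are illustrative; gop_name follows the "F01-1.GOP" convention described above, and assign_unit applies the modulo rule with a remainder of 0 mapped to the last processing unit.

```python
def gop_name(file_id, seq):
    """Compose a GOP file name such as 'F01-1.GOP'."""
    return "%s-%d.GOP" % (file_id, seq)

def assign_unit(seq, num_units):
    """Map a segment sequence number to a 1-based processing-unit index."""
    remainder = seq % num_units
    return remainder if remainder != 0 else num_units
```

With 2 units, sequence numbers 1-9 split exactly as in the worked example above (odd numbers to unit 1, even to unit 2); with 4 units, sequence number 4 yields remainder 0 and is therefore routed to the fourth unit.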
After the allocation of each GOP or GOA file is completed, the input processing module 22 sends the allocation result to the transmission module 23 of the input processing unit 2 in a message. The input data processing module 222 of the input processing module 22 writes the generated GOP and GOA files into the <Output> directory, moves each GOP or GOA file to the <Send> directory once it has been written to disk and no error is found, and moves the source file from the <Source> directory to the <Processed> directory after the processing is completed (step S511); it then writes the total numbers of the GOP and GOA files into a total number file (total file) (step S512). In step S512, while the total file is being written it is written into the <Output> directory, and once the writing is complete and confirmed error-free, it is moved to the <Send> directory. The transmission module 23 of the input processing unit 2 receives the allocation result for each GOP and GOA file sent by the input processing module 22, and sends the GOP and GOA files together with the total file to the corresponding audio/video processing units according to that result (step S513). In step S514, the file names (including the file names of the source file and of the GOP and GOA files), the file ID, the number of video frames, the number of audio frames, the video encoding format, and the audio encoding format are sent to the monitoring module 21. If a source video file fails to be decapsulated (step S55: No), the input processing unit 2 moves the source video file from the <Source> directory to the <Failed> directory (step S515) and simultaneously transmits the ID of the failed source video file to the monitoring module 21 (step S516). The monitoring module 8 sends the error message received by the monitoring module 21 to the client module 9.
The user may then input a command through the client module 9 to either have the source video file processed again in a new round, or abandon further processing of that source video file.
Returning to fig. 4, in step S6, the monitoring module 21 of the input processing unit 2 transmits the file name, the file ID, the number of video frames, the number of audio frames, the video encoding format, the audio encoding format, and other relevant information to the scheduling module 1, and the transmission module 23 of the input processing unit 2 transmits data such as the info files, the GOP and GOA files, and the total file in the <Send> directory to the corresponding video processing units 3 (4) and audio processing units 5 (6).
In step S7, the scheduling module 1 sends the relevant information such as the video frame number, the audio frame number, the video encoding format, and the audio encoding format to the corresponding video processing units 3 and 4, the audio processing units 5 and 6, and the output processing unit 7.
In step S8, the video processing module 32 (42) of the video processing unit 3 (4) performs the corresponding processing on the GOP files (video files) according to the information about the number of video frames and the video encoding format received by the monitoring module 31 (41) and the video files and info files received by the transmission module 33 (43); the audio processing unit 5 (6) performs the corresponding processing on the GOA files (audio files) according to the information about the number of audio frames and the audio encoding format received by the monitoring module 51 (61) and the audio files and info files received by the transmission module 53 (63); and the processed audio/video files are sent to the output processing unit 7.
Fig. 9 is a flowchart of the video file processing in step S8. In the present embodiment, the video processing unit 3 and the video processing unit 4 have the same video file processing flow, so the flow is described only for the video processing unit 3. As shown in FIG. 9, the video processing module 32 of the video processing unit 3 transmits the start time of the video file processing procedure and the ID information of the received video files (for example, the files F01-1.gop, F01-3.gop, F01-5.gop, etc. and F02-1.gop, F02-3.gop, etc. are received) to the monitoring module 31 (step S811). The monitoring module 31 can feed the start time and the video file ID information back to the scheduling module 1, so that the system can monitor the processing status of the video processing unit 3. The video processing module 32 reads the system configuration file (tvmccd.cfg) in the specified directory, acquires the configuration items of the processing process of the video processing unit 3 such as <Source> (source item), <Output> (output item), and <Send> (sending item) (step S812), and reads the related information (info files) and the video files (gop files) in the <Source> directory (step S813). If the <Source> directory contains a total file, the video processing module 32 moves the total file to the <Send> directory. Then, the video processing module 32 selects the corresponding decoder to decode the video files according to the video encoding format information received by the monitoring module 31 (step S814). If the decoding is successful (step S815: Yes), the video processing module 32 performs predetermined processing on the decoded video data (step S816). The predetermined processing may be a customized adjustment of the video frame rate, the addition of scrolling information to the video, the merging of different audio/video files, and so on.
The video processing module 32 encodes the processed video data according to the parameter requirements for the processed file that the monitoring module 31 received from the scheduling module 1, obtains the processed video file (gop file), writes the video file (gop file) being output to the <Output> directory, moves the file to the <Send> directory after the disk write is finished and verified correct, and the file is then transmitted to the output processing unit 7 by the transmission module 33 (step S817).
If the decoding fails (step S815: No) and the number of decoding attempts n for the video file does not exceed the predetermined threshold a (step S818: No), the number of decoding attempts n is incremented by 1 (step S819) and the flow returns to step S814 for re-decoding; if the number of decoding attempts n exceeds the predetermined threshold a (step S818: Yes), the ID information of the video file whose decoding failed is transmitted to the monitoring module 31 (step S8110). The monitoring module 31 feeds the ID information of the failed video file back to the scheduling module 1, and the scheduling module 1 may either have the input processing unit 2 send the failed video file to the video processing unit 3 again, or have the input processing unit 2 add the failed video file to the remaining video files not yet sent by the transmission module 23, renumber them, and then allocate the failed video file to the corresponding video processing unit for processing according to the aforementioned allocation rule. In addition, the scheduling module 1 may also choose to ignore the decoding error and output the error information to the user.
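The retry loop of steps S814-S8110 (and the matching audio flow of steps S824-S8210) can be sketched as follows. decode_fn and report_fn are illustrative stand-ins for the decoder and the monitoring module; the convention that a failed attempt raises ValueError is an assumption of this sketch.

```python
def decode_with_retry(decode_fn, report_fn, threshold):
    """Retry decoding while the attempt count stays within the threshold.

    Returns the decoded data on success, or None after reporting the
    failure once the number of failed attempts exceeds the threshold.
    """
    attempts = 0
    while attempts <= threshold:
        try:
            return decode_fn()      # success: return the decoded data
        except ValueError:
            attempts += 1           # decode error: count it and retry
    report_fn("decode failed after %d attempts" % attempts)
    return None
```

On failure the report goes up through the monitoring module to the scheduling module, which then chooses between resending, reallocating, or ignoring the segment, as described above.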
Fig. 10 is a flowchart of the audio file processing in step S8. In the present embodiment, the audio processing unit 5 and the audio processing unit 6 have the same audio file processing flow, so the flow is described only for the audio processing unit 5. As shown in FIG. 10, the audio processing module 52 of the audio processing unit 5 transmits the start time of the audio file processing process and the ID information of the received audio files (e.g., the files F01-1.goa, F01-3.goa, F01-5.goa, etc. and F02-1.goa, F02-3.goa, etc. are received) to the monitoring module 51 (step S821). The monitoring module 51 can feed the start time and the audio file ID information back to the scheduling module 1, so that the system can monitor the processing status of the audio processing unit 5. The audio processing module 52 reads the system configuration file (tvmccd.cfg) in the specified directory, acquires the configuration items of the processing process of the audio processing unit 5 such as <Source> (source item), <Output> (output item), and <Send> (sending item) (step S822), and reads the related information (info files) and the audio files (goa files) in the <Source> directory (step S823). If the <Source> directory contains a total file, the audio processing module 52 moves the total file to the <Send> directory. Then, the audio processing module 52 selects the corresponding decoder to decode the audio files according to the audio encoding format information that the monitoring module 51 received from the scheduling module 1 (step S824). If the decoding is successful (step S825: Yes), the audio processing module 52 performs predetermined processing on the decoded audio data (step S826). The predetermined processing may be automatic volume adjustment, customized volume adjustment, or channel processing.
The audio processing module 52 encodes the processed audio data according to the parameter requirements for the processed file that the monitoring module 51 received from the scheduling module 1, obtains the processed audio file (goa file), writes the audio file (goa file) being output into the <Output> directory, moves the file to the <Send> directory after the disk write is finished and verified correct, and the file is then transmitted to the output processing unit 7 by the transmission module 53 (step S827).
If the decoding fails (step S825: No) and the number of decoding attempts m for the audio file does not exceed the predetermined threshold b (step S828: No), the number of decoding attempts m is incremented by 1 (step S829) and the flow returns to step S824 for re-decoding; if the number of decoding attempts m exceeds the predetermined threshold b (step S828: Yes), the ID information of the audio file whose decoding failed is transmitted to the monitoring module 51 (step S8210). The monitoring module 51 feeds the ID information of the failed audio file back to the scheduling module 1, and the scheduling module 1 may either have the input processing unit 2 send the failed audio file to the audio processing unit 5 again, or have the input processing unit 2 add the failed audio file to the remaining audio files not yet sent by the transmission module 23, renumber them, and then allocate the failed audio file to the corresponding audio processing unit for processing according to the aforementioned allocation rule. In addition, the scheduling module 1 may also choose to ignore the decoding error and output the error information to the user.
Returning to Fig. 4, in step S9, the output processing unit 7 processes the video files from the video processing unit 3 (4) and the audio files from the audio processing unit 5 (6) according to the relevant information received by the monitoring module 71 from the scheduling module 1, so as to obtain a processed audio/video file.
Fig. 11 is a flowchart of the processing of the output processing unit 7 in step S9. As shown in Fig. 11, the output processing module 72 of the output processing unit 7 transmits the start time of the output processing unit 7 and the name of the processing process to the monitoring module 71 of the output processing unit 7 (step S91). The output processing module 72 reads the system configuration file (e.g., tvmccd.cfg) in the specified directory, acquires configuration items of the processing process such as <Source>, <Output> and <Finished> (completion item) (step S92), monitors the write_close event (write close event) of the <Source> item directory, acquires the file name of the written-and-closed file when a write_close event arrives, and records the time of the latest write_close event (step S93). In this way, files newly received in the <Source> entry can be monitored. When the total file, sent from the transmission module 23 to the transmission module 73 via the transmission modules 33, 43, 53 and/or 63, is received, the total-number information on the audio files and on the video files recorded in it, such as the total-number information on the GOP files and on the GOA files, is read, while the output data processing module 722 obtains the audio/video files, i.e., the GOP and GOA files (step S94). Based on the total-number information of the GOP and GOA files in the total file, the output data processing module 722 generates a receiving mapping table of the GOP and GOA files for each source file (step S95). For the structure of the receiving mapping table, refer to Fig. 12; the received total-number information of the audio files and the video files and the generated receiving mapping table may be stored in the storage module 723 of the output processing module 72.
Fig. 12 is a diagram illustrating the structure of a receiving mapping table. As can be seen from Fig. 12, the file ID of the source file and all GOP and GOA files of the source file are recorded in the receiving mapping table, and a status flag is reserved for each GOP and GOA file. The initial status flag is 0, indicating that the GOP or GOA file has not yet been received. The output data processing module 722 scans the received GOP or GOA files and sets the status flag of each in the receiving mapping table to 1 in turn, indicating that the file has been received (step S96). For example, the output data processing module 722 receives a total file with file ID F01, which shows that the file F01 has 9 GOP files F01-1.GOP to F01-9.GOP and 9 GOA files F01-1.GOA to F01-9.GOA. Accordingly, the output data processing module 722 establishes a receiving mapping table as shown in Fig. 12 for the source file with file ID F01. Then, the output data processing module 722 scans the GOP and GOA files already received that belong to the source file. According to the naming convention used when the input processing unit divides the file, the output data processing module 722 determines from a .gop or .goa file name beginning with F01 that the file belongs to the source file F01, and that F01-1.GOP is the first GOP file of the source file. In this way, all received GOP and GOA files of the source file F01 can be marked as received in the receiving mapping table. Since the write_close event of the <Source> entry directory is monitored, the output data processing module 722 can also monitor newly received GOP and GOA files and mark them as received in the receiving mapping table. The receiving mapping table further records the numbers of received GOP files and received GOA files; every time a GOP or GOA file is received, the corresponding count is incremented by 1 (step S97).
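The receiving mapping table of Fig. 12 can be sketched as a simple in-memory structure. This is a minimal sketch assuming a dict-based layout; the function names and field names are illustrative, not from the patent.

```python
def make_mapping_table(file_id, gop_total, goa_total):
    """Build a receiving mapping table for one source file (cf. Fig. 12, step S95)."""
    return {
        "file_id": file_id,
        # status flag 0 = not received, 1 = received
        "gop": {f"{file_id}-{i}.gop": 0 for i in range(1, gop_total + 1)},
        "goa": {f"{file_id}-{i}.goa": 0 for i in range(1, goa_total + 1)},
        "gop_received": 0,   # running counts updated in step S97
        "goa_received": 0,
    }

def mark_received(table, filename):
    """Mark a newly arrived GOP/GOA file as received and update the count."""
    kind = "gop" if filename.endswith(".gop") else "goa"
    if table[kind].get(filename) == 0:      # ignore duplicates
        table[kind][filename] = 1           # step S96: set status flag to 1
        table[kind + "_received"] += 1      # step S97: increment the count
```

The per-kind counts allow the cheap comparison of step S98 before the full flag scan of step S99.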
The output data processing module 722 compares the numbers of GOP and GOA files recorded in the receiving mapping table with the numbers recorded in the total file (step S98). When the numbers coincide (step S98: YES), the output data processing module 722 scans the receiving mapping table to check whether the status flag of every GOP and GOA file is marked as received (step S99). When the counts differ from the total-file information, some GOP or GOA file has not yet been received.
If the output data processing module 722 determines that all the GOP and GOA files of the source video file have been received (step S99: YES), the encapsulation process is performed by the encapsulation module 721 of the output processing module 72 (step S910). If the encapsulation succeeds (step S911: YES), the encapsulation module 721 outputs the successfully encapsulated new audio/video file to the transmission module 73 of the output processing unit 7 (step S912). If the encapsulation fails (step S911: NO), the output data processing module 722 determines whether the number M of re-encapsulation attempts exceeds a predetermined value B (step S917). If M does not exceed B (step S917: NO), M is incremented by 1 (step S918) and the encapsulation module 721 retries. If M exceeds B (step S917: YES), the ID information of the audio/video file whose encapsulation failed is sent to the monitoring module 71 of the output processing unit 7 (step S919), and the monitoring module 71 feeds back the encapsulation error information together with that ID information to the scheduling module 1 (step S920). The scheduling module 1 decides, as required, whether to perform a new round of processing on the source video file or to abandon it.
If the output data processing module 722 determines that not all the audio and video files (GOP and GOA files) of the source video file have been received (step S99: NO), it determines whether the time N spent waiting for the missing audio and video files exceeds a predetermined threshold A (step S913). If N does not exceed A (step S913: NO), the module keeps waiting for the missing files. If N exceeds A (step S913: YES), the output data processing module 722 checks the receiving mapping table of the audio/video file, obtains the ID information of the audio files and/or video files that have not been received, and transmits it to the monitoring module 71 of the output processing unit 7 (step S914). For example, as shown in Fig. 12, after the waiting time exceeds the predetermined threshold A, the output data processing module 722 finds from the receiving mapping table that two files of tvm.mp4, F01-4.goa and F01-6.gop, have not been received, and transmits the IDs of the missing files to the monitoring module 71 of the output processing unit 7. The waiting time may be counted from when the output processing module 72 reads the total file of a source file, or from when the receiving mapping table of the source file is established. The monitoring module 71 feeds back the reception error information of the audio and video files and the ID information of the audio and/or video files that have not been received to the scheduling module 1 (step S915).
Based on the reception error information and the ID information of the missing audio files and/or video files fed back by the monitoring module 71, the scheduling module 1 may instruct the input processing unit 2 to reallocate those files to the audio processing units and/or video processing units according to the allocation rule (step S916), i.e., the sequence number of the file modulo the number of audio or video processing units. Thus, if the number of processing units has changed since the initial allocation, a file may be allocated to a different processing unit than before. For example, when F01-7.gop was first allocated there were 4 video processing units VP1-VP4 in the system, and the file was allocated to VP3. The output processing unit did not receive the file and requested the input processing unit 2 to have it reprocessed. By that time, VP2 had exited the system due to a failure, leaving 3 video processing units VP1, VP3 and VP4. The allocation rule of the input processing unit 2 therefore becomes the file sequence number modulo 3: a result of 1 is allocated to VP1, a result of 2 to VP3, and a result of 0 to VP4. Under this rule, F01-7.gop is reassigned to VP1.
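The modulo allocation rule above can be sketched as a short function. This is an illustrative sketch; the function name `allocate` and the result-to-unit mapping (1 to the first unit, ..., 0 to the last) are taken from the worked example in the text.

```python
def allocate(seq_num, units):
    """Pick the processing unit for a file by its sequence number modulo the
    current number of units: a result of 1 goes to the first unit, 2 to the
    second, ..., and 0 to the last (matching the mod-3 example in the text)."""
    r = seq_num % len(units)
    index = r - 1 if r != 0 else len(units) - 1
    return units[index]
```

With 4 units, file 7 (7 mod 4 = 3) lands on VP3; after VP2 fails and only VP1, VP3, VP4 remain, the same file (7 mod 3 = 1) is reassigned to VP1, as in the example.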
The following processing may also be adopted in step S916: the input processing unit 2 assigns a reprocessing sequence number to each GOP or GOA file to be reprocessed and allocates the files according to these numbers. For example, the input processing unit 2 receives, in order, the IDs of 3 GOP files that need to be reprocessed: F01-7.gop, F03-2.gop, F02-4.gop. Each time it receives a GOP file to be reprocessed, the allocation module 2222 of the input processing unit 2 increments the reprocessing sequence number it maintains by 1 and assigns it to the newly received file. Thus, F01-7.gop, F03-2.gop and F02-4.gop are assigned the numbers 1, 2 and 3 in that order. Assuming that there are 2 video processing units VP1-VP2 in the system at this time, the allocation module 2222 computes sequence number 1 of F01-7.gop modulo 2, giving 1, and assigns the file to VP1; sequence number 2 of F03-2.gop modulo 2 gives 0, so the file goes to VP2; sequence number 3 of F02-4.gop modulo 2 gives 1, so the file goes to VP1. The allocation module 2222 then sends this allocation information to the transmission module 23, and the transmission module 23 sends the files to the corresponding video processing units. GOA files to be reprocessed are allocated by the same method. The GOP files and GOA files that need to be reprocessed are sorted separately and have independent sequence numbers.
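The alternative reprocessing allocation above can be sketched as a small stateful allocator. This is a minimal sketch; the class name `Reallocator` is an assumption, and the result-to-unit mapping follows the worked example (1 to the first unit, 0 to the last).

```python
class Reallocator:
    """Assigns each file to be reprocessed a fresh sequence number in arrival
    order, then picks the unit by that number modulo the unit count."""

    def __init__(self, units):
        self.units = units
        self.seq = 0  # reprocessing sequence number maintained by the allocator

    def assign(self, filename):
        self.seq += 1                    # number the file in arrival order
        r = self.seq % len(self.units)
        index = r - 1 if r != 0 else len(self.units) - 1
        return self.units[index]
```

GOP and GOA files would each use their own `Reallocator` instance, matching the statement that the two kinds are sorted separately with independent sequence numbers.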
The input processing module, the video processing modules, the audio processing modules and the output processing module periodically report their working state, current task, completed tasks and the like to their respective monitoring modules through message queues, and at the same time record these contents in their respective log files. Each monitoring module periodically sends the IP address, system load, network status and process status of its processing unit, together with the information received from the processing module of that unit, to the scheduling module 1. The monitoring module 8 periodically shares this information with the scheduling module 1 and, after processing by the client module 9, displays it to the user through a graphical interface.
The scheduling module 1 maintains a configuration file using the information sent by each monitoring module. A status attribute is allocated to each physical machine in the configuration file; when a machine temporarily exits the system due to power failure or network disconnection, its status attribute is set to unavailable. When a new physical machine joins the system, the user instructs the scheduling module 1 to add the machine's information to the configuration file. After the information in the configuration file changes, the scheduling module 1 gives the configuration file a new version number and immediately sends it to each monitoring module.
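The configuration-file maintenance described above can be sketched as follows. This is an illustrative sketch assuming a dict-based configuration; the function name `set_status` and the field names are not from the patent.

```python
def set_status(config, machine, available):
    """Update one machine's status attribute and bump the configuration
    version number, which signals every monitoring module to refresh."""
    config["machines"][machine] = "available" if available else "unavailable"
    config["version"] += 1  # a new version number marks the file as changed
    return config["version"]
```

Because every change bumps the version number, a monitoring module can compare version numbers to detect a stale configuration, as described later for restarted monitoring modules.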
If the scheduling module 1 loses power or becomes unresponsive while the system is running, the client module 9 cannot acquire data from the monitoring module 8; after this situation lasts for a certain time, the client module 9 prompts the user to remove the fault through a prompt box, an alarm sound or other means. After the fault is eliminated, the scheduling module 1 reads the configuration file again to re-establish connections with each monitoring module, and each monitoring module retransmits, according to its own log, the information that was not successfully transmitted to the scheduling module 1.
If a certain monitoring module stops working, the scheduling module 1 judges that the module has failed after receiving no information from it for a certain time. The cause may be a physical failure such as a power outage or network outage, or the module's process may be unresponsive. The scheduling module 1 checks whether the connection to the physical machine of the monitoring module is normal (for example, using a ping command); if the connection is abnormal, this indicates a physical fault, and a prompt is sent to the user through the client module 9. If the connection is normal, i.e., the module's process is unresponsive, the scheduling module 1 waits for a certain time and tries again; if communication still cannot be established after a certain number of attempts, a prompt is sent to the user through the client module 9.
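The timeout-then-probe diagnosis above can be sketched in a few lines. This is a minimal illustration; the function name `diagnose`, the timeout value and the returned labels are assumptions, and the actual connectivity probe (e.g., ping) is abstracted into a boolean parameter.

```python
import time

def diagnose(last_heartbeat, timeout, ping_ok):
    """Classify a monitoring module's state: alive if it reported within the
    timeout; otherwise probe the machine to tell a physical fault (machine
    unreachable) from an unresponsive process (machine reachable)."""
    if time.time() - last_heartbeat < timeout:
        return "alive"
    return "process unresponsive" if ping_ok else "physical fault"
```

In the described system, "process unresponsive" would trigger retried communication attempts before alerting the user, while "physical fault" alerts the user immediately through the client module.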
If the physical machine where a certain processing unit is located loses power, the monitoring module of that processing unit is reinitialized to establish connections with the other modules after power is restored. If the powered-down machine hosts the input processing unit, a video processing unit or an audio processing unit, files that were already processed have been moved from the <Source> entry to the <Processed> or <Failed> directory, so after recovery the unit reads the remaining files from the <Source> entry and continues processing. If the powered-down machine hosts the output processing unit, then after initialization the output processing module scans the <Source> item directory, reads the total files under that directory, establishes the receiving mapping tables of the audio and video files, and fills them in according to the GOA and GOP files under the <Source> item directory. While scanning, it monitors newly received files, and after completing the above operations for all files in the <Source> entry it continues with the processing of S93 to S917. If the physical machine where a certain processing unit is located is disconnected from the network, then when the connection is restored the scheduling module 1 sends the latest configuration file to that machine, and the monitoring module of the unit consults its log and instructs the transmission module of the unit to transmit the files in the <Send> directory to the specified processing units.
If the process of a monitoring module is restarted after becoming unresponsive, it first re-establishes communication with the other modules after the restart, then looks up in its log the time of the last communication with each module, and requests the scheduling module 1, the corresponding processing module and the corresponding transmission module to retransmit the information sent after that time. The monitoring module sends information to the scheduling module 1 together with the version number of its configuration file, so that the scheduling module 1 will resend a new configuration file upon discovering that the configuration file in the information is outdated. The monitoring module handles the retransmitted information it receives in the normal way.
As described above, the input processing module, the video processing modules, the audio processing modules and the output processing module periodically report their operating status to their monitoring modules through message queues, so that when the process of a certain module fails to respond for more than a certain time, the corresponding monitoring module detects the abnormality; if the process cannot recover after a further waiting period, the monitoring module reminds the user to restart the module's process, or restarts it automatically. After restarting, each module only needs to read files from its own working directory to resume processing.
In the present embodiment, the physical devices are connected via a local area network, and the address of each module is the IP address and corresponding port number of its device; however, the physical devices may also be connected by other means, such as a wide area network or a high-speed bus, as long as a corresponding address representation can be provided for each module.
In addition, when the system of the invention is used to process a large number of small video files, the step of dividing the audio/video files can be omitted, and each audio processing module and each video processing module can independently process the audio or video part of one source file.
In this embodiment, the distributed system is used to process audio/video files; however, after the functions of the corresponding modules are changed, the system can also be used to process other types of data, as long as the flow of combining after block-wise processing does not affect the result of the data processing.

Claims (37)

1. A distributed audio-video file processing system comprising:
the input processing unit is used for receiving a source video file, processing the source video file to obtain video data and audio data, respectively dividing the video data and the audio data into video data segments and audio data segments in sequence, and distributing the divided video data segments and audio data segments to corresponding video data processing units/audio data processing units according to a certain distribution rule for processing;
the plurality of video data processing units are respectively used for processing the segmented video data fragments;
the plurality of audio data processing units are respectively used for processing the segmented audio data segments;
the output processing unit is used for processing and outputting the processed video data segment and the processed audio data segment;
the scheduling unit is used for coordinating the work of the input processing unit, the video data processing units, the audio data processing units and the output processing unit; wherein
The input processing unit sends the number of the video data segments and the number of the audio data segments of the source file to the output processing unit;
the input processing unit inserts the source file information of the segment, the sequence number and the file type information in all video or audio data segments of the source file into the video data segment and the audio data segment which are divided;
the output processing unit establishes a receiving mapping table of the source file according to the number of the segmented video data segments and the number of the segmented audio data segments, marks the received processed data segments as received in the receiving mapping table according to the source file information of the segments, the sequence numbers and the file type information in all the video or audio data segments of the source file, which are inserted into the segmented video data segments and audio data segments by the input processing unit, and counts the number of the received video data segments and the number of the audio data segments;
the receiving mapping table also comprises the number of the received video data fragments and the number of the audio data fragments; the output processing unit judges whether the number of the video data segments and the number of the audio data segments in the receiving mapping table are consistent with the number of the video data segments and the number of the audio data segments sent by the input processing unit, and then judges whether all the video data segments and the audio data segments in the receiving mapping table are marked as received;
when a new audio processing unit and/or a new video processing unit are added, the user instructs the scheduling unit to add the information of the unit in the configuration file, the scheduling unit updates the version number of the configuration file, and sends the updated configuration file to each unit of the system.
2. The audio-video file processing system according to claim 1, wherein:
and after the input processing unit divides the video data and the audio data into video data fragments and audio data fragments respectively according to a sequence, performing modular operation on sequence numbers of the video data fragments and the audio data fragments according to the number of the video data processing units and the audio data processing units respectively, and distributing the video data fragments and the audio data fragments to the corresponding video data processing units and audio data processing units respectively according to the modular operation.
3. The audio-video file processing system according to claim 1, wherein:
the input processing unit is provided with a first monitoring module, a first processing module and a first transmission module;
each video data processing unit is respectively provided with a second monitoring module, a second processing module and a second transmission module;
each audio data processing unit is respectively provided with a third monitoring module, a third processing module and a third transmission module;
the output processing unit is provided with a fourth monitoring module, a fourth processing module and a fourth transmission module; wherein,
the first, second, third and fourth monitoring modules are respectively in communication connection with the scheduling unit, receive related instructions from the scheduling unit, and respectively report the running state of the related processing unit to the scheduling unit; the first transmission module is respectively in communication connection with each second transmission module and each third transmission module, and sends corresponding video data segments and audio data segments to be processed to each second transmission module and each third transmission module respectively; the fourth transmission module is respectively in communication connection with each second transmission module and each third transmission module, and receives the processed video data segment and the processed audio data segment.
4. The audio-video file processing system according to claim 3, wherein:
the input processing unit unpacks the received source file to obtain a video sequence and an audio sequence, and respectively divides the video sequence and the audio sequence.
5. The audio-video file processing system according to claim 1, wherein:
and after judging that all the processed video data fragments and audio data fragments are received from each video data processing unit and each audio data processing unit, the output processing unit combines the received video data fragments and audio data fragments and packages the combined video data fragments and audio data fragments according to a preset format.
6. The audio-video file processing system according to claim 3, wherein:
the scheduling unit communicates with the first, second, third and fourth monitoring modules through address information obtained by a system configuration file.
7. The audio-video file processing system according to claim 6, wherein:
the first monitoring module, the second monitoring module, the third monitoring module and the fourth monitoring module respectively and periodically send the running state information of the corresponding processing unit to the scheduling unit;
and the scheduling unit maintains a system configuration file according to the running state information and sends the updated system configuration file to the first monitoring module, the second monitoring module, the third monitoring module and the fourth monitoring module respectively.
8. The audio-video file processing system according to claim 6, wherein:
the content of the system configuration file includes the number of the video data processing units, the physical address, the operating status, and the working directory, and the number of the audio data processing units, the physical address, the operating status, and the working directory.
9. The audio-video file processing system according to claim 6, wherein:
when an error occurs in the processing of one of the video data processing unit and the audio data processing unit, the corresponding second or third monitoring module reports the error to the scheduling unit;
the scheduling module determines to re-execute the erroneous process or to ignore the error.
10. The audio-video file processing system according to claim 9, wherein:
the input processing unit adds the video data segments and/or audio data segments with errors into the remaining video data segments and/or audio data segments which are not sent again and carries out re-sequencing, then carries out modular operation on the sequence numbers obtained by the re-sequencing according to the number of the video data processing units and/or the audio data processing units, and redistributes the video data segments and/or the audio data segments with errors to the corresponding video data processing units and/or the audio data processing units according to the results of the modular operation for processing.
11. The audio-video file processing system according to claim 1, wherein:
the input processing unit acquires the decapsulated video frame number, audio frame number and audio-video coding format of the source video file, and sends the video frame number, the audio frame number and the audio-video coding format to the corresponding video data processing unit and the audio data processing unit.
12. The audio-video file processing system according to claim 1, wherein:
the input processing unit inserts the source file information, the sequence numbers in all the video or audio data segments of the source file, and the file type information into the file names of the divided video data segments and audio data segments.
13. The audio-video file processing system according to claim 1, wherein:
when the output processing unit judges that all the video data segments and the audio data segments in the receiving mapping table are marked as received, the processed video data segments and the processed audio data segments are integrated to obtain processed audio and video files;
and when the output processing unit judges that video data segments and audio data segments are marked as not received in the receiving mapping table after a certain time, the output processing unit sends the corresponding missing information of the segmented video data segments or/and the segmented audio data segments to the scheduling unit.
14. The audio-video file processing system according to claim 13, wherein:
and when the scheduling unit receives the missing information, the scheduling unit instructs the input processing unit to perform modular operation on the sequence numbers of the missing video data segments and/or audio data segments according to the number of the current video data processing units and/or audio data processing units, reallocates each missing video data segment and/or audio data segment to the corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modular operation, and sends the video data segment and/or audio data segment which is processed again to the output processing unit.
15. The audio-video file processing system according to claim 13, wherein:
when the scheduling unit receives missing information of the video data segments, the scheduling unit instructs the input processing unit to number all the missing video data segments again in sequence, performs modular operation on the numbers according to the number of the current video data processing units, reallocates each missing video data segment to the corresponding video data processing unit for processing according to the result of the modular operation, and sends the video data segments after being processed again to the output processing unit;
and when the scheduling unit receives the missing information of the audio data segments, the scheduling unit instructs the input processing unit to number all the missing audio data segments again according to the sequence, performs modular operation on the numbers according to the number of the current audio data processing units, reallocates each missing audio data segment to the corresponding audio data processing unit according to the result of the modular operation for processing, and sends the audio data segments after being processed again to the output processing unit.
16. A distributed audio and video file processing method comprises the following steps:
an input processing step, namely receiving a source video file through an input processing unit, processing the source video file to obtain video data and audio data, respectively dividing the video data and the audio data into video data segments and audio data segments in sequence, and distributing the divided video data segments and audio data segments to corresponding video data processing units and audio data processing units according to a certain distribution rule for processing;
a video data processing step, wherein a plurality of video data processing units are respectively used for processing the segmented video data segments;
an audio data processing step, wherein a plurality of audio data processing units are respectively used for processing the segmented audio data segments;
an output processing step of processing and outputting the processed video data segment and audio data segment by an output processing unit;
a scheduling step, in which the input processing unit, the plurality of video data processing units, the plurality of audio data processing units and the output processing unit are coordinated through a scheduling unit;
in the input processing step, the division information of the number of divided video data segments and the number of divided audio data segments of the source file is sent to the output processing unit;
the input processing unit inserts the source file information of the segment, the sequence number and the file type information in all video or audio data segments of the source file into the video data segment and the audio data segment which are divided;
the output processing unit establishes a receiving mapping table of the source file according to the number of the segmented video data segments and the number of the segmented audio data segments, marks the received processed data segments as received in the receiving mapping table according to the source file information of the segments, the sequence numbers and the file type information in all the video or audio data segments of the source file, which are inserted into the segmented video data segments and audio data segments by the input processing unit, and counts the number of the received video data segments and the number of the audio data segments;
the receiving mapping table also comprises the number of the received video data fragments and the number of the audio data fragments; the output processing unit judges whether the number of the video data segments and the number of the audio data segments in the receiving mapping table are consistent with the number of the video data segments and the number of the audio data segments sent by the input processing unit, and then judges whether all the video data segments and the audio data segments in the receiving mapping table are marked as received;
when a new audio processing unit and/or video processing unit is added, the user instructs the scheduling unit to add the unit's information to the configuration file; the scheduling unit updates the version number of the configuration file and sends the updated configuration file to each unit of the system.
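The receiving-mapping-table bookkeeping described above can be sketched in a few lines of Python. This is only an illustrative sketch of the claimed logic; the class and method names are assumptions, not taken from the patent:

```python
class ReceiveMap:
    """Sketch of the output unit's receiving mapping table: built from the
    segment counts announced by the input unit, with each arriving segment
    keyed by (file type, sequence number)."""

    def __init__(self, n_video: int, n_audio: int):
        self.expected = {"video": n_video, "audio": n_audio}
        self.received = {"video": set(), "audio": set()}

    def mark_received(self, kind: str, seq: int) -> None:
        # Mark one processed segment as received in the table.
        self.received[kind].add(seq)

    def counts(self) -> dict:
        # Count of received video and audio segments kept in the table.
        return {k: len(v) for k, v in self.received.items()}

    def complete(self) -> bool:
        # First compare the counts with the announced split, then verify
        # that every individual segment is marked as received.
        return all(len(self.received[k]) == n and
                   self.received[k] == set(range(n))
                   for k, n in self.expected.items())
```

In use, the output unit would call `mark_received` for each arriving segment and start merging only once `complete()` returns true.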
17. The audio-video file processing method according to claim 16, wherein: the input processing step includes:
after the video data and the audio data are divided in sequence into video data segments and audio data segments respectively, a modulo operation is performed on the sequence numbers of the video data segments and the audio data segments according to the number of video data processing units and the number of audio data processing units respectively, and the video data segments and audio data segments are distributed to the corresponding video data processing units and audio data processing units for processing according to the results of the modulo operation.
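The modulo-based allocation rule of claim 17 can be sketched as follows; the function and variable names are illustrative assumptions, not from the patent:

```python
def assign_segments(num_segments: int, num_units: int) -> dict:
    """Distribute segment sequence numbers across processing units by
    taking each sequence number modulo the number of units."""
    assignment = {u: [] for u in range(num_units)}
    for seq in range(num_segments):
        # Segment `seq` goes to unit `seq % num_units`.
        assignment[seq % num_units].append(seq)
    return assignment

# For example, 7 segments over 3 units:
# assign_segments(7, 3) -> {0: [0, 3, 6], 1: [1, 4], 2: [2, 5]}
```

The same rule is applied independently to the video and audio segment sequences, each with its own unit count.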
18. The audio-video file processing method according to claim 16, wherein:
a decapsulation step of decapsulating the received source file to obtain a video sequence and an audio sequence;
and a segmentation step of segmenting the video sequence and the audio sequence respectively.
19. The audio-video file processing method according to claim 16, wherein: in the output processing step, after judging that all the processed video data segments and audio data segments are received from the video data processing units and the audio data processing units, the received video data segments and audio data segments are combined and packaged according to a preset format.
20. An audio-video file processing method according to any one of claims 16 to 19, wherein:
in the scheduling step, the units communicate with each other using address information acquired from a system configuration file.
21. The audio-video file processing method according to claim 20, wherein:
in the scheduling step, each processing unit periodically sends its operation state information to the scheduling unit, and the scheduling unit maintains the system configuration file according to the operation state information and sends out the updated system configuration file.
22. The audio-video file processing method according to claim 20, wherein:
the content of the system configuration file includes the number, physical addresses, operating states and working directories of the video data processing units, and the number, physical addresses, operating states and working directories of the audio data processing units.
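A system configuration file with the contents listed in claim 22 could look like the sketch below. The field names and values are assumptions for illustration; the patent only names the categories of information:

```python
# Illustrative system configuration content (field names are assumptions).
system_config = {
    "version": 1,  # version number bumped by the scheduling unit on updates
    "video_units": [
        {"address": "10.0.0.11", "status": "running", "workdir": "/work/v0"},
        {"address": "10.0.0.12", "status": "running", "workdir": "/work/v1"},
    ],
    "audio_units": [
        {"address": "10.0.0.21", "status": "running", "workdir": "/work/a0"},
    ],
}

# The unit counts required by claim 22 follow from the list lengths,
# and are also the divisors for the modulo-based segment allocation.
num_video_units = len(system_config["video_units"])
num_audio_units = len(system_config["audio_units"])
```

Keeping the counts derivable from the unit lists means that adding a server entry automatically changes the allocation divisor once the updated file is distributed.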
23. The audio-video file processing method according to claim 20, wherein:
in the video data processing step, when an error occurs in processing by one of the video data processing units, or in the audio data processing step, when an error occurs in processing by one of the audio data processing units, the error is reported to the scheduling unit;
the scheduling unit determines whether to re-execute the erroneous process or to ignore the error.
24. The audio-video file processing method according to claim 23, wherein:
the input processing unit adds the erroneous video data segments and/or audio data segments to the remaining video data segments and/or audio data segments that have not yet been sent and re-sequences them, then performs a modulo operation on the new sequence numbers according to the number of video data processing units and/or audio data processing units, and redistributes the erroneous video data segments and/or audio data segments to the corresponding video data processing units and/or audio data processing units for processing according to the results of the modulo operation.
25. The audio-video file processing method according to claim 16, wherein:
in the input processing step, the video frame count, the audio frame count and the audio and video coding formats of the decapsulated source video file are obtained and sent to the corresponding video data processing units and audio data processing units.
26. The audio-video file processing method according to claim 16, wherein: when the output processing unit judges that all the video data segments and audio data segments in the receiving mapping table are marked as received, the processed video data segments and audio data segments are integrated to obtain the processed audio and video file;
and when, after a predetermined time, the output processing unit judges that some video data segments and/or audio data segments in the receiving mapping table are still marked as not received, the output processing unit sends missing information for the corresponding divided video data segments and/or audio data segments to the scheduling unit.
27. The audio-video file processing method according to claim 26, wherein:
and when the scheduling unit receives the missing information, the scheduling unit instructs the input processing unit to perform a modulo operation on the sequence numbers of the missing video data segments and/or audio data segments according to the current number of video data processing units and/or audio data processing units, to reallocate each missing video data segment and/or audio data segment to the corresponding video data processing unit and/or audio data processing unit for processing according to the results of the modulo operation, and to send the reprocessed video data segments and/or audio data segments to the output processing unit.
28. The audio-video file processing method according to claim 26, wherein:
when the scheduling unit receives missing information for video data segments, the scheduling unit instructs the input processing unit to renumber all the missing video data segments in sequence, perform a modulo operation on the new numbers according to the current number of video data processing units, reallocate each missing video data segment to the corresponding video data processing unit for processing according to the results of the modulo operation, and send the reprocessed video data segments to the output processing unit;
and when the scheduling unit receives missing information for audio data segments, the scheduling unit instructs the input processing unit to renumber all the missing audio data segments in sequence, perform a modulo operation on the new numbers according to the current number of audio data processing units, reallocate each missing audio data segment to the corresponding audio data processing unit for processing according to the results of the modulo operation, and send the reprocessed audio data segments to the output processing unit.
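The recovery rule of claim 28 — renumber the missing segments in order, then reassign them modulo the current unit count — can be sketched as follows (illustrative names, not from the patent):

```python
def reassign_missing(missing_seqs: list, num_units: int) -> dict:
    """Renumber the missing segments in sequence, then assign each
    segment by its new number modulo the current number of units.
    Returns a plan mapping unit index -> original sequence numbers."""
    plan = {u: [] for u in range(num_units)}
    # New numbering: 0, 1, 2, ... over the missing segments in order.
    for new_num, original_seq in enumerate(sorted(missing_seqs)):
        plan[new_num % num_units].append(original_seq)
    return plan

# E.g. segments 3, 7 and 9 missing, 2 units currently available:
# reassign_missing([9, 3, 7], 2) -> {0: [3, 9], 1: [7]}
```

Renumbering before the modulo step matters: it spreads an arbitrary set of missing segments evenly over however many units are currently available, rather than over the original unit count.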
29. A distributed audio and video file processing apparatus comprising:
a plurality of first servers, each used for processing video data segments;
a plurality of second servers, each used for processing audio data segments;
a third server, used for combining and outputting the processed video data segments and audio data segments;
the fourth server is used for receiving a source video file, processing the source video file to obtain video data and audio data, sequentially dividing the video data and the audio data into video data segments and audio data segments respectively, distributing the divided video data segments and audio data segments to the corresponding first server and the second server according to a certain distribution rule for processing, and coordinating the work among the first server, the second server and the third server; wherein,
the fourth server sends the dividing information of the number of the divided video data segments and the number of the divided audio data segments of the source file to the third server;
the fourth server inserts, into the file names of the divided video data segments and audio data segments, the source file information of the segment, its sequence number among all video or audio data segments of the source file, and its file type information;
the third server establishes a receiving mapping table for the source file according to the number of divided video data segments and the number of divided audio data segments; according to the source file information, sequence number and file type information inserted into each divided video data segment and audio data segment by the fourth server, it marks each received processed data segment as received in the receiving mapping table, and counts the number of received video data segments and the number of received audio data segments;
the receiving mapping table further comprises the number of received video data segments and the number of received audio data segments; after judging that the number of video data segments and the number of audio data segments in the receiving mapping table are consistent with the numbers sent by the fourth server, the third server judges whether all the video data segments and audio data segments in the receiving mapping table are marked as received;
when a new first server and/or second server is added, the user instructs the fourth server to add the server's information to the configuration file; the fourth server updates the version number of the configuration file and sends the updated configuration file to each server of the system.
30. The apparatus of claim 29, wherein:
after the fourth server divides the video data and the audio data in sequence into video data segments and audio data segments, a modulo operation is performed on the sequence numbers of the video data segments and the audio data segments according to the number of first servers and the number of second servers respectively, and the video data segments and audio data segments are allocated to the corresponding first servers and second servers for processing according to the results of the modulo operation.
31. The apparatus of claim 29, wherein:
each first server only runs one video data processing process;
each second server runs only one audio data processing process.
32. The apparatus of claim 29, wherein:
and the fourth server decapsulates the received source file to obtain a video sequence and an audio sequence, and respectively segments the video sequence and the audio sequence.
33. The apparatus of claim 29, wherein: after judging that all the processed video data segments and audio data segments have been received from the first servers and the second servers, the third server combines the received video data segments and audio data segments and encapsulates them according to a preset format.
34. The apparatus of claim 29, wherein:
the fourth server obtains the video frame count, the audio frame count and the audio and video coding formats of the decapsulated source video file, and sends them to the corresponding first servers and second servers.
35. The apparatus of claim 29, wherein:
when the third server judges that all the video data segments and audio data segments in the receiving mapping table are marked as received, the processed video data segments and audio data segments are integrated to obtain the processed audio and video file;
and when, after a predetermined time, the third server judges that some video data segments and/or audio data segments in the receiving mapping table are still marked as not received, the third server sends missing information for the corresponding divided video data segments and/or audio data segments to the fourth server.
36. The apparatus of claim 35, wherein:
and when the fourth server receives the missing information, it performs a modulo operation on the sequence numbers of the missing video data segments and/or audio data segments according to the current number of first servers and/or second servers, reallocates each missing video data segment and/or audio data segment to the corresponding first server and/or second server for processing according to the results of the modulo operation, and sends the reprocessed video data segments and/or audio data segments to the third server.
37. The apparatus of claim 35, wherein:
when the fourth server receives missing information for video data segments, it renumbers all the missing video data segments in sequence, performs a modulo operation on the new numbers according to the current number of first servers, reallocates each missing video data segment to the corresponding first server for processing according to the results of the modulo operation, and sends the reprocessed video data segments to the third server;
and when the fourth server receives missing information for audio data segments, it renumbers all the missing audio data segments in sequence, performs a modulo operation on the new numbers according to the current number of second servers, reallocates each missing audio data segment to the corresponding second server for processing according to the results of the modulo operation, and sends the reprocessed audio data segments to the third server.
CN201310558626.XA 2013-11-12 2013-11-12 A distributed audio and video processing apparatus and processing method Expired - Fee-Related CN103605710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310558626.XA CN103605710B (en) 2013-11-12 2013-11-12 A distributed audio and video processing apparatus and processing method

Publications (2)

Publication Number Publication Date
CN103605710A CN103605710A (en) 2014-02-26
CN103605710B true CN103605710B (en) 2017-10-03

Family

ID=50123933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310558626.XA Expired - Fee-Related CN103605710B (en) 2013-11-12 2013-11-12 A distributed audio and video processing apparatus and processing method

Country Status (1)

Country Link
CN (1) CN103605710B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838878B * 2014-03-27 2017-03-01 无锡天脉聚源传媒科技有限公司 A distributed audio and video processing system and processing method
CN105224291B (en) * 2015-09-29 2017-12-08 北京奇艺世纪科技有限公司 A kind of data processing method and device
CN105354242A (en) * 2015-10-15 2016-02-24 北京航空航天大学 Distributed data processing method and device
CN105407360A (en) * 2015-10-29 2016-03-16 无锡天脉聚源传媒科技有限公司 Data processing method and device
CN105354058A (en) * 2015-10-29 2016-02-24 无锡天脉聚源传媒科技有限公司 File updating method and apparatus
CN105357229B (en) * 2015-12-22 2019-12-13 深圳市科漫达智能管理科技有限公司 Video processing method and device
CN110635864A (en) * 2019-10-09 2019-12-31 中国联合网络通信集团有限公司 Parameter decoding method, device, equipment and computer readable storage medium
CN111372011B (en) * 2020-04-13 2022-07-22 杭州友勤信息技术有限公司 KVM high definition video decollator
CN114900718A (en) * 2022-07-12 2022-08-12 深圳市华曦达科技股份有限公司 Multi-region perception automatic multi-subtitle realization method, device and system
CN116108492B (en) * 2023-04-07 2023-06-30 安羚科技(杭州)有限公司 Laterally expandable data leakage prevention system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101098483A (en) * 2007-07-19 2008-01-02 上海交通大学 Video cluster transcoding system using image group structure as parallel processing element
CN101098260A (en) * 2006-06-29 2008-01-02 国际商业机器公司 Distributed equipment monitor management method, equipment and system
CN101141627A (en) * 2007-10-23 2008-03-12 深圳市迅雷网络技术有限公司 Storage system and method of stream media file
CN102739799A (en) * 2012-07-04 2012-10-17 合一网络技术(北京)有限公司 Distributed communication method in distributed application

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8296358B2 (en) * 2009-05-14 2012-10-23 Hewlett-Packard Development Company, L.P. Method and system for journaling data updates in a distributed file system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A distributed audio and video processing device and processing method

Effective date of registration: 20210104

Granted publication date: 20171003

Pledgee: Inner Mongolia Huipu Energy Co.,Ltd.

Pledgor: TVMINING (BEIJING) MEDIA TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001527

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171003

Termination date: 20211112
