CN113645490B - Soft-hard combined multichannel video synchronous decoding method - Google Patents

Soft-hard combined multichannel video synchronous decoding method

Info

Publication number
CN113645490B
Authority
CN
China
Prior art keywords
frame
data
decoding
length
image
Prior art date
Legal status
Active
Application number
CN202110697282.5A
Other languages
Chinese (zh)
Other versions
CN113645490A (en)
Inventor
Gao Juan (高娟)
Current Assignee
Tianjin Jinhang Computing Technology Research Institute
Original Assignee
Tianjin Jinhang Computing Technology Research Institute
Priority date
Filing date
Publication date
Application filed by Tianjin Jinhang Computing Technology Research Institute
Priority to CN202110697282.5A
Publication of CN113645490A
Application granted
Publication of CN113645490B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation


Abstract

A soft-hard combined multichannel video synchronous decoding method. The invention belongs to video decoding technology under the linux system and relates to a multichannel video synchronous decoding design that combines software and hardware decoding under linux. First, ffmpeg is ported to the HiSilicon platform and its source code is modified so that it can return parameter frames and is adapted to the HiSilicon chip. Second, a network task is started for data exchange with the host, and a buffering mechanism is applied to the received data to keep the multiple video channels synchronized. The ffmpeg dynamic library is then used to dynamically filter the data received over the network, removing error frames while assembling data packets that combine image data with parameter information. Finally, each complete data packet is passed to the hard decoding module of the HiSilicon chip, and the decoded data is sent back to the host over the network, completing the decoding task.

Description

Soft-hard combined multichannel video synchronous decoding method
Technical Field
The invention belongs to video decoding technology under the linux system, and particularly relates to a soft-hard combined multichannel video synchronous decoding method.
Background
Hi3559AV100 is a professional 8K Ultra HD mobile camera SoC that provides digital video recording with 8K30/4K120 broadcast-level image quality. It supports multi-channel sensor input, H.265 encoded output or video-grade RAW data output, and integrates high-performance ISP processing, while an advanced low-power process and low-power architecture design give users excellent image processing capability.
Hi3559AV100 supports industry-leading multi-channel 4K sensor input, multi-channel ISP image processing, the HDR10 high dynamic range standard, and multi-channel panoramic hardware stitching. With 8K30/4K120 recording it provides hardware-accelerated 6-DoF digital image stabilization, reducing dependence on mechanical gimbals.
However, Hi3559AV100 is a pure hardware decoder: when protocol frames do not fully conform to the decoding protocol, or when there are many erroneous frames, decoding efficiency is low or decoding fails entirely. In addition, when multiple image streams are transmitted simultaneously, their different transmission rates cause decoding to fall out of synchronization. The invention uses the ffmpeg decoding library to obtain the parameter-frame and data-frame information in the original stream, effectively filters out error frames, and delivers assembled complete data packets to the hard decoding module, greatly shortening decoding time; at the same time a buffering mechanism is applied to the image data, effectively solving the problem of asynchronous decoding.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, the invention provides a soft-hard combined multichannel video synchronous decoding method. Under a linux system, the HiSilicon Hi3559AV100 chip serves as the hard decoding module, while the FFMPEG decoding library is used to acquire complete frame information, so that compressed-frame parameter information is obtained effectively and decoding time is reduced. Meanwhile, the received image data is buffered, reducing the desynchronization of multiple image channels caused by differing transmission rates.
The technical scheme of the invention is as follows:
In a first aspect, a soft-hard combined multichannel video synchronous decoding method includes the following steps:
1) Configure the compile attributes and parameters of ffmpeg, and port the ffmpeg dynamic library to the HiSilicon platform;
2) Create a network task to receive image data and store the data into a ring buffer;
3) Take the raw data out of the ring buffer and filter error frames using the ffmpeg dynamic library to obtain complete data packets;
4) Screen the complete data packets obtained by ffmpeg for error frames based on a fault-tolerance strategy;
5) Transmit the screened complete data packets to the hard decoding module of the HiSilicon platform and decode them there;
6) Acquire the decoded channel images and return the decoded data to the host over the network.
Optionally, the task of creating a network to receive image data in step 2) specifically includes:
212) Acquire the receiving ip address and port from the configuration file;
213) Create a network socket;
214) Clear the receive buffer to zero, wait to receive the image data sent over the network, and then enter step 215);
215) Judge whether the length received this time is greater than zero; if so, proceed to the next step, otherwise return to step 213);
216) If the protocol frame header does not meet the protocol requirements, discard the image frame; otherwise, store the data into the ring buffer.
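As a rough illustration of the header check in step 216), the following C sketch accepts a frame only if it begins with an H.265 Annex-B start code. The actual protocol header format is not specified in the text, so the start-code check and the function name are assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Step 216): accept a received frame only if it starts with the Annex-B
 * start code 00 00 00 01 and carries at least one payload byte.
 * The real protocol header may carry more fields (e.g. a channel number). */
static bool frame_header_ok(const unsigned char *data, size_t len)
{
    return len > 4 &&
           data[0] == 0x00 && data[1] == 0x00 &&
           data[2] == 0x00 && data[3] == 0x01;
}
```

Frames failing this check are dropped before they ever reach the ring buffer, which is what keeps obviously malformed input away from the hard decoder.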
Optionally, storing the data into the ring buffer in step 2) is specifically:
221) Judge whether the data length len received from the network is greater than 0; if so, proceed to the next step; if not, exit and wait for the raw compressed image data from the next network transmission;
222) Find the channel number of the current data from the bytes specified by the protocol, and store the channel number;
223) Judge whether the sum of the existing data length cirLen of that channel's data buffer and len is smaller than the maximum buffer length MAX_LEN; if so, proceed to the next step, otherwise go to step 226);
224) Copy the data of the network receive area into the ring buffer, with copy length len;
225) Increase the total data length of the buffer by len, and also advance the write address ptr of the stored data by len positions;
226) Copy the network receive area data into the buffer, with copy length equal to the difference between MAX_LEN and cirLen;
227) Reset the write pointer putPtr, moving it to the buffer head address;
228) Recompute the total buffer data length cirLen as the difference between len and the copy length of step 226);
229) Copy the received network array into the buffer, the copy position shifted by MAX_LEN minus cirLen bytes, with copy length cirLen;
2210) Move the write pointer putPtr forward by cirLen bytes.
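The wrap-around write of steps 221)-2210) can be sketched in C. The field and macro names (putPtr, cirLen, MAX_LEN) follow the text; the struct layout, the tiny capacity, the overwrite-on-wrap policy, and the assumption that a single packet never exceeds MAX_LEN are ours.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_LEN 16u  /* small capacity for illustration; the real buffer is far larger */

/* Minimal ring buffer mirroring the names used in steps 221)-2210). */
typedef struct {
    unsigned char buf[MAX_LEN];
    size_t putPtr;   /* write offset into buf */
    size_t cirLen;   /* total bytes currently stored */
} RingBuf;

/* Store len bytes of freshly received network data (assumes len <= MAX_LEN). */
static void ring_put(RingBuf *rb, const unsigned char *data, size_t len)
{
    if (len == 0)
        return;                         /* step 221): nothing received */
    if (rb->cirLen + len < MAX_LEN) {   /* step 223): fits without wrapping */
        memcpy(rb->buf + rb->putPtr, data, len);  /* step 224) */
        rb->cirLen += len;                         /* step 225) */
        rb->putPtr += len;
    } else {                            /* steps 226)-2210): wrap to the head */
        size_t first = MAX_LEN - rb->putPtr;       /* room until the end */
        if (first > len)
            first = len;
        memcpy(rb->buf + rb->putPtr, data, first);          /* step 226) */
        memcpy(rb->buf, data + first, len - first);          /* steps 227)-229) */
        rb->putPtr = (rb->putPtr + len) % MAX_LEN;           /* step 2210) */
        rb->cirLen = MAX_LEN;           /* buffer is now full */
    }
}
```

One such buffer would be kept per channel number (step 222)), so that each video channel is buffered and drained independently.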
Optionally, screening the complete data packets obtained by ffmpeg for error frames in step 4) is specifically:
41) Search the acquired data for a frame header according to the h265 protocol frame header;
42) Search the acquired data for a frame trailer according to the h265 protocol;
43) Treat a frame whose header and trailer are both found as a complete frame and record its image frame type; otherwise discard the image frame;
44) Record the type of raw data required by each image frame to obtain the encoding rule;
45) Analyze erroneous image data that does not conform to the rule of step 44):
451) Store the most recent vps frame, sps frame, pps frame and sei frame that precede the current frame in time;
452) If the current frame and the previously stored frames do not conform to the encoding rule Encode, and the current frame is a non-essential frame, modify the first p frame in the encoding rule into an I frame and combine all p frames in the encoding rule with the vps, sps, pps and sei frames obtained in step 451) into one image data packet; the error-frame screening is then complete and the screened complete data packet is obtained; otherwise go to step 453);
453) If the current frame and the previously stored frames do not conform to the encoding rule Encode, and the frame after the current frame is a vps frame, modify the first p frame of the current encoding rule into an I frame and combine the vps, sps, pps and sei frames of step 451) with all p frames in the modified encoding rule into one image data packet; the error-frame screening is then complete and the screened complete data packet is obtained; otherwise go to step 454);
454) If the current frame conforms to the encoding rule with the previously stored frames but is itself an error frame, discard the current frame, modify the first p frame in the encoding rule into an I frame, and combine the first two p frames with the vps, sps, pps and sei frames of step 451) into one image data packet; the error-frame screening is then complete and the screened complete data packet is obtained.
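The screening in steps 41)-454) has to tell parameter frames (vps/sps/pps/sei) apart from picture frames. In H.265 the NAL unit type occupies the upper six bits of the first byte after the start code (forbidden_zero_bit, then nal_unit_type), so the classification can be sketched as follows; the enum values come from the H.265 specification, while the helper names are our own:

```c
#include <assert.h>
#include <stdbool.h>

/* H.265 NAL unit types relevant to the screening (Rec. ITU-T H.265, Table 7-1). */
enum {
    NAL_IDR_W_RADL = 19,  /* an I (IDR) picture */
    NAL_IDR_N_LP   = 20,  /* an I (IDR) picture */
    NAL_VPS        = 32,
    NAL_SPS        = 33,
    NAL_PPS        = 34,
    NAL_PREFIX_SEI = 39
};

/* nal_unit_type is bits 6..1 of the first byte after the start code. */
static int nal_type(unsigned char first_byte_after_start_code)
{
    return (first_byte_after_start_code >> 1) & 0x3F;
}

/* "Parameter frames" in the sense of step 451): vps/sps/pps/sei. */
static bool is_parameter_nal(int t)
{
    return t == NAL_VPS || t == NAL_SPS || t == NAL_PPS || t == NAL_PREFIX_SEI;
}

static bool is_idr(int t)
{
    return t == NAL_IDR_W_RADL || t == NAL_IDR_N_LP;
}
```

Classifying each complete frame this way is what lets steps 452)-454) decide whether the stored vps/sps/pps/sei frames must be re-emitted ahead of a repaired I frame.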
Optionally, obtaining the complete data packet in step 3) specifically includes:
321) Fetch data from the ring buffer as follows:
3211) Acquire the current write pointer position putPtr of the buffer and the total buffer data length cirLen;
3212) Judge whether the read pointer readPtr coincides with putPtr; if so, delay 1 ms and return to step 3211), otherwise proceed to the next step;
3213) Judge whether the read length readLen is smaller than cirLen; if so, continue to the next step, otherwise go to step 3217);
3214) Judge whether the difference between cirLen and readLen is greater than or equal to the fixed protocol frame length frame_len; if so, continue to the next step; if not, execute step 3216);
3215) Take the current readPtr as the head address bufPtr of the image data to be decoded, with length bufLen equal to frame_len; at the same time move the pointer readPtr by frame_len bytes and increase the read length readLen by frame_len bytes;
3216) Take the current readPtr as the head address bufPtr of the image data to be decoded, with length bufLen equal to cirLen minus readLen; at the same time move readPtr by cirLen minus readLen bytes and increase the read length readLen by the difference between cirLen and readLen;
3217) Judge whether the sum of readLen and frame_len is smaller than the maximum buffer length MAX_LEN; if so, proceed to the next step; if not, go to step 3219);
3218) Take the current readPtr as the head address bufPtr of the image data to be decoded, with data length bufLen equal to frame_len; move the read pointer by frame_len bytes and increase the read length readLen by frame_len bytes;
3219) Take the current readPtr as the head address bufPtr of the image data to be decoded, with data length bufLen equal to MAX_LEN minus readLen; move the read pointer back to the buffer head address and set the read length readLen to 0;
322) Judge whether the current data length bufLen is greater than 0; if so, continue to the next step; if not, exit this round of decoding and wait for the raw compressed image data from the next network transmission;
323) Pass the data array head address pointer bufPtr and length bufLen to the soft decoding module, and split the data into frames using the library function av_parser_parse2; if a complete image frame data packet is successfully obtained from the array, record it and continue to the next step; if not, exit the decoding process;
324) Save the length ret of the image frame packet split from the current data array, subtract ret from the total array length bufLen, and advance the head address pointer bufPtr by ret bytes;
325) Put the complete image data packet split this time into the queue to be decoded, wait for the raw compressed image data from the next network transmission, and return to step 321).
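The reader-side decision tree of steps 3213)-3219) can be sketched as one C function. The names (cirLen, readLen, readPtr, frame_len, MAX_LEN) follow the text; the concrete sizes and the simplification of readPtr to a plain offset (the interaction with a concurrently wrapping writer is not modeled) are assumptions.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_LEN   64u  /* example buffer capacity */
#define FRAME_LEN 16u  /* example fixed protocol frame length frame_len */

/* Steps 3213)-3219): decide how many bytes (bufLen) to hand to the soft
 * decoder next, updating *readLen and *readPtr as the text describes. */
static size_t next_chunk(size_t cirLen, size_t *readLen, size_t *readPtr)
{
    size_t bufLen;
    if (*readLen < cirLen) {                     /* step 3213) */
        if (cirLen - *readLen >= FRAME_LEN)      /* step 3214) */
            bufLen = FRAME_LEN;                  /* step 3215): full frame */
        else
            bufLen = cirLen - *readLen;          /* step 3216): tail remainder */
        *readPtr += bufLen;
        *readLen += bufLen;
    } else if (*readLen + FRAME_LEN < MAX_LEN) { /* step 3217) */
        bufLen = FRAME_LEN;                      /* step 3218) */
        *readPtr += bufLen;
        *readLen += bufLen;
    } else {                                     /* step 3219): wrap the reader */
        bufLen = MAX_LEN - *readLen;
        *readPtr = 0;                            /* back to the buffer head */
        *readLen = 0;
    }
    return bufLen;
}
```

Each returned chunk is then fed to the parser (step 323)), which may merge several chunks before a complete image frame packet is emitted.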
Optionally, decoding the image data packets with the hard decoding module of the HiSilicon platform in step 5) is specifically:
51) Initialize the hard decoding module according to the image parameters and decoding type, configure the size of the video data buffer, and start the decoding module;
52) Dynamically request a buffer buf for the video image frame data packet;
53) If so, exit the processing flow and go to step 57); otherwise, proceed to the next step;
54) Set the parameters of the current frame: the stream-end identifier, the frame-header flag and the frame-end flag;
55) Place the image frame data into buf;
56) Call the dynamic library function to send the buf data to the decoding module;
57) Monitor the decoding state of the hard decoding module in real time; if a decoding error occurs, soft-restart the hard decoding module and reset its parameters; if decoding is normal, call the library function to obtain the decoded image;
58) Call the library function to stop sending video streams to the decoding module, close the decoding channel, unbind the binding relations of the modules, and release resources.
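Only the monitor-and-reset control flow of steps 56)-57) is illustrated below. The text names the HiSilicon MPI calls only generically, so the two stubs stand in for "send stream to the decoder" and "soft-reset the decoder"; everything about them is an assumption, not the real API.

```c
#include <assert.h>
#include <stdbool.h>

static int g_resets;  /* counts soft resets, for illustration only */

/* Stub for the real send call: reports whether the frame decoded cleanly. */
static bool send_frame_stub(int frame_ok) { return frame_ok != 0; }

/* Stub for step 57)'s soft restart + parameter reset. */
static void soft_reset_stub(void) { g_resets++; }

/* Feed n frames to the (stubbed) hard decoder; on each decode error apply
 * a soft reset so decoding can continue. Returns frames decoded OK. */
static int decode_loop(const int *frames_ok, int n)
{
    int decoded = 0;
    for (int i = 0; i < n; i++) {
        if (send_frame_stub(frames_ok[i]))
            decoded++;            /* step 57): decoding normal, fetch image */
        else
            soft_reset_stub();    /* step 57): soft restart, reset parameters */
    }
    return decoded;
}
```

The point of the design is that an error frame costs one reset rather than stalling the whole decoding channel, which is the fault the background section attributes to pure hardware decoding.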
Optionally, returning the decoded data to the host over the network in step 6) is specifically:
61) Create a thread for acquiring the decoded image;
62) Enter the image acquisition loop;
63) Query the working state of the channel; if it is the reset state, delay 1 ms and continue querying; if it is not the reset state, proceed to the next step;
64) Call the library function HI_MPI_VDEC_GetFrame to acquire the memory address where the image is stored; jump to step 63) on failure, proceed to the next step on success;
65) Create a network transmission task to send the data held at the memory address of step 64) to the host.
In a second aspect, a processing apparatus includes:
a memory for storing a computer program;
a processor for calling and running the computer program from the memory to perform the method of the first aspect.
A computer readable storage medium having stored therein a computer program or instructions which, when executed, implement the method of the first aspect.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
Compared with the prior art, the invention has the following advantages:
The invention solves the problems of video decoding and transmission under the linux system, and the method has been verified algorithmically and experimentally. The results show that the scheme can dynamically filter error frames with a soft decoding step to solve the image decoding problem, acquire complete data packets, and complete the decoding process with the chip's hard decoding module, effectively reducing decoding time. A buffering mechanism keeps the received data synchronized with the decoding of the multiple image channels.
Drawings
Fig. 1 is a flow chart of a soft-hard combined multi-channel video synchronous decoding method in the invention.
Detailed Description
Based on the properties of the hard decoding module of the HiSilicon Hi3559AV100 chip and the characteristics of soft decoding, this design realizes a multichannel video synchronous decoding method combining soft and hard decoding under a linux system. The method comprises the following steps:
1) Configure the compile attributes and parameters of ffmpeg, and port the ffmpeg dynamic library to the HiSilicon platform. First, configure the ffmpeg compile attributes. Then modify the ffmpeg source code so that it can return parameter frames, and cross-compile it. Finally, obtain the ffmpeg decoding dynamic library and copy it to the HiSilicon development board. Step 1) configures the soft decoding environment for the board and ports the soft decoding library ffmpeg so that it is adapted to the HiSilicon platform.
2) Create a network task to receive image data and store the data into a ring buffer. First, acquire the ip and port number used for transmission from the configuration file. Then block while waiting for the image data transmitted over the network. Finally, store the data into the buffer. Step 2) creates the network receive task, receives the data of the different channels separately, stores it into the corresponding ring buffers, and synchronizes the receive process of each channel, reducing the decoding desynchronization caused by differing transmission rates;
3) Take the raw data out of the ring buffer and filter error frames using the ffmpeg dynamic library to obtain complete data packets.
First, initialize the ffmpeg runtime environment. Next, create a data packet acquisition thread, take the raw data out of the ring buffer, and dynamically filter error frames using the ffmpeg dynamic library to acquire complete compressed-image data packets. Step 3) performs fixed protocol-length fetch operations on the ring buffer, creates the runtime environment for ffmpeg, and specifies the required decoder, so that the raw data can be parsed against the protocol. At the same time, the raw data is fetched from the network receive area in a loop; the library function is called to obtain the length of the data packet that can be combined into a complete image frame, that length is removed from the ring buffer data area, and the cycle repeats until the data area has no more data.
4) Screen the complete data packets obtained by ffmpeg for error frames based on a fault-tolerance strategy.
5) Transmit the screened complete data packets to the hard decoding module of the HiSilicon platform and decode them there. After obtaining the complete data packets extracted by ffmpeg, the video stream is sent to the decoding module through the HiSilicon dynamic library functions. The decoding module is started to perform the decoding task, its working state is monitored in real time, and the decoder is soft-reset according to the decoding condition, to prevent error frames from leaving it unable to continue decoding. Step 5) starts the chip's hard decoding module, dynamically requests an image buffer sized to the image to be decoded, copies the raw image data into the buffer, configures its parameters, and performs hard decoding. Meanwhile, a real-time monitoring thread is started for the case where an error frame stops the decoding module: the decoding state is analyzed and a soft reset is applied so that the decoding module can continue its decoding work.
6) Acquire the decoded channel images and return the decoded data to the host over the network.
A task for acquiring the decoded images is created, the working state of the decoder is read in real time, the library function is called to acquire the decoded image from the decoding channel, and the decoded image data is sent to the host through the network transmission task. Through the above steps, the video decoding function under the linux system is realized. Step 6) starts the network send task and sends the decoded images obtained from the decoding module to the host in real time, completing the data transmission task.
In order to solve the problem of synchronous decoding of video under a Linux system, a method combining soft decoding based on ffmpeg with hard decoding based on a HiSilicon chip module is adopted, with data transmitted over PCIe and stored in a cache. The present invention is further described below with reference to fig. 1.
1. Porting ffmpeg to the HiSilicon platform
11) First, the ffmpeg compile attributes are configured, with parameters set according to the platform type, CPU type, codec attributes, format conversion attributes, and cross-compile attributes.
12) Modifying the avcodec.h file, adding a parameter frame length sei_len and an array sei_buf, and determining the parameter frame size SEI_BUF_SIZE according to application layer requirements;
13) Adding a get-parameter-frame function to the decode_nal_sei_prefix function in the hevc_sei.c file: acquiring the parameter size in the function and assigning it to sei_len; judging whether size is less than or equal to SEI_BUF_SIZE, and if so, copying data from the context parameter array gb into sei_buf with copy length size, where the subscript i of sei_buf corresponds to the index of the gb array divided by 8, i.e. sei_buf[i] = gb_buf[index/8];
14) A configure command is then executed; the decoding libraries libavcodec, libavformat, libavutil and libswscale are generated under the subfolder lib of the configuration folder.
15) Finally, copying the dynamic libraries to /usr/lib on the decoding board card;
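Steps 11)-15) above correspond to a cross-compile configure invocation. The sketch below is a hedged illustration: the toolchain prefix (arm-himix200-linux-), the install prefix, and the exact flag selection are assumptions, while the flags themselves are standard ffmpeg configure options.

```shell
# Hypothetical cross-compile configuration for a HiSilicon board;
# substitute the board SDK's actual toolchain prefix.
CROSS_PREFIX="arm-himix200-linux-"
CONFIGURE_ARGS="--enable-cross-compile --target-os=linux --arch=arm \
--cross-prefix=${CROSS_PREFIX} --prefix=\$PWD/install \
--enable-shared --disable-static --disable-programs --disable-doc \
--disable-everything --enable-decoder=hevc --enable-parser=hevc"
echo "./configure ${CONFIGURE_ARGS}"
# After ./configure && make && make install, the shared libraries
# libavcodec/libavformat/libavutil/libswscale appear under install/lib
# and are then copied to /usr/lib on the decoding board (steps 14-15).
```

Restricting the build with `--disable-everything` and re-enabling only the HEVC decoder and parser keeps the libraries small for the embedded board; this pruning is a design choice, not something the text mandates.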
2. creating a network-receiving image data task and storing the data in a ring buffer
21 Creating a network receive data thread, comprising the following specific steps:
212 Firstly, acquiring a receiving ip address and a port in a configuration file;
213 Then, creating a network socket;
214) Secondly, clearing the receiving buffer area and blocking to receive the image data sent by the network;
215 Judging whether the length received at this time is greater than zero, if so, proceeding to the next step, otherwise, returning to the step 213);
216 Judging whether the protocol frame head meets the protocol requirements, and if not, discarding the frame;
22 Storing data received by the network into a buffer, the specific steps are as follows:
221 Judging whether the data length len received by the network is more than 0, if so, carrying out the next step, and if not, exiting; waiting for the original compressed image data transmitted by the next network;
222 Searching the channel number of the current data according to bytes specified by the protocol, and storing the channel number;
223) Judging whether the sum of the existing data length cirLen of the channel's data buffer and len is smaller than the maximum buffer length MAX_LEN; if so, proceeding to the next step, otherwise proceeding to step 226);
224 Data of the network reception area is copied to the ring buffer with a copy length len.
225 The total length of data in the buffer is increased by len number, and the head address ptr (initialized to the buffer head address) of the stored data is also shifted by len positions.
226) Copying the network receiving area data into the buffer area, wherein the copy length is equal to the difference between MAX_LEN and cirLen;
227 Resetting the write pointer putPtr, moving the pointer to the buffer head address;
228) Calculating the total buffer data length cirLen as the difference between len and the copy length of step 226);
229) Copying the network receive array into the buffer again, the array position moved by MAX_LEN minus cirLen bytes, the copy length being cirLen;
2210) Moving the write pointer putPtr forward by cirLen bytes.
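The wrap-around write of steps 223)-2210) can be sketched as a conventional ring-buffer write. The struct layout, the 16-byte MAX_LEN, and the overwrite behaviour of a full buffer are illustrative assumptions, not taken from the text.

```c
#include <string.h>

#define MAX_LEN 16  /* small buffer purely for illustration */

typedef struct {
    unsigned char buf[MAX_LEN];
    int putOff;   /* write offset: putPtr relative to the buffer head */
} RingBuf;

/* Wrap-around write corresponding to steps 223)-2210): copy up to the
 * end of the buffer, then wrap the remainder to the head and re-home
 * the write pointer.  How a full buffer is handled is not specified in
 * the text; this sketch simply overwrites. */
static void ring_write(RingBuf *rb, const unsigned char *src, int len)
{
    int tail = MAX_LEN - rb->putOff;          /* room before the wrap */
    if (len <= tail) {                        /* steps 224)-225) path */
        memcpy(rb->buf + rb->putOff, src, len);
        rb->putOff += len;
    } else {                                  /* steps 226)-2210) path */
        memcpy(rb->buf + rb->putOff, src, tail);
        memcpy(rb->buf, src + tail, len - tail);
        rb->putOff = len - tail;              /* pointer re-homed, then moved */
    }
    if (rb->putOff == MAX_LEN) rb->putOff = 0;
}
```

Writing 10 bytes twice into the 16-byte buffer makes the second write split 6/4 across the wrap, which is exactly the two-copy behaviour steps 226)-229) describe.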
3. Obtaining compressed image complete data packets using a ffmpeg dynamic library
Taking out the original data from the ring buffer area, and filtering the error frame by using a ffmpeg dynamic library to obtain a complete data packet;
31 Initializing a decoding library use environment
First, the decoder type is set, employing an HEVC (H.265) decoder. Then, the decoder context is initialized, and the image frame storage space is dynamically allocated.
32 A ffmpeg acquisition data packet thread is created, and the specific flow of the thread is as follows:
321) Taking the original data out of the ring buffer, the specific steps being as follows:
3211 Acquiring the write pointer position putPtr of the current buffer area and the total length cirLen of the data of the buffer area;
3212 Judging whether the read pointer readPtr is consistent with the putPtr, if so, delaying for 1ms, returning to step 3211), otherwise, proceeding to the next step.
3213 Judging whether the read data readLen is smaller than the cirLen, if so, continuing the next step, otherwise, step 3217);
3214 Judging whether the difference value between the cirLen and the readLen is larger than or equal to the fixed length frame_len of the protocol frame, if so, continuing the next step, and if not, executing a step 3216);
3215 Currently readPtr is taken as a decoded image data head address bufPtr, the length bufLen is frame_len, the pointer readPtr is moved, the moving byte is frame_len, the reading length readLen is increased, and the number of the increased bytes is frame_len.
3216 The current readPtr is taken as the first address bufPtr of the decoded image data, the length bufLen is the difference of the cirLen minus the readLen, the readPtr is moved, the moving number is the difference of the cirLen minus the readLen, the reading length readLen is increased, and the number of the added bytes is the (cirLen-readLen) difference.
3217 Judging whether the sum of the readLen and the frame_len is smaller than the maximum length MAX_LEN of the buffer area, if so, proceeding to the next step, if not, proceeding to step 3219);
3218 The current readPtr is taken as a decoded image data head address bufPtr, the data length bufLen is frame_len, a read pointer is moved, the number of bytes moved by the pointer is frame_len, the read data length readLen is increased, and the number of bytes increased is frame_len.
3219 The current readPtr is taken as a decoded image data head address bufPtr, the data length bufLen is MAX_LEN-readLen, a read pointer is moved to a buffer head address, and the read length readLen is set to 0.
322) Judging whether the current data length bufLen is greater than 0; if so, continuing to the next step; if not, exiting the decoding process for this data and waiting for the original compressed image data transmitted by the next network.
323) Transmitting the data array head address pointer bufPtr and the length bufLen to the soft decoding module, and splitting the data into frames using the library function av_parser_parse2; if a complete image frame data packet can be successfully obtained from the array, continuing to the next step; if not, exiting the decoding process.
324) Saving the data packet length ret of the image frame split from the data array, subtracting ret from the total data array length bufLen, and moving the head address pointer bufPtr forward by ret bytes.
325 Placing the complete data packet of the image segmented at this time into a queue to be decoded, waiting for the original compressed image data transmitted by the next network, and returning to the step 321);
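Steps 321)-325) amount to a consume-and-advance loop around the frame splitter. The sketch below substitutes a trivial fixed-length splitter for av_parser_parse2 (which the real flow calls), so the name split_frame and the 4-byte frame size are assumptions made only so the loop structure is visible.

```c
/* Stand-in for av_parser_parse2(): consumes bytes from (buf, len) and
 * reports via *got_packet whether a complete frame was split off.
 * Here every frame is a fixed 4 bytes -- purely illustrative. */
static int split_frame(const unsigned char *buf, int len, int *got_packet)
{
    (void)buf;
    if (len >= 4) { *got_packet = 1; return 4; }  /* consumed 4 bytes */
    *got_packet = 0;
    return len;        /* a real parser buffers the partial tail */
}

/* Steps 323)-325): feed the array to the splitter, subtract the consumed
 * length ret from bufLen and advance bufPtr by ret, count each complete
 * packet (the real code enqueues it for decoding), and stop when no
 * complete packet can be produced. */
static int drain_packets(const unsigned char *bufPtr, int bufLen)
{
    int packets = 0;
    while (bufLen > 0) {                 /* step 322) guard */
        int got = 0;
        int ret = split_frame(bufPtr, bufLen, &got);
        bufPtr += ret;                   /* step 324): advance by ret */
        bufLen -= ret;
        if (!got) break;                 /* step 323): no complete packet */
        packets++;                       /* step 325): enqueue for decode */
    }
    return packets;
}
```

With 10 bytes of input the loop yields two 4-byte packets and leaves a 2-byte tail with the parser, mirroring how leftover bytes wait for the next network delivery.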
4. screening error frames of complete data packets acquired by ffmpeg based on fault-tolerant strategy
And screening the error frames of the complete data packet acquired by the ffmpeg based on the fault tolerance strategy. The method comprises the following specific steps:
41 Searching for a frame header in the acquired data according to the h265 protocol frame header;
42 Searching for a frame tail in the acquired data according to the h265 protocol frame head;
43 Finding the frame of the frame header and frame trailer as a complete frame and recording the image frame type, if not, discarding the image frame.
44) Recording the type of raw data required by each image frame: the vps frame, sps frame, pps frame and sei frame, plus the numbers of I frames and P frames, i.e. the encoding rule Encode {vps, sps, pps, sei, n·I, n·P}.
45 Analyzing erroneous image data that does not meet the rule in step 44) (image frames are incomplete due to packet loss), and processing according to the following rule:
451 Storing vps frame, sps frame, pps frame and sei frame which are earlier than the current frame in time and are nearest;
452) If the current frame and the previously stored frames do not conform to the encoding rule Encode, and the current frame is a redundant frame, modifying the first P frame in the encoding rule into an I frame, and combining the vps frame, sps frame, pps frame and sei frame obtained in step 451) with all P frames in the encoding rule as one piece of image data; completing the error frame screening to obtain the screened complete data packet; otherwise, proceeding to step 453);
453 If the current frame and the previously stored frame do not conform to the encoding rule Encode, and the frame following the current frame is a vps frame, modifying the first p frame of the current encoding rule into an I frame, and combining 451) all p frames in the vps frame, sps frame, pps frame, sei frame and modified encoding rule as one image data; completing the error frame screening work to obtain a complete data packet after screening; otherwise, go to step 454);
454 If the current frame accords with the coding rule with the frame stored before, and the current frame is an error frame, discarding the current frame, modifying the first p frame in the coding rule into an I frame, and taking the first two p frames and the vps frame, sps frame, pps frame and sei frame of step 451) to be combined together as image data; and (3) completing the error frame screening work to obtain the screened complete data packet.
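The frame-header search of steps 41)-43) rests on locating H.265 start codes and classifying the NAL unit that follows. The minimal scanner below uses the standard nal_unit_type extraction from the NAL header; the start-code form and type values follow ITU-T H.265 itself, not anything specific to this method.

```c
/* H.265/HEVC NAL unit type constants (ITU-T H.265, Table 7-1). */
enum { HEVC_NAL_VPS = 32, HEVC_NAL_SPS = 33, HEVC_NAL_PPS = 34,
       HEVC_NAL_PREFIX_SEI = 39, HEVC_NAL_IDR_W_RADL = 19 };

/* Step 41): locate the next 00 00 01 start code at or after `pos` and
 * return the nal_unit_type of the NAL header that follows, or -1 if no
 * start code is found.  nal_unit_type occupies bits 6..1 of the first
 * header byte, hence (byte >> 1) & 0x3F. */
static int next_nal_type(const unsigned char *buf, int len, int pos)
{
    for (int i = pos; i + 3 < len; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1)
            return (buf[i + 3] >> 1) & 0x3F;
    }
    return -1;
}
```

Scanning a packet this way yields the vps/sps/pps/sei/I/P sequence that step 44) compares against the encoding rule Encode; how the four-byte start-code variant (00 00 00 01) is folded in is left out of the sketch.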
5. Transmitting the screened complete data packet to a hard decoding module
51 Initializing a hard decoding module according to the image parameters and the decoding type, configuring the size of a video data buffer area, and starting the decoding module;
52) Dynamically allocating a buffer area buf for the video image frame data packet;
53 If yes, exiting the processing flow to carry out step 57), otherwise, carrying out the next step;
54) Setting the parameters of the current frame: a stream end identifier, a frame header flag, and a frame end flag;
55 Placing the image frame data into buf;
56 Calling a dynamic library function to send buf data to a decoding module;
57 Monitoring the decoding state of the hard decoding module in real time, if decoding errors occur, soft restarting the hard decoding module and resetting parameters, and if decoding is normal, calling a library function to obtain a decoded image;
58 Calling a library function to stop sending video streams to the decoding module, closing the decoding channel, unbinding the binding relation of each module, and clearing resources.
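Steps 52)-56) can be sketched as filling a stream descriptor and handing it to the decoder send call. The struct below is a local stand-in whose field names are modeled on HiSilicon MPP's VDEC_STREAM_S; it is not the vendor header, and make_stream is a hypothetical helper invented for the sketch (the real flow calls HI_MPI_VDEC_SendStream).

```c
#include <string.h>
#include <stdlib.h>

/* Local stand-in for the stream descriptor; field names are modeled on
 * HiSilicon MPP's VDEC_STREAM_S, but this is NOT the vendor header. */
typedef struct {
    unsigned char *pu8Addr;       /* buf holding one frame packet (step 55) */
    unsigned int   u32Len;
    int            bEndOfStream;  /* stream end identifier (step 54) */
    int            bEndOfFrame;   /* frame end flag        (step 54) */
} StreamDesc;

/* Steps 52)-56): allocate buf, copy one screened frame packet into it,
 * and set the per-frame parameters; the caller would then pass the
 * descriptor to the decoder send function. */
static StreamDesc make_stream(const unsigned char *frame, unsigned int len,
                              int last)
{
    StreamDesc s;
    s.pu8Addr = malloc(len);          /* step 52): dynamic buffer buf */
    memcpy(s.pu8Addr, frame, len);    /* step 55) */
    s.u32Len = len;
    s.bEndOfFrame  = 1;               /* one complete frame per send */
    s.bEndOfStream = last;
    return s;
}
```

Sending exactly one complete frame per call (bEndOfFrame set on every packet) matches the frame-by-frame feeding the steps describe; whether the real system batches frames is not stated.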
6. Transmitting the decoded image to a host computer through a network
And establishing a task of acquiring the decoded image, reading the working state of the decoder in real time, calling a library function to acquire the decoded image in the decoding channel, and transmitting the decoded image data to a host through a network transmission task. The method comprises the following specific steps:
61 Creating a thread for acquiring the decoded image;
62 Entering an image acquisition loop;
63 Inquiring the working state in the channel, if the working state is the reset state, delaying for 1ms, continuously inquiring the reset state, and if the working state is not the reset state, performing the next step;
64 A library function hi_mpi_vdec_getframe is called to obtain the memory address of the image store. Jump to step 63) if failed, go to the next step if successful;
65 A network transmission task is created to transmit the data held by the memory address in step 64) to the host.
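The acquisition loop of steps 63)-65) is, in essence, poll-until-ready followed by retry-until-frame. The sketch below simulates the channel-state query and HI_MPI_VDEC_GetFrame with scripted stand-ins; state_seq, frame_seq and the returned dummy pixel value are assumptions for illustration only.

```c
/* Stand-ins for the channel-state query and HI_MPI_VDEC_GetFrame; in
 * the real flow these are HiSilicon MPP calls.  A tiny script of return
 * values simulates "reset twice, then ready; fail once, then a frame". */
static int state_seq[] = {1, 1, 0, 0};   /* 1 = channel in reset state */
static int state_idx;
static int query_reset_state(void) { return state_seq[state_idx++]; }

static int frame_seq[] = {-1, 0};        /* -1 = GetFrame failed */
static int frame_idx;
static int get_frame(void **addr)
{
    static int pixels = 42;              /* pretend decoded image */
    if (frame_seq[frame_idx++] < 0) return -1;
    *addr = &pixels;
    return 0;
}

/* Steps 63)-64): wait out the reset state (the real loop delays 1 ms
 * between queries), then retry GetFrame until it yields the address of
 * the decoded image; step 65) would hand that address to the network
 * transmission task. */
static void *acquire_frame(void)
{
    void *addr = 0;
    for (;;) {
        if (query_reset_state())         /* step 63): still resetting */
            continue;                    /* real code: delay 1 ms     */
        if (get_frame(&addr) == 0)       /* step 64) */
            return addr;                 /* step 65): transmit addr   */
    }
}
```

Note that a GetFrame failure loops back through the state query, exactly the "jump to step 63) if failed" behaviour in the text.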
Although the present invention has been described in terms of the preferred embodiments, it is not limited thereto. Any person skilled in the art can make possible variations and modifications to the technical solution of the present invention using the methods and technical content disclosed above without departing from its spirit and scope; therefore, any simple modifications, equivalent variations and adaptations of the above embodiments made according to the technical substance of the present invention fall within the protection scope of the present invention.
What is not described in detail in the present specification is a well known technology to those skilled in the art.

Claims (7)

1. A soft-hard combined multichannel video synchronous decoding method is characterized by comprising the following steps:
1) Configuring compiling attribute and parameter of the ffmpeg, and transplanting the ffmpeg dynamic library to a Hai Si platform;
2) Creating a network to receive image data task and storing original data into a ring buffer;
3) The original data is taken out from the annular buffer area, and a complete data packet is obtained;
step 3) the obtaining of the complete data packet specifically includes:
321) Taking the original data out of the ring buffer, the specific steps being as follows:
3211 Acquiring the write pointer position putPtr of the current buffer area and the total length cirLen of the data of the buffer area;
3212 Judging whether the read pointer readPtr is consistent with the putPtr, if so, delaying for 1ms, returning to step 3211), otherwise, carrying out the next step;
3213 Judging whether the read data readLen is smaller than the cirLen, if so, continuing the next step, otherwise, step 3217);
3214 Judging whether the difference value between the cirLen and the readLen is larger than or equal to the fixed length frame_len of the protocol frame, if so, continuing the next step, and if not, executing a step 3216);
3215 Taking the current readPtr as a decoded image data first address bufPtr, wherein the length bufLen is frame_len, simultaneously moving a pointer readPtr, moving bytes are frame_len, increasing the read length readLen, and increasing the number of bytes to be frame_len;
3216 Taking the current readPtr as a decoded image data first address bufPtr, wherein the length bufLen is cirLen minus readLen, moving the readPtr at the same time, wherein the moving number is cirLen minus readLen, increasing the reading length readLen, and the number of the added bytes is the difference value between the cirLen and readLen;
3217 Judging whether the sum of the readLen and the frame_len is smaller than the maximum length MAX_LEN of the buffer area, if so, proceeding to the next step, if not, proceeding to step 3219);
3218 Taking the current readPtr as a decoded image data head address bufPtr, wherein the data length bufLen is frame_len, moving a read pointer, the number of bytes moved by the pointer is frame_len, increasing the read data length readLen, and the number of the increased bytes is frame_len;
3219 Taking the current readPtr as a decoded image data head address bufPtr, subtracting the readLen difference value from the data length bufLen of MAX_LEN, moving a read pointer to the buffer head address, and setting the read length readLen to 0;
322 Judging whether the current data length bufLen is larger than 0, if so, continuing the next step, if not, exiting the decoding process of the data, and waiting for the original data transmitted by the next network;
323) Transmitting the data array head address pointer bufPtr and the length bufLen to the soft decoding module, and splitting the data into frames using the library function av_parser_parse2; if a complete image frame data packet can be successfully obtained from the array, continuing to the next step; if not, exiting the decoding process;
324) Saving the data packet length ret of the image frame split from the current data array, subtracting ret from the total data array length bufLen, and moving the head address pointer bufPtr forward by ret bytes;
325 Placing the complete data packet of the image segmented at this time into a queue to be decoded, waiting for the original compressed image data transmitted by the next network, and returning to the step 321);
4) Screening error frames of the complete data packet obtained by the ffmpeg based on a fault tolerance strategy;
the step 4) of screening the error frames of the complete data packet obtained by the ffmpeg specifically includes:
41 Searching for a frame header in the acquired data according to the h265 protocol frame header;
42 Searching for a frame tail in the acquired data according to the h265 protocol frame head;
43 Finding the frame of the frame head and the frame tail as a complete one-frame image frame and recording the type of the image frame, if not, discarding the image frame;
44 Recording the type of the original data required by each image frame to obtain a coding rule;
45 Analyzing the erroneous image data that does not conform to the coding law in step 44);
451 Storing vps frame, sps frame, pps frame and sei frame which are earlier than the current frame in time and are nearest;
452 If the current frame and the frame stored before do not conform to the encoding rule Encode, and the current frame is an unnecessary frame, modifying the first p frame in the encoding rule into an I frame, and combining all p frames in the encoding rule with the vps frame, the sps frame, the pps frame, the sei frame obtained in the step 451) to form image data; completing the error frame screening work to obtain a complete data packet after screening; otherwise, go to step 453);
453 If the current frame does not conform to the encoding rule Encode with the frame stored before and the frame after the current frame is a vps frame, modifying the first p frame of the current encoding rule into an I frame, and combining all p frames in the vps frame, sps frame, pps frame, sei frame and modified encoding rule of step 451) as one image data; completing the error frame screening work to obtain a complete data packet after screening; otherwise, go to step 454);
454 If the current frame accords with the coding rule with the frame stored before, and the current frame is an error frame, discarding the current frame, modifying the first p frame in the coding rule into an I frame, and taking the first two p frames and the vps frame, sps frame, pps frame and sei frame of step 451) to be combined together as image data; completing the error frame screening work to obtain a complete data packet after screening;
5) Transmitting the screened complete data packet to a hard decoding module in the Haishi platform for decoding;
6) And acquiring an image through the decoding channel, and returning the decoded data to the host by using the network.
2. The method for synchronously decoding soft and hard combined multi-channel video according to claim 1, wherein the creating network task in step 2) specifically comprises:
212 Acquiring a receiving ip address and a port in a configuration file;
213 A) creating a network socket;
214 Zero clearing the receiving buffer area, and entering step 215 after waiting to receive the image data sent by the network);
215 Judging whether the length received at this time is greater than zero, if so, proceeding to the next step, otherwise, returning to the step 213);
216) If the protocol frame header of the image frame does not meet the protocol requirements, discarding the image frame; otherwise, storing the data into the ring buffer.
3. The method for synchronously decoding soft and hard combined multi-channel video according to claim 1, wherein step 2) stores the original data into a ring buffer, specifically:
221 Judging whether the data length len received by the network is more than 0, if so, carrying out the next step, and if not, exiting; waiting for the original data transmitted by the next network;
222 Searching the channel number of the current data according to bytes specified by the protocol, and storing the channel number;
223) Judging whether the sum of the existing data length cirLen of the channel's data buffer and len is smaller than the maximum buffer length MAX_LEN; if so, proceeding to the next step, otherwise proceeding to step 226);
224 Copying the data of the network receiving area to the annular buffer area, wherein the copy length is len;
225 Increasing the total length of data in the buffer area by len number, and moving the head address ptr of the stored data by len positions;
226) Copying the data of the network receiving area into the buffer area, wherein the copy length is equal to the difference between MAX_LEN and cirLen;
227 Resetting the write pointer putPtr, moving the pointer to the buffer head address;
228) Calculating the total buffer data length cirLen as the difference between len and the copy length of step 226);
229 Receiving the array from the network again and copying to a buffer, the buffer position moving max_len minus the cirLen byte length, the copy length being cirLen;
2210) Moving the write pointer putPtr forward by cirLen bytes.
4. The method for synchronously decoding soft and hard combined multi-channel video according to claim 1, wherein the step 5) is characterized in that the filtered complete data packet is sent to a hard decoding module in a haisi platform for decoding, and specifically comprises the following steps:
51 Initializing a hard decoding module according to the image parameters and the decoding type, configuring the size of a video data buffer area, and starting the decoding module;
52) Dynamically allocating a buffer area buf for the video image frame data packet;
53 If yes, exiting the processing flow to carry out step 57), otherwise, carrying out the next step;
54) Setting the parameters of the current frame: a stream end identifier, a frame header flag, and a frame end flag;
55 Placing the image frame data into buf;
56 Calling a dynamic library function to send buf data to a decoding module;
57 Monitoring the decoding state of the hard decoding module in real time, if decoding errors occur, soft restarting the hard decoding module and resetting parameters, and if decoding is normal, calling a library function to obtain a decoded image;
58 Calling a library function to stop sending video streams to the decoding module, closing the decoding channel, unbinding the binding relation of each module, and clearing resources.
5. The method for synchronously decoding soft and hard combined multi-channel video according to claim 1, wherein the method comprises the following steps: the step 6) returns the decoded data to the host computer by using the network, specifically:
61 Creating a thread for acquiring the decoded image;
62 Entering an image acquisition loop;
63 Inquiring the working state in the channel, if the working state is the reset state, delaying for 1ms, continuously inquiring the reset state until the working state is not the reset state, and then carrying out the next step;
64 Calling a library function HI_MPI_VDEC_GetFrame to acquire a memory address of the image storage; jump to step 63) if failed, go to the next step if successful;
65 A network transmission task is created to transmit the data held by the memory address in step 64) to the host.
6. A processing apparatus, comprising:
a memory for storing a computer program;
a processor for calling and running the computer program from the memory to perform the method of any of claims 2 to 5.
7. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program or instructions, which when executed, implement the method of any of claims 2 to 5.