CN108449567B - Method and device for transmitting digital video - Google Patents

Method and device for transmitting digital video

Info

Publication number
CN108449567B
Authority
CN
China
Prior art keywords
data
video
audio
conversion
parallel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810247401.5A
Other languages
Chinese (zh)
Other versions
CN108449567A (en)
Inventor
欧俊文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ava Electronic Technology Co Ltd
Original Assignee
Ava Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ava Electronic Technology Co Ltd filed Critical Ava Electronic Technology Co Ltd
Priority to CN201810247401.5A priority Critical patent/CN108449567B/en
Publication of CN108449567A publication Critical patent/CN108449567A/en
Application granted granted Critical
Publication of CN108449567B publication Critical patent/CN108449567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N 7/015: High-definition television systems
    • H04N 21/4305: Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/440236: Processing of video elementary streams involving reformatting operations of video signals by media transcoding
    • H04N 7/10: Adaptations for transmission by electrical cable

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Systems (AREA)

Abstract

This application discloses a method for transmitting digital video, comprising: acquiring raw video source data, an audio source and user auxiliary data, and preprocessing them according to a predetermined conversion mode to generate preprocessed data; performing asynchronous clock domain conversion on the preprocessed data based on a predetermined sampling clock to generate synchronous timing data; generating a video frame by repackaging the synchronous timing data into a frame structure; and scrambling and encoding the video frame, performing parallel-to-serial conversion on the encoded data to generate a serial signal, and sending the serial signal through a cable driver to a coaxial cable for transmission. UHD video can thus be transmitted over long distances at lower cost, solving the problem that existing long-distance transmission of UHD video is expensive.

Description

Method and device for transmitting digital video
Technical Field
The present application relates to the field of audio and video signal transmission, and in particular, to a method and an apparatus for transmitting digital video. The application also relates to a method and apparatus for receiving digital video. The application also relates to a digital video transmission system.
Background
UHD video (Ultra High Definition video) refers to ultra-high-definition video with a resolution of up to 3840x2160, often also referred to as 4K video for short. Thanks to its excellent image clarity and striking visual impact, 4K ultra-high-definition display technology has quickly become a popular high-end technology in the market and is well received by consumers. Among the many audio/video transmission methods, the Serial Digital Interface (SDI) has earned its place thanks to advantages such as distortion-free uncompressed images, low delay and high definition. Short-distance transmission is generally realized with HDMI (High Definition Multimedia Interface) and DP (DisplayPort), which keep audio and video synchronized with essentially no delay and very small loss. However, for distortion-free image transmission over long distances, for example when transmitting 4K ultra-high-definition video, the amount of data is very large, and the currently adopted solution uses SDI together with a suitable equalizer and cable driver for transmission.
The existing technical scheme for transmitting UHD video over long distances requires SDI (Serial Digital Interface) together with a peripheral equalizer and cable driver, which places high demands on the chips that transmit and receive the ultra-high-definition video signal. The resulting cost is very high and greatly hinders the popularization of 4K ultra-high-definition technology.
Disclosure of Invention
The application provides a method for transmitting digital video, which aims to solve the problem that existing long-distance transmission of UHD video is costly.
The present application further provides an apparatus for transmitting digital video.
The present application also provides a method for receiving digital video.
The present application further provides an apparatus for receiving digital video.
The application also relates to a digital video transmission system.
The present application provides a method for transmitting digital video, comprising:
acquiring raw video source data, an audio source and user auxiliary data, and preprocessing them according to a preset conversion mode to generate preprocessed data;
performing asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock to generate synchronous time sequence data;
generating a video frame by performing frame structure repackaging on the synchronous time sequence data;
and scrambling and coding the video frame, performing parallel-serial conversion on the coded data to generate a serial signal, and sending the serial signal to a coaxial cable for transmission through a cable driver.
Optionally, the preprocessing according to the predetermined conversion manner to generate the preprocessed data includes:
carrying out format conversion and repacking on the raw video source data to generate video preprocessing data;
performing I2S serial-parallel conversion and effective data bit expansion on an audio source to generate audio preprocessing data;
performing effective data bit width expansion on the user auxiliary data to generate preprocessing auxiliary data;
wherein the video pre-processing data, the audio pre-processing data and the pre-processing auxiliary data constitute the pre-processing data.
Optionally, the generating, for the preprocessed data, synchronous timing data by performing asynchronous clock domain conversion based on a predetermined sampling clock includes:
adopting a sampling clock suitable for 3G transmission, aiming at the video preprocessing data, performing effective data re-receiving in an FIFO asynchronous clock domain, and compressing a video blanking area to obtain synchronous video data;
adopting a sampling clock suitable for 3G transmission to perform FIFO asynchronous clock domain conversion on the audio preprocessing data and the preprocessing auxiliary data to obtain synchronous audio data and synchronous auxiliary data;
wherein the synchronized video data, the synchronized audio data, and the synchronized auxiliary data constitute the synchronized timing data;
wherein the sampling clock suitable for 3G transmission is a 156.6MHz sampling clock; correspondingly, the coaxial cable is a coaxial cable with a transmission bandwidth of 3Gbps.
Optionally, the generating a video frame by performing frame structure repackaging on the synchronous time series data includes:
repackaging the synchronous video data, the synchronous audio data and the synchronous auxiliary data according to a frame structure with the following line format to obtain the video frame: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data;
and during the repackaging process, marking invalid data in the synchronous audio data and the synchronous auxiliary data by setting the two high bits to zero.
Optionally, the scrambling and encoding for the video frame includes:
scrambling the video frame using the 9th-degree primitive polynomial X^9+X^4+1 to generate scrambled data;
performing non-return-to-zero inverted encoding on the scrambled data using the generator polynomial X+1, converting it into NRZI-encoded data.
Optionally, the format conversion and repackaging of the raw video source data to generate video preprocessing data includes:
converting raw video source data in the 4K2K@30fps YCbCr422 16bit format into video data in the 4K2K@30fps YCbCr420 12bit format;
expanding the effective data bit width of the 4K2K@30fps YCbCr420 12bit video data to 20 bits, generating video preprocessing data whose line effective data is 2304 x 20 bits;
wherein the raw video source data in the 4K2K@30fps YCbCr422 16bit format comes from a 4K2K@30fps video source on a double-edge sampling parallel interface, with an effective resolution of 3840x2160@30 32bit and a sampling clock of 148.5MHz; correspondingly, the video preprocessing data whose line effective data is 2304 x 20 bits has an effective resolution of 2304x2160@30 20bit and a sampling clock of 148.5MHz.
Optionally, the performing I2S serial-to-parallel conversion and valid data bit extension on the audio source to generate audio pre-processing data includes:
performing I2S serial-to-parallel conversion on an I2S audio source with a sampling rate of 48KHz to generate audio parallel data in a specific format, the specific format being a two-channel audio format with a 48KHz sampling rate and 16 bits per sample;
and expanding the effective data bit width of the audio parallel data in the specific format to 20 bits by adding a high-bit valid-data identifier, generating the audio preprocessing data.
Optionally, the performing effective data bit width expansion on the user auxiliary data to generate preprocessed auxiliary data includes:
and expanding the effective data bit width of the 32-bit user auxiliary data into two 20-bit words by adding high-bit valid-data identifiers, generating the preprocessed auxiliary data.
The present application also provides a method for receiving digital video, comprising:
receiving an electric signal of a coaxial cable, performing serial-parallel conversion to generate a parallel signal, and performing descrambling and decoding processing on the parallel signal to generate parallel data;
separating data to be converted from the parallel data according to a preset frame structure, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted;
and performing clock domain crossing conversion on the data to be converted, and restoring and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion mode.
Optionally, the descrambling and decoding process includes:
converting the bit stream of the parallel signal from NRZI to NRZ using the generator polynomial X+1;
descrambling with the 9th-degree primitive polynomial X^9+X^4+1 to generate descrambled data.
Optionally, separating the data to be converted from the parallel data according to the preset frame structure means separating the data according to a frame structure with the following line format: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data; the separating includes:
acquiring a data segment between a start header and an end header in an effective line of a video frame, and separating effective video data from the data segment to serve as the video data to be converted;
separating the effective audio parallel data with the high-order effective identifier from the data segment as the audio parallel data to be converted;
the auxiliary data with the high-order effective identifier is separated from the data segment as the auxiliary data to be converted.
Optionally, the performing clock domain crossing conversion on the data to be converted, and generating corresponding video data, I2S audio data, and user auxiliary data by restoring based on a preset conversion manner includes:
restoring the video data to be converted to the YCbCr420 format by expanding the effective data bit width from 20 bits to 32 bits, converting the YCbCr420 format into the YCbCr422 format, performing FIFO clock-domain-crossing conversion on the YCbCr422 video data with a sampling clock corresponding to 4K2K@30fps, expanding the blanking region, and generating 4K2K@30fps 16bit video data based on a standard 4K2K frame structure;
obtaining effective audio parallel data from the audio parallel data to be converted according to a high-order identifier, performing FIFO clock domain crossing conversion based on a preset audio sampling bit clock, and generating serial I2S audio data through parallel-serial conversion;
and acquiring effective auxiliary data from the auxiliary data to be converted according to the high-order identification, wherein the effective auxiliary data is used as user auxiliary data.
Optionally, the electrical signal is an electrical signal transmitted over a 3G coaxial cable; correspondingly, the parallel data is valid data with a resolution of 2308 x 2160, a width of 20 bits and a sampling clock of 156.6MHz; correspondingly, for the video data to be converted, bit width expansion and video format conversion generate data with an effective resolution of 1920 x 2160@30 32bit and a sampling clock of 156.6MHz, FIFO clock-domain-crossing conversion is then performed with the 148.5MHz sampling clock corresponding to 4K2K@30fps, the blanking region is expanded, and video data with a standard 4K2K frame structure is generated.
The present application also provides an apparatus for transmitting digital video, comprising:
the data preprocessing unit is used for acquiring raw video source data, an audio source and user auxiliary data, and preprocessing them according to a preset conversion mode to generate preprocessed data;
the synchronous time sequence unit is used for carrying out asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock to generate synchronous time sequence data;
a frame repackaging unit for generating a video frame by performing frame structure repackaging on the synchronous timing data;
and the coding and transmitting unit is used for scrambling and coding the video frame, performing parallel-serial conversion on the coded data to generate a serial signal, and transmitting the serial signal to a coaxial cable for transmission through a cable driver.
The present application also provides an apparatus for receiving digital video, comprising:
the receiving and decoding unit is used for receiving the electric signal of the coaxial cable, performing serial-parallel conversion to generate parallel signals, and performing descrambling and decoding processing on the parallel signals to generate parallel data;
the data separation unit is used for separating data to be converted from the parallel data according to a preset frame structure, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted;
and the audio and video restoring unit is used for performing cross-clock domain conversion on the data to be converted and restoring and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion mode.
The present application also provides a digital video transmission system, comprising: said means for transmitting digital video, and said means for receiving digital video.
Compared with the prior art, the method has the following advantages:
according to the method and the device for transmitting digital video provided by the application, raw video source data, an audio source and user auxiliary data are acquired and preprocessed according to a preset conversion mode to generate preprocessed data; asynchronous clock domain conversion is performed on the preprocessed data based on a preset sampling clock to generate synchronous timing data; a video frame is generated by repackaging the synchronous timing data into a frame structure; the video frame is scrambled and encoded, the encoded data undergoes parallel-to-serial conversion to generate a serial signal, and the serial signal is sent through a cable driver to a coaxial cable for transmission. UHD video is thereby converted and packaged so that it can be transmitted over long distances without distortion on 3G-bandwidth hardware, which solves the problem that long-distance transmission of UHD video is costly.
Drawings
Fig. 1 is a process flow diagram of a method for transmitting digital video according to an embodiment of the present application;
fig. 2 is a schematic diagram of a system deployed at a transmitting end according to an embodiment of the present application;
fig. 3 is a specific audio/video data format conversion and data flow diagram of a system deployed at a transmitting end in an embodiment of the present application;
fig. 4 is a schematic diagram of a repackaging frame structure included in an actual deployment system for a method for transmitting digital video according to an embodiment of the present application.
FIG. 5 is a process flow diagram of a method for receiving digital video as provided herein;
fig. 6 is a flowchart illustrating a decoder included in a method for receiving digital video according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a system deployed at a receiving end according to an embodiment of the present application;
fig. 8 is a specific audio/video data format conversion and data flow diagram of a system deployed at a receiving end in the embodiment of the present application;
FIG. 9 is a schematic diagram of an apparatus for transmitting digital video according to the present application;
FIG. 10 shows a schematic diagram of an apparatus for receiving digital video according to the present application;
FIG. 11 is a schematic diagram of a digital video transmission system provided in accordance with the present application;
fig. 12 is a diagram of one practical deployment of a system for transmitting digital video provided herein.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The present application provides a method for transmitting digital video. The application also relates to an apparatus for transmitting digital video. The present application also provides a method for receiving digital video. The present application further provides an apparatus for receiving digital video. The application also relates to a digital video transmission system. Details are described in the following examples one by one.
One embodiment of the present application provides a method for transmitting digital video.
A method for transmitting digital video according to an embodiment of the present application is described below with reference to fig. 1 to 4. Fig. 1 is a flowchart of a process for transmitting digital video according to an embodiment of the present disclosure; fig. 2 is a schematic diagram of a system deployed at a transmitting end according to an embodiment of the present application; fig. 3 is a specific audio/video data format conversion and data flow diagram of a system deployed at a transmitting end in an embodiment of the present application; fig. 4 is a schematic diagram of a repackaging frame structure included in an actual deployment system for a method for transmitting digital video according to an embodiment of the present application.
Undistorted long-distance transmission of ultra-high-definition video is very important for popularizing 4K ultra-high-definition technology, and the method for transmitting digital video is described in the embodiment of the application by taking transmission of UHD video over a 3G bandwidth as an example. In the embodiment of the application, a serializer/deserializer (SERDES), now widely available in field-programmable gate arrays (FPGAs), is used to transmit a 4K2K@30fps ultra-high-definition video signal without distortion over a 3G bandwidth, carrying UHD video on 3G-bandwidth hardware and increasing the transmission distance; the FPGA uses devices from the manufacturers XILINX and LATTICE. This breaks through the short transmission distance of existing ultra-high-definition video signals: when a rate of 3.132 Gbit/s is selected for transmission, the signal can be transmitted stably over a 100-meter SDI cable. In addition, a pair of transmitting and receiving ASIC chips (Application-Specific Integrated Circuits) for carrying the 4K2K@30fps ultra-high-definition video signal can be customized; the customized ASIC chips have user interfaces for video, audio and user auxiliary data and, since no higher hardware performance is required, reduce cost compared with an FPGA, so that ultra-high-definition video technology can be popularized in cost-sensitive consumer applications such as video surveillance and lecture recording and broadcasting.
The method for transmitting digital video shown in fig. 1 comprises: steps S101 to S104.
Step S101, acquiring raw video source data, an audio source and user auxiliary data, and preprocessing them according to a preset conversion mode to generate preprocessed data.
The purpose of this step is to preprocess the raw video source data, the audio source and the user auxiliary data in order to prepare the preprocessed data for the subsequent asynchronous clock domain conversion. Preferably, the raw video source data and the audio source are audio/video information recorded using digital technology; specifically, they are digital signals obtained and stored after the analog signals have been converted by a video capture card.
In the embodiment of the application, UHD digital audio and video are transmitted. A high-resolution UHD image is captured by a 4K camera, and each frame is converted into a UHD video format and stored according to a certain rule, yielding the raw UHD video source data; an analog audio signal is obtained at a certain sampling frequency through an audio acquisition circuit, such as a microphone, converted into a digital signal and stored as I2S audio data. The user auxiliary data refers to data that plays an auxiliary role in the processing of the main audio/video data and does not form the main body of that processing, for example data used for communication between the transmitting-end chip and the receiving-end chip while video is being transmitted.
Specifically, the preprocessing according to the predetermined conversion mode to generate the preprocessed data includes:
carrying out format conversion and repacking on the raw video source data to generate video preprocessing data;
performing I2S serial-parallel conversion and effective data bit expansion on an audio source to generate audio preprocessing data;
performing effective data bit width expansion on the user auxiliary data to generate preprocessing auxiliary data;
wherein the video pre-processing data, the audio pre-processing data and the pre-processing auxiliary data constitute the pre-processing data.
The raw video source data in the embodiment of the application is video data in the 4K2K@30fps YCbCr422 16bit format; it comes from a 4K2K@30fps video source on a double-edge sampling parallel interface, with an effective resolution of 3840x2160@30 32bit and a sampling clock of 148.5MHz. Performing format conversion and repackaging on the raw video source data to generate video preprocessing data specifically includes the following processing:
converting the raw video source data in the 4K2K@30fps YCbCr422 16bit format into video data in the 4K2K@30fps YCbCr420 12bit format;
and expanding the effective data bit width of the 4K2K@30fps YCbCr420 12bit video data to 20 bits, generating video preprocessing data with 2304 x 20 bits of line effective data.
The video preprocessing data obtained through the above processing has an effective resolution of 2304x2160@30 20bit and a sampling clock of 148.5MHz, the line effective data being 2304 x 20 bits.
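The arithmetic behind this repacking is straightforward: 3840 pixels x 12 bits = 46080 bits per active line, which is exactly 2304 words of 20 bits. The sketch below is a minimal illustration of that regrouping, assuming the 12-bit values are simply concatenated in order into one continuous bit stream; the exact packing order used by the hardware is not specified in the patent.

```python
def repack_line(pixels_12bit):
    """Regroup one active line of 12-bit YCbCr420 values into 20-bit words.

    Assumes plain MSB-first concatenation of the 12-bit values; the actual
    hardware packing order is not specified in the patent.
    """
    assert len(pixels_12bit) == 3840            # active samples per 4K2K line
    stream = 0
    for p in pixels_12bit:                      # build one continuous bit stream
        stream = (stream << 12) | (p & 0xFFF)
    total_bits = 3840 * 12                      # 46080 bits = 2304 x 20
    words = [(stream >> (total_bits - 20 * (i + 1))) & 0xFFFFF
             for i in range(total_bits // 20)]
    assert len(words) == 2304                   # the 2304 x 20bit line effective data
    return words
```

However the bits are actually ordered, the per-line payload stays at 46080 bits; only the word width changes from 12 to 20 bits, which is what lets the line be carried as 2304 valid 20-bit words.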
The audio source in the embodiment of the present application is I2S audio data with a 48KHz sampling rate. I2S (Inter-IC Sound) is a bus standard for transmitting audio data between digital audio devices and is widely used in various multimedia systems; the clock signal and the data signal travel on separate wires, which avoids distortion caused by timing differences, so no dedicated anti-jitter equipment is needed. Specifically, performing I2S serial-to-parallel conversion and valid-data bit extension on the audio source to generate audio preprocessing data includes:
performing I2S serial-to-parallel conversion on an I2S audio source with a 48KHz sampling rate to generate audio parallel data in a specific format, the specific format being a two-channel audio format with a 48KHz sampling rate and 16 bits per sample;
and expanding the effective data bit width of the audio parallel data in the specific format to 20 bits by adding a high-bit valid-data identifier, generating the audio preprocessing data.
In addition, in this embodiment of the application, effective data bit width expansion is also performed on the user auxiliary data to generate preprocessed auxiliary data, which specifically includes:
expanding the effective data bit width of the 32-bit user auxiliary data into two 20-bit words by adding high-bit valid-data identifiers, generating the preprocessed auxiliary data.
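A minimal sketch of this bit-width expansion is given below. The marker-bit layout follows the composition stated in the data-flow description further on ({1, 0, high eight bits, 1, 0, low eight bits} for a 16-bit value); treating the 32-bit auxiliary word as two 16-bit halves widened the same way is an assumption, since the patent only says it becomes two 20-bit words.

```python
def widen_16_to_20(value16):
    """Expand a 16-bit value to 20 bits by inserting high-bit valid-data
    identifiers, giving the layout {1, 0, high byte, 1, 0, low byte}."""
    hi, lo = (value16 >> 8) & 0xFF, value16 & 0xFF
    return (0b10 << 18) | (hi << 10) | (0b10 << 8) | lo

def widen_audio_sample(sample16):
    # one 48 kHz, 16-bit channel sample -> one 20-bit word
    return widen_16_to_20(sample16)

def widen_aux_word(aux32):
    # 32-bit user auxiliary data -> two 20-bit words
    # (assumed: high half first, each half widened like an audio sample)
    return [widen_16_to_20((aux32 >> 16) & 0xFFFF),
            widen_16_to_20(aux32 & 0xFFFF)]
```

Because every valid word starts with the bits 1,0, a word whose two high bits are zero can only be an empty slot, which is how the frame repackaging step later marks positions that carry no fresh audio or auxiliary data.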
Step S102, aiming at the preprocessed data, carrying out asynchronous clock domain conversion based on a preset sampling clock to generate synchronous time sequence data.
The purpose of this step is to perform asynchronous clock domain conversion on the preprocessed data obtained in step S101 to generate synchronous timing data.
The acquired raw video source data, audio source and user auxiliary data may come from different IC chips and have different clock sources; their clock frequencies and phases differ across clock domains. The audio and video data obtained through preprocessing therefore need to be synchronized in timing so that they can be transmitted accurately across the different clock domains.
In this embodiment of the application, for the preprocessed data, performing asynchronous clock domain conversion based on a predetermined sampling clock to generate synchronous timing data specifically includes:
adopting a sampling clock suitable for 3G transmission, aiming at the video preprocessing data, performing effective data re-receiving in an FIFO asynchronous clock domain, and compressing a video blanking area to obtain synchronous video data;
adopting a sampling clock suitable for 3G transmission to perform FIFO asynchronous clock domain conversion on the audio preprocessing data and the preprocessing auxiliary data to obtain synchronous audio data and synchronous auxiliary data;
wherein the synchronized video data, the synchronized audio data, and the synchronized auxiliary data constitute the synchronized timing data;
wherein the sampling clock suitable for 3G transmission is a 156.6MHz sampling clock; correspondingly, the coaxial cable is a 3G coaxial cable, that is, a coaxial cable with a transmission bandwidth of 3Gbps, for example a 75-ohm coaxial cable, which can be used to transmit uncompressed high-definition digital video signals.
FIFO asynchronous clock domain conversion means that a FIFO (First In First Out) buffer is used to synchronize the timing of data between multiple clock domains whose periods and phases are independent of one another.
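Conceptually, the conversion amounts to a dual-clock FIFO into which the source domain writes only valid words and from which the 156.6MHz domain reads whenever data is available; because blanking samples are never written, the blanking seen on the read side shrinks. The toy model below illustrates only this behaviour; a real dual-clock FIFO also needs Gray-coded pointers and a depth chosen so it never overflows within a line, which is omitted here.

```python
from collections import deque

class CdcFifo:
    """Toy model of the FIFO used for asynchronous clock domain conversion."""

    def __init__(self):
        self.buf = deque()

    def write(self, word, valid):
        # called in the source clock domain; blanking samples are not written
        if valid:
            self.buf.append(word)

    def read(self):
        # called once per 156.6 MHz clock; None stands for output blanking
        return self.buf.popleft() if self.buf else None
```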
And step S103, generating a video frame by performing frame structure repackaging on the synchronous time sequence data.
The purpose of this step is to encapsulate the audio/video data obtained by the processing in step S102 into video frames according to a frame structure of a predetermined format.
In the embodiment of the present application, a video frame is encapsulated according to a frame structure shown in a line format in fig. 4. Specifically, the generating a video frame by performing frame structure repackaging on the synchronous time series data includes:
repackaging the synchronous video data, the synchronous audio data and the synchronous auxiliary data according to a frame structure with the following line format to obtain the video frame: end header 28, blanking area 29, start header 30, valid video data 31, audio data 32, user auxiliary data 33; and transmitting the encapsulated video frames by a binary data code stream.
It is noted that during the repackaging process, invalid data in the synchronized audio data and the synchronized auxiliary data is marked by setting the two high bits to zero.
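A sketch of how one line of the repackaged frame could be assembled in the stated order is shown below. The header words, the blanking length and the number of audio/auxiliary slots are placeholders, since the patent does not give their concrete values; empty audio or auxiliary slots are filled with all-zero words so that their two high bits are zero.

```python
def pack_line(video_words, audio_words, aux_words,
              end_header, start_header, blank_len, audio_slots, aux_slots):
    """Assemble one line of 20-bit words in the order: end header, blanking,
    start header, valid video data, audio data, user auxiliary data.

    end_header / start_header are lists of marker words whose values the
    patent does not specify; unused audio/aux slots become zero words.
    """
    audio = list(audio_words) + [0] * (audio_slots - len(audio_words))
    aux = list(aux_words) + [0] * (aux_slots - len(aux_words))
    return (list(end_header) + [0] * blank_len + list(start_header)
            + list(video_words) + audio + aux)
```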
And step S104, scrambling and coding the video frame, performing parallel-serial conversion on the coded data to generate a serial signal, and sending the serial signal to a coaxial cable for transmission through a cable driver.
The purpose of this step is to scramble, encode and convert the video frame generated in step S103 into a serial signal, which is transmitted via a coaxial cable.
In this embodiment of the present application, the primitive polynomial is used to scramble the video frame, and the generator polynomial is used to perform non-return-to-zero inverse coding on the scrambled data to ensure reliability of transmission, which specifically includes the following processing:
scrambling the video frame using the 9th-degree primitive polynomial X^9+X^4+1 to generate scrambled data;
performing non-return-to-zero inverted (NRZI) encoding on the scrambled data using the generator polynomial X+1, converting it into NRZI-encoded data.
In addition, the polynomial used for scrambling and the polynomial used for encoding are not limited to those given above, although those polynomials are preferred. The scrambling and encoding at the transmitting end may be performed in either order, scrambling first and then encoding, or encoding first and then scrambling; correspondingly, the receiving end extracts the pre-encoding data by applying the inverse operations in the reverse order, with the descrambling polynomial identical to the scrambling polynomial and the decoding polynomial identical to the encoding polynomial. For example, if the transmitting end encodes before scrambling, the receiving end descrambles before decoding.
In the embodiment of the application, an M-sequence generator is constructed from the primitive polynomial, and the generated M sequence is used to scramble the audio and video data. Scrambling is needed because, in digital communication, long runs of consecutive '0's or '1's may appear in the encoded information stream, upsetting the balance of '0' and '1' codes and affecting the establishment and maintenance of bit synchronization; a pseudo-random sequence is therefore applied to the input code stream, and M-sequence scrambling is the common choice.
The NRZI code is a non-return-to-zero code whose level takes only a positive or a negative value, with no zero level; a reversal of the level represents a logic 0, and keeping the same level as before represents a logic 1.
The scrambled and encoded audio/video data then undergoes parallel-to-serial conversion, turning the parallel signal into a serial signal, for example with a serializer whose designed transmission bandwidth is 3.2 to 3.5 Gbit/s; after serialization, the serial signal is sent through a cable driver to the 3G coaxial cable for transmission.
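As a bit-level illustration of the scrambling and encoding named above, the sketch below implements a self-synchronizing scrambler for the primitive polynomial X^9+X^4+1 and an NRZI conversion derived from the generator polynomial X+1 (each output bit is the XOR of the current scrambled bit and the previous output bit, the same relation the fig. 6 decoder inverts; the opposite transition polarity mentioned in the prose works the same way with the output inverted). It operates on a flat list of bits rather than on 20-bit parallel words, so it is a conceptual model of the encoder, not the word-parallel hardware.

```python
def scramble(bits):
    """Self-synchronizing scrambler for G(x) = x^9 + x^4 + 1:
    out[n] = in[n] ^ out[n-4] ^ out[n-9]."""
    out, hist = [], [0] * 9            # last 9 scrambled bits, hist[0] most recent
    for b in bits:
        s = b ^ hist[3] ^ hist[8]      # taps at delays 4 and 9
        out.append(s)
        hist = [s] + hist[:-1]
    return out

def descramble(bits):
    """Feed-forward inverse of scramble(): d[n] = s[n] ^ s[n-4] ^ s[n-9]."""
    out, hist = [], [0] * 9            # last 9 received (scrambled) bits
    for s in bits:
        out.append(s ^ hist[3] ^ hist[8])
        hist = [s] + hist[:-1]
    return out

def nrzi_encode(bits, level=0):
    """NRZI from G(x) = x + 1: new level = scrambled bit XOR previous level."""
    out = []
    for b in bits:
        level ^= b
        out.append(level)
    return out
```

With matching initial register states (all zeros here), descramble(scramble(bits)) returns the original bits; because the scrambler is self-synchronizing, a receiver that starts from an unknown state still locks on after the first nine bits.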
The method for transmitting digital video according to the embodiment of the present application is mainly deployed at a UHD video transmitting end, see fig. 2 and fig. 3, and fig. 2 is a schematic diagram of a system deployed at the transmitting end according to the embodiment of the present application; fig. 3 is a specific audio/video data format conversion and data flow diagram of a system deployed at a transmitting end in the embodiment of the present application.
Fig. 2 shows the system deployed at the transmitting end in the embodiment of the present application, which serves as the UHD video transmitting end and includes: a video data preprocessing unit 3, an asynchronous clock domain conversion unit 4, an audio serial-to-parallel conversion unit 5, a frame structure repackaging unit 6, an encoder unit 7 and a serializer sending unit 8. The video data preprocessing unit 3 performs format conversion on the raw video source data and repacks it; the audio serial-to-parallel conversion unit 5 converts the I2S audio source into parallel data; the asynchronous clock domain conversion unit 4 performs asynchronous clock domain conversion and video blanking region compression on the parallel audio data from the audio serial-to-parallel conversion unit 5, the user auxiliary data, and the video data repackaged by the video data preprocessing unit 3; the frame structure repackaging unit 6 integrates the video, audio and user data converted by the asynchronous clock domain conversion unit and repackages them into a frame structure; the encoder unit 7 scrambles and encodes the newly encapsulated video frame; and the serializer sending unit 8 converts the encoded parallel data into a serial signal and sends it through a cable driver to a coaxial cable with 3G bandwidth. Specifically, the format conversion performed by the video data preprocessing unit 3 converts the 4K2K@30fps video source format from YCbCr422 16bit to YCbCr420 12bit; the video data preprocessing unit 3 then repackages the format-converted video data by expanding its effective data bit width to 20 bits, changing the line effective data from 3840 x 12 bits to 2304 x 20 bits. The asynchronous clock domain conversion unit 4 performs asynchronous clock domain conversion on the repackaged video data and compresses the video blanking region: a sampling clock suitable for 3G transmission is selected to re-receive the valid data in a different clock domain while the video blanking region is cut down. For the parallel audio data from the audio serial-to-parallel conversion unit and for the user auxiliary data, the asynchronous clock domain conversion unit 4 first expands the bit width into several 20-bit data units by adding high-bit valid-data identifiers, and then selects the corresponding sampling clock to perform the clock domain conversion.
Fig. 3 is a specific audio/video data format conversion and data flow diagram of a system deployed at a transmitting end in an embodiment of the present application, including:
the 4K2K@30fps video source from the double-edge sampling parallel interface has an effective resolution of 3840x2160@30 32bit and a sampling clock of 148.5MHz; after YCbCr422-to-YCbCr420 conversion 14 and effective data bit width expansion 18, the output video has an effective resolution of 2304x2160@30 20bit and a sampling clock of 148.5MHz;
the FIFO asynchronous domain conversion and line blanking compression 19 uses a 156.6MHz sampling clock to receive the valid data while compressing the line blanking region: Htotal (the total number of horizontal samples per line) changes from 4400 to 2320, and Vtotal (the total number of lines per frame) remains unchanged;
the I2S audio source with a 48K sampling rate passes through I2S serial-to-parallel signal conversion 15, giving left and right channels at a 48K sampling rate with 16 bits per sample; the bit width is expanded from 16 to 20 bits by adding high-bit valid-data identifiers, the composition of the widened data being, bit by bit, {1, 0, the high eight bits of the valid channel data, 1, 0, the low eight bits of the valid channel data}; finally, FIFO asynchronous domain conversion 17 is performed with a 156.6MHz sampling clock for the asynchronous clock domain conversion;
the 32-bit user auxiliary data is processed to two 20-bit widths by increasing the high-bit effective data identifier bit width expansion 16, and the FIFO asynchronous domain conversion 17 is carried out by using a clock with the sampling frequency of 156.6Mhz for the asynchronous clock domain conversion.
frame structure repackaging 20 repackages the video data, audio data and user auxiliary data output by the FIFO asynchronous domain conversion; because the original effective data rate of the audio data and the user auxiliary data is lower than the 156.6MHz sampling rate, slots that carry no valid audio or auxiliary data are encapsulated with their two high bits set to zero;
the encoding serializer 21 scrambles, encodes and parallel-to-serial converts the repackaged frames (effective resolution 2308 x 2160, 20 bits, sampling clock 156.6MHz) and sends them through a cable driver to the coaxial cable at a bandwidth of 3.132 Gbit/s.
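The figures quoted in this data flow are mutually consistent, which a few lines of arithmetic confirm (assuming the standard 4K2K@30fps total line count Vtotal = 2250, which the text says is left unchanged):

```python
# Active payload per line: 3840 YCbCr420 samples x 12 bit = 2304 words x 20 bit.
assert 3840 * 12 == 2304 * 20 == 46080

# Word rate after blanking compression: Htotal x Vtotal x frame rate.
word_rate = 2320 * 2250 * 30              # 156,600,000 words/s -> 156.6 MHz clock
assert word_rate == 156_600_000

# Serial line rate: 20 bits per word.
assert word_rate * 20 == 3_132_000_000    # 3.132 Gbit/s on the 3G coaxial cable
```

The line period is preserved as well: 4400 samples at the 297 MHz effective sample rate of the double-edge-sampled source (148.5 MHz on both edges) and 2320 words at 156.6 MHz both give roughly 14.8 microseconds per line.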
In accordance with embodiments of a method for transmitting digital video provided herein, and based thereon, a method for receiving digital video is also provided.
A method for receiving digital video according to an embodiment of the present application is described below with reference to fig. 5 to 8. Fig. 5 is a process flow diagram of a method for receiving digital video provided herein; fig. 6 is a flowchart illustrating a decoder included in a method for receiving digital video according to an embodiment of the present application; FIG. 7 is a schematic diagram of a system deployed at a receiving end according to an embodiment of the present application; fig. 8 is a specific audio/video data format conversion and data flow diagram of a system deployed at a receiving end in the embodiment of the present application.
Since the present embodiment is based on the above embodiments, the description is relatively simple, and the relevant portions should be referred to the corresponding descriptions of the above embodiments.
The method for receiving digital video shown in fig. 5 includes:
step S501, receiving an electrical signal of a coaxial cable, performing serial-to-parallel conversion to generate a parallel signal, and performing descrambling and decoding processing on the parallel signal to generate parallel data.
The purpose of this step is to perform serial-to-parallel conversion and descrambling decoding on the received audio/video electrical signals to obtain parallel data, which is generally processed according to the reverse process of the processing process of the serialization and scrambling coding before being sent to the coaxial cable for transmission.
In the embodiment of the application, the received electrical signal, carrying the NRZI-coded UHD video transmitted over the 3G coaxial cable, is a serial signal; it is converted into a parallel signal by a deserializer whose designed transmission bandwidth is 3.2 Gbit/s to 3.5 Gbit/s. The deserialized parallel signal then enters the descrambling and decoding process, which uses the same polynomials as the scrambling and encoding process at the transmitting end, applied in the inverse order, in order to extract the pre-encoding data. The process specifically includes:
converting the bit stream of the parallel signal from NRZI to NRZ using the generator polynomial X+1;
descrambling with the 9th-degree primitive polynomial X^9+X^4+1 to generate descrambled data.
It should be noted that if the transmitting end scrambles first and then encodes, the receiving end extracts the pre-encoding data by decoding first and then descrambling; if the transmitting end encodes first and then scrambles, the receiving end descrambles first and then decodes. In either case the descrambling polynomial is kept identical to the scrambling polynomial, and the decoding polynomial is kept identical to the encoding polynomial.
Fig. 6 shows the flow of the decoder used in the embodiment of the present application; the specific implementation steps include:
splicing the deserialized parallel data d[19:0] with the data prev_d[19:0] latched in the previous clock period into {d[18:0], prev_d[19]}, obtaining the data add_d[19:0];
performing the NRZI-to-NRZ conversion by XORing d[19:0] with add_d[19:0], obtaining nrz[19:0];
splicing nrz[19:0] with the data prev_nrz[19:0] latched in the previous clock period into {nrz[19:0], prev_nrz[19:11]}, obtaining the data add_nrz[28:0]; then, for i from 0 to 19, XORing add_nrz[i], add_nrz[i+4] and add_nrz[i+9] bit by bit, obtaining the descrambled and decoded data desc_q[19:0].
NRZ (Non-Return-to-Zero) is an encoding of a digital signal in which, for the full duration of a symbol, current either flows or does not flow (unipolar coding), or a positive or a negative current flows (bipolar coding), to represent the 0 or 1 of the digital signal; each coded bit occupies the entire symbol width.
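Transcribed directly into a word-parallel form, the three decoder steps above look as follows; the function handles one 20-bit word per clock period, and the caller latches the previous input word and the previous NRZ word, exactly as described for fig. 6.

```python
def decode_word(d, prev_d, prev_nrz):
    """One step of the fig. 6 decoder: NRZI -> NRZ conversion, then descrambling.

    d        : current 20-bit deserialized word
    prev_d   : word received in the previous clock period
    prev_nrz : NRZ word produced in the previous clock period
    Returns (desc_q, nrz); the caller keeps d and nrz for the next call.
    """
    # add_d[19:0] = {d[18:0], prev_d[19]}
    add_d = ((d & 0x7FFFF) << 1) | ((prev_d >> 19) & 1)
    # NRZI -> NRZ: XOR each bit with the previously received bit
    nrz = d ^ add_d
    # add_nrz[28:0] = {nrz[19:0], prev_nrz[19:11]}
    add_nrz = (nrz << 9) | ((prev_nrz >> 11) & 0x1FF)
    # desc_q[i] = add_nrz[i] ^ add_nrz[i+4] ^ add_nrz[i+9], for i = 0..19
    desc_q = 0
    for i in range(20):
        bit = ((add_nrz >> i) ^ (add_nrz >> (i + 4)) ^ (add_nrz >> (i + 9))) & 1
        desc_q |= bit << i
    return desc_q, nrz
```

Threading the previous-word state from call to call reproduces the descrambled 20-bit stream word by word.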
Step S502, according to a preset frame structure, separating data to be converted from the parallel data, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted.
The purpose of this step is to separate the video data, audio data and auxiliary data from the parallel data obtained after descrambling and decoding for further restoration to the data contained in the original UHD video at the time of transmission.
In the embodiment of the present application, the pre-encoding data obtained after descrambling and decoding is video frame data whose frame structure, expressed as a line format, contains: an end header, a blanking region, a start header, valid video data, audio data and user auxiliary data, consistent with the video frame structure before encoding at the transmitting end. Specifically, separating data from the parallel data according to the preset frame structure described by this line format includes:
acquiring a data segment between a start header and an end header in an effective line of a video frame, and separating effective video data from the data segment to serve as the video data to be converted;
separating the effective audio parallel data with the high-order effective identifier from the data segment as the audio parallel data to be converted;
the auxiliary data with the high-order effective identifier is separated from the data segment as the auxiliary data to be converted.
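For the audio and auxiliary slots, this separation reduces to checking the two marker bits inserted at the transmitting end; a minimal sketch, assuming the {1, 0, high byte, 1, 0, low byte} layout and all-zero words in empty slots:

```python
def extract_valid_samples(slot_words):
    """Keep only slots whose high-bit valid-data identifier is present and
    recover the original 16-bit value from each 20-bit word."""
    samples = []
    for w in slot_words:
        if (w >> 18) & 0b11 == 0b10:             # valid slot: two high bits are 1,0
            hi, lo = (w >> 10) & 0xFF, w & 0xFF
            samples.append((hi << 8) | lo)       # reassemble the 16-bit value
    return samples
```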
Step S503, performing clock domain crossing conversion on the data to be converted, and generating corresponding video data, I2S audio data and user auxiliary data by restoring based on a preset conversion mode.
The purpose of this step is to restore the video data, audio data and auxiliary data separated from the video frames to the data contained in the original UHD video acquired before transmission, such as the raw video source data, audio source and user auxiliary data.
In the embodiment of the application, the data contained in the captured original UHD video underwent asynchronous clock domain conversion, format conversion, bit width adjustment and similar processing at the transmitting end; the receiving end therefore performs the corresponding inverse processing. Performing clock-domain-crossing conversion on the data to be converted and restoring the corresponding video data, I2S audio data and user auxiliary data based on the preset conversion mode specifically includes:
restoring the video data to be converted to the YCbCr420 format by expanding the effective data bit width from 20 bits to 32 bits, converting the YCbCr420 format into the YCbCr422 format, performing FIFO clock-domain-crossing conversion on the YCbCr422 video data with a sampling clock corresponding to 4K2K@30fps, expanding the blanking region, and generating 4K2K@30fps 16bit video data based on a standard 4K2K frame structure;
obtaining effective audio parallel data from the audio parallel data to be converted according to a high-order identifier, performing FIFO clock domain crossing conversion based on a preset audio sampling bit clock, and generating serial I2S audio data through parallel-serial conversion;
and acquiring effective auxiliary data from the auxiliary data to be converted according to the high-order identification, wherein the effective auxiliary data is used as user auxiliary data.
For example, the electrical signal received from the 3G coaxial cable is obtained by scrambling, encoding and serializing video frames of 2308 x 2160, 20 bits, with a 156.6MHz sampling clock;
correspondingly, the parallel data obtained after deserializing, descrambling and decoding is valid data with a resolution of 2308 x 2160, a width of 20 bits and a sampling clock of 156.6MHz;
correspondingly, for the video data to be converted, bit width expansion and video format conversion generate data with an effective resolution of 1920 x 2160@30 32bit and a sampling clock of 156.6MHz; FIFO clock-domain-crossing conversion is then performed with the 148.5MHz sampling clock corresponding to 4K2K@30fps, the blanking region is expanded, and video data with a standard 4K2K frame structure is generated.
In addition, in order to reduce UHD transmitting and receiving cost, a pair of transmitting and receiving ASIC chips for carrying the 4K2K@30fps ultra-high-definition video signal can be customized. The customized chips provide user interfaces for video, audio and user auxiliary data, adopt NRZI coding for reliable transmission, and transmit over a 3G coaxial cable. Because no higher hardware performance is required, the cost performance improves over an FPGA of the same grade, the transmission distance of the 4K2K@30fps ultra-high-definition video signal can exceed 100 meters, and the solution is suitable for popularization in cost-sensitive scenarios such as the security monitoring industry.
The method for receiving digital video according to the embodiment of the present application is mainly deployed at a UHD video receiving end, see fig. 7 and 8, and fig. 7 is a schematic diagram of a system deployed at the receiving end according to the embodiment of the present application; fig. 8 is a specific audio/video data format conversion and data flow diagram of a system deployed at a receiving end in the embodiment of the present application.
Fig. 7 shows the system deployed at the receiving end in the embodiment of the present application, which serves as the UHD video receiving end and includes: a deserializer receiving unit 9, a decoder unit 10, a data separation unit 11, a video data format restoring unit 12, and an audio data parallel-to-serial conversion unit 13. The deserializer receiving unit 9 deserializes the electrical signal of the coaxial cable after it has passed through the equalizer; the decoder unit 10 descrambles and decodes the deserialized parallel data; the data separation unit 11 separates the deserialized and descrambled parallel signal into video data, audio data and user auxiliary data; the video data format restoring unit 12 restores the data structure of the packed video signal separated by the data separation unit 11 and rebuilds the frame structure; and the audio data parallel-to-serial conversion unit 13 converts the valid parallel audio data separated by the data separation unit 11 into I2S audio data. The data separation unit 11 extracts the data between the start header and the end header in each valid line of the video frame and separates out the video data; valid audio data and user auxiliary data carrying the high-order valid identifier are likewise extracted from the valid lines of the video frame.
In the embodiment of the present application shown in fig. 8, a specific audio/video data format conversion and data flow diagram of a system is deployed at a receiving end, and the specific audio/video data format conversion and data flow diagram includes:
the deserializing decoder 22 deserializes and descrambles the 3G electric signal of the coaxial cable after the electric signal is received by the equalizer;
the data separation 23 extracts valid video data, audio data and auxiliary data from the video frames produced by the deserializing decoder 22, for further restoration into the original UHD video source data (for example 4K2K standard frames), audio source data (for example serial I2S audio) and user auxiliary data obtained at the transmitting end;
the video data bit width expansion and YCbCr420-to-YCbCr422 double-pixel-per-beat conversion 24 expands the bit width of the valid video data and restores the YCbCr422 video format, obtaining video with an effective resolution of 1920x2160@30 32bit and a sampling clock of 156.6MHz;
the video data cross-clock-domain conversion FIFO 26 reads the video data out of the FIFO with the sampling clock corresponding to 4K2K@30fps and expands the blanking region, restoring the standard 4K2K frame structure;
the high-order-valid-identifier data extraction 25 separates the audio data and the user auxiliary data delivered by the data separation 23 according to the high-order valid identifier;
the audio cross-clock-domain conversion FIFO and parallel-to-serial conversion 27 reads the valid audio data extracted by the high-order identifier with the corresponding audio sampling bit clock and converts it from parallel to serial, yielding the serial I2S audio.
Corresponding to the embodiment of the method for transmitting the digital video, the application also provides a device for transmitting the digital video.
Referring to fig. 9, a schematic diagram of an apparatus for transmitting digital video according to the present application is shown. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and the relevant portions only need to refer to the corresponding description of the method embodiment. The device embodiments described below are merely illustrative.
The apparatus for transmitting digital video provided in this embodiment of the present application includes:
a data preprocessing unit 901, configured to acquire bare data of a video source, an audio source, and user auxiliary data, and perform preprocessing according to a predetermined conversion manner to generate preprocessed data;
a synchronous timing unit 902, configured to perform asynchronous clock domain conversion on the preprocessed data based on a predetermined sampling clock to generate synchronous timing data;
a frame repackaging unit 903 configured to generate a video frame by performing frame structure repackaging on the synchronous timing data;
and an encoding and transmitting unit 904, configured to perform scrambling and encoding on the video frame, perform parallel-to-serial conversion on the encoded data, generate a serial signal, and transmit the serial signal to a coaxial cable through a cable driver for transmission.
Optionally, the data preprocessing unit 901 includes:
the video source preprocessing subunit is used for carrying out format conversion on the video source bare data and repackaging the video source bare data to generate video preprocessing data;
the audio source preprocessing subunit is used for performing I2S serial-parallel conversion and effective data bit expansion on an audio source to generate audio preprocessing data;
the auxiliary data preprocessing subunit is used for performing effective data bit width expansion on the auxiliary data of the user to generate preprocessed auxiliary data;
wherein the video pre-processing data, the audio pre-processing data and the pre-processing auxiliary data constitute the pre-processing data.
Optionally, the synchronous timing unit 902 is specifically configured to:
adopting a sampling clock suitable for 3G transmission, performing valid-data re-receiving in an FIFO asynchronous clock domain for the video preprocessing data, and compressing the video blanking region to obtain synchronous video data;
adopting a sampling clock suitable for 3G transmission to perform FIFO asynchronous clock domain conversion on the audio preprocessing data and the preprocessing auxiliary data to obtain synchronous audio data and synchronous auxiliary data;
wherein the synchronized video data, the synchronized audio data, and the synchronized auxiliary data constitute the synchronized timing data;
wherein the sampling clock suitable for 3G transmission is a 156.6 MHz sampling clock; correspondingly, the coaxial cable is a coaxial cable with a transmission bandwidth of 3G (a rough arithmetic check of these figures is sketched below).
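The check below, in Python, is illustrative only; the 2304-word line width and 2160 active lines are taken from the embodiment described later, and treating every 20-bit word as shifted out at the 156.6 MHz word clock is a simplifying assumption about how the serial rate relates to the sampling clock, not a statement about the claimed hardware.

    # Rough arithmetic check of the rates quoted in this embodiment (illustrative only).
    WORD_BITS = 20              # parallel word width after bit-width expansion
    WORD_CLOCK_HZ = 156.6e6     # sampling clock stated as suitable for 3G transmission
    ACTIVE_WORDS_PER_LINE = 2304
    ACTIVE_LINES = 2160
    FRAME_RATE = 30

    # Raw serial rate if every 20-bit word is shifted out at the 156.6 MHz word clock.
    serial_rate_bps = WORD_CLOCK_HZ * WORD_BITS

    # Active video payload per second (headers, audio, auxiliary data and blanking excluded).
    payload_bps = ACTIVE_WORDS_PER_LINE * ACTIVE_LINES * FRAME_RATE * WORD_BITS

    print(f"raw serial rate : {serial_rate_bps / 1e9:.3f} Gbit/s")  # ~3.132 Gbit/s
    print(f"video payload   : {payload_bps / 1e9:.3f} Gbit/s")      # ~2.986 Gbit/s

The check simply confirms that the active 4K2K payload fits within the serial capacity implied by a 156.6 MHz, 20-bit word stream on a 3G coaxial link.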
Optionally, the frame repackaging unit 903 is specifically configured to:
repackaging the synchronous video data, the synchronous audio data and the synchronous auxiliary data according to a frame structure with the following line format to obtain the video frame: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data;
and, during the repackaging process, marking invalid data in the synchronous audio data and the synchronous auxiliary data by setting their two high bits to zero (an illustrative packing sketch follows).
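The following Python sketch illustrates, purely as an example, how one active line could be assembled in the order stated above and how the two-high-bits rule marks stale audio or auxiliary words as invalid. The header word values, the blanking length and the flag positions are hypothetical placeholders introduced here; only the line ordering and the "two high bits set to zero" rule come from the embodiment.

    from typing import List

    # Hypothetical header words and blanking length (not specified in the embodiment).
    END_HEADER = [0xFFFFF, 0x00000, 0x00000]
    START_HEADER = [0xFFFFF, 0x00000, 0x2AC2B]
    BLANKING_LEN = 4

    def mark_invalid(word20: int) -> int:
        """Mark a 20-bit word as invalid by forcing its two high bits (19 and 18) to zero."""
        return word20 & 0x3FFFF

    def pack_line(video: List[int], audio: List[int], aux: List[int],
                  audio_valid: bool, aux_valid: bool) -> List[int]:
        """Repackage one line as: end header, blanking, start header, video, audio, aux."""
        if not audio_valid:
            audio = [mark_invalid(w) for w in audio]
        if not aux_valid:
            aux = [mark_invalid(w) for w in aux]
        return END_HEADER + [0x00000] * BLANKING_LEN + START_HEADER + video + audio + aux

    # Example: a line carrying video and auxiliary data, but no fresh audio samples.
    line = pack_line(video=[0x12345] * 8,
                     audio=[0x81234, 0x85678],
                     aux=[0x4ABCD],
                     audio_valid=False, aux_valid=True)
    print(len(line), hex(line[-2]))  # the audio word appears with its two high bits cleared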
Optionally, the encoding sending unit 904 includes a scrambling coding subunit, configured to:
by using the degree-9 primitive polynomial X^9 + X^4 + 1, scrambling the video frame to generate scrambled data;
the scrambled data is then non-return-to-zero-inverted encoded using the X + 1 generator polynomial and converted into NRZI-encoded data (a bit-level software sketch of this step follows).
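As an illustration of this step, the following Python sketch gives a bit-level software model of a self-synchronizing scrambler built from the degree-9 primitive polynomial X^9 + X^4 + 1, followed by NRZI encoding with the X + 1 polynomial. A zero initial register state and MSB-first word flattening are assumptions; the sketch models the coding scheme, not the patented hardware implementation.

    def scramble(bits, state=None):
        """Divider-form scrambler for X^9 + X^4 + 1: y[n] = x[n] XOR y[n-4] XOR y[n-9]."""
        s = list(state) if state else [0] * 9   # s[0] = y[n-1], ..., s[8] = y[n-9]
        out = []
        for b in bits:
            y = b ^ s[3] ^ s[8]
            out.append(y)
            s = [y] + s[:-1]
        return out

    def nrz_to_nrzi(bits, prev=0):
        """NRZI encoding with X + 1: y[n] = x[n] XOR y[n-1] (a '1' toggles the line level)."""
        out = []
        for b in bits:
            prev ^= b
            out.append(prev)
        return out

    def words_to_bits(words, width=20):
        """Flatten 20-bit parallel words MSB-first, ready for parallel-to-serial output."""
        return [(w >> i) & 1 for w in words for i in range(width - 1, -1, -1)]

    serial = nrz_to_nrzi(scramble(words_to_bits([0x12345, 0xABCDE])))
    print(len(serial))  # 40 channel bits for two 20-bit words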
Optionally, the video source preprocessing subunit is specifically configured to:
converting video source bare data in the 4K2K@30fps YCbCr422 16bit format into video data in the 4K2K@30fps YCbCr420 12bit format;
performing valid-data bit-width expansion on the 4K2K@30fps YCbCr420 12bit video data to 20 bits, and generating video preprocessing data whose line valid data is 2304 × 20 bits;
the video source bare data in the 4K2K@30fps YCbCr422 16bit format is a 4K2K@30fps video source from a double-edge-sampling parallel interface, with an effective resolution of 3840 × 2160@30fps, 32 bit, and a 148.5 MHz sampling clock; accordingly, the video preprocessing data with 2304 × 20bit line valid data has an effective resolution of 2304 × 2160@30fps, 20 bit, and a 148.5 MHz sampling clock (a simplified sketch of this conversion follows).
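The arithmetic behind these figures can be illustrated with a simplified Python sketch: converting 4:2:2 to 4:2:0 by dropping the chroma of alternate lines leaves an average of 12 bits per pixel, and 3840 pixels at 12 bits is 46080 bits, i.e. 2304 words of 20 bits per line. The 8-bit components, the plain line-dropping subsampling and the bit-packing order are assumptions made for the example; the embodiment only fixes the 16bit/12bit formats and the 2304 × 20bit line figure.

    WIDTH = 3840

    def to_420_line(y, cb, cr, line_index):
        """4:2:2 -> 4:2:0 by keeping the chroma samples only on even lines (simplified)."""
        return (y, cb, cr) if line_index % 2 == 0 else (y, [], [])

    def pack_20bit_words(samples, sample_bits=8):
        """Concatenate samples MSB-first and cut the resulting bit stream into 20-bit words."""
        acc = n = 0
        words = []
        for s in samples:
            acc = (acc << sample_bits) | (s & ((1 << sample_bits) - 1))
            n += sample_bits
            while n >= 20:
                n -= 20
                words.append((acc >> n) & 0xFFFFF)
                acc &= (1 << n) - 1
        if n:
            words.append((acc << (20 - n)) & 0xFFFFF)
        return words

    y, cb, cr = [16] * WIDTH, [128] * (WIDTH // 2), [128] * (WIDTH // 2)
    even = pack_20bit_words(sum(to_420_line(y, cb, cr, 0), []))
    odd = pack_20bit_words(sum(to_420_line(y, cb, cr, 1), []))
    print(len(even), len(odd), (len(even) + len(odd)) // 2)  # 3072 1536 2304 words

Only the per-line average of 2304 words is reproduced here; how the embodiment balances chroma between the two lines of a 4:2:0 pair is not specified in the text.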
Optionally, the audio source preprocessing subunit is specifically configured to:
for an I2S audio source with a 48 kHz sampling rate, performing I2S serial-to-parallel conversion to generate audio parallel data in a specific format; the specific format is a two-channel audio format with a 48 kHz sampling rate and 16 bits per sample;
and expanding the valid-data bit width of the audio parallel data in the specific format to 20 bits by adding a high-order valid data identifier, generating the audio preprocessing data (a minimal sketch follows).
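As an illustration, the following Python sketch models this audio pre-processing path: a 48 kHz, 16-bit, two-channel I2S stream (already aligned to word-select edges) is deserialized into parallel samples, and each sample is widened to a 20-bit word by a high-order valid identifier. Placing the identifier in bit 19 is an assumption made for the example; the embodiment states only that a high-order valid data identifier widens the data to 20 bits.

    AUDIO_VALID = 1 << 19   # assumed: bit 19 marks a word carrying a real audio sample

    def i2s_deserialize(bits, bits_per_sample=16):
        """Group an MSB-first I2S bit stream into alternating left/right 16-bit samples."""
        samples = []
        for i in range(0, len(bits) - bits_per_sample + 1, bits_per_sample):
            word = 0
            for b in bits[i:i + bits_per_sample]:
                word = (word << 1) | (b & 1)
            samples.append(word)
        return samples   # even indices: left channel, odd indices: right channel

    def expand_to_20bit(samples):
        """Attach the high-order valid identifier to each 16-bit sample."""
        return [AUDIO_VALID | s for s in samples]

    # Example: two samples (left 0x1234, right 0x5678) given as a flat bit stream.
    bits = [int(c) for c in f"{0x1234:016b}{0x5678:016b}"]
    print([hex(w) for w in expand_to_20bit(i2s_deserialize(bits))])  # ['0x81234', '0x85678']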
Optionally, the auxiliary data preprocessing subunit is specifically configured to:
and expanding the valid-data bit width of the 32-bit user auxiliary data into two 20-bit words by adding the high-order valid data identifier, generating the preprocessed auxiliary data (a minimal sketch follows).
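A minimal Python sketch of this step, under the assumption that each 32-bit auxiliary word is split into two 16-bit halves and that the valid identifier occupies bit 18; the embodiment fixes only the mapping of 32 bits onto two 20-bit words, not the flag position or the split.

    AUX_VALID = 1 << 18   # assumed flag bit marking a word that carries auxiliary data

    def aux_to_20bit_words(aux32: int) -> list:
        """Split a 32-bit auxiliary word into two flagged 20-bit words, high half first."""
        hi = (aux32 >> 16) & 0xFFFF
        lo = aux32 & 0xFFFF
        return [AUX_VALID | hi, AUX_VALID | lo]

    print([hex(w) for w in aux_to_20bit_words(0xDEADBEEF)])  # ['0x4dead', '0x4beef']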
In correspondence with an embodiment of a method for receiving digital video provided by the present application, the present application also provides an apparatus for receiving digital video.
Referring to fig. 10, there is shown a schematic diagram of an apparatus for receiving digital video according to the present application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and the relevant portions only need to refer to the corresponding description of the method embodiment. The device embodiments described below are merely illustrative.
The apparatus for receiving digital video provided in this embodiment of the present application includes:
a receiving decoding unit 1001 configured to receive an electrical signal of a coaxial cable, perform serial-to-parallel conversion to generate a parallel signal, perform descrambling and decoding processing on the parallel signal, and generate parallel data;
a data separation unit 1002, configured to separate data to be converted from the parallel data according to a preset frame structure, where the data to be converted includes video data to be converted, audio parallel data to be converted, and auxiliary data to be converted;
and the audio/video restoring unit 1003 is configured to perform cross-clock domain conversion on the data to be converted, and restore and generate corresponding video data, I2S audio data, and user auxiliary data based on a preset conversion mode.
Optionally, the receiving and decoding unit 1001 includes a descrambling and decoding subunit, configured to:
converting the parallel signal bit stream from NRZI to NRZ by using the X + 1 generator polynomial;
descrambling with the degree-9 primitive polynomial X^9 + X^4 + 1 to generate descrambled data (a bit-level sketch of this decoding follows).
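For illustration, the receive-side counterpart of the transmit-side coding can be modelled in Python as follows: NRZI is first decoded back to NRZ with the X + 1 polynomial (each output bit is the XOR of two consecutive received bits), and the self-synchronizing descrambler for X^9 + X^4 + 1 then recovers the data. Zero initial states are assumed; this is a behavioural sketch, not the patented hardware.

    def nrzi_to_nrz(bits, prev=0):
        """NRZI decoding with X + 1: x[n] = y[n] XOR y[n-1]."""
        out = []
        for b in bits:
            out.append(b ^ prev)
            prev = b
        return out

    def descramble(bits, state=None):
        """Self-synchronizing descrambler for X^9 + X^4 + 1: d[n] = r[n] XOR r[n-4] XOR r[n-9]."""
        s = list(state) if state else [0] * 9   # last 9 received (scrambled) bits
        out = []
        for b in bits:
            out.append(b ^ s[3] ^ s[8])
            s = [b] + s[:-1]
        return out

    # Quick self-check against the transmit-side operations (re-implemented locally).
    def _scramble(bits):
        s, out = [0] * 9, []
        for b in bits:
            y = b ^ s[3] ^ s[8]
            out.append(y)
            s = [y] + s[:-1]
        return out

    def _nrz_to_nrzi(bits, prev=0):
        out = []
        for b in bits:
            prev ^= b
            out.append(prev)
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
    assert descramble(nrzi_to_nrz(_nrz_to_nrzi(_scramble(data)))) == data

With matching (here zero) register states at both ends, the decode chain exactly inverts the encode chain; because the descrambler state is taken from the received bits themselves, it also resynchronizes by itself after a transmission error.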
Optionally, the data separation unit 1002 is configured to separate the data according to a frame structure described by the following line format: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data; and it includes:
a video data separating subunit, configured to obtain a data segment between a start header and an end header in an active line of a video frame, and separate active video data from the data segment as the video data to be converted;
an audio data separating subunit, configured to separate valid audio parallel data with a high-order valid identifier from the data segment as the audio parallel data to be converted;
an auxiliary data separating subunit, configured to separate auxiliary data with a high-order valid identifier from the data segment as the auxiliary data to be converted (an illustrative separation sketch follows).
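The separation rules can be illustrated with a short Python sketch. The header word values, the flag bit positions (bit 19 for audio, bit 18 for auxiliary data) and the shortened line used in the example are placeholders; the embodiment specifies only that video lies between the start and end headers and that audio and auxiliary words carry high-order valid identifiers.

    START_HEADER = (0xFFFFF, 0x00000, 0x2AC2B)   # hypothetical 3-word start header
    END_HEADER = (0xFFFFF, 0x00000, 0x00000)     # hypothetical 3-word end header
    AUDIO_FLAG = 1 << 19                          # assumed audio valid identifier
    AUX_FLAG = 1 << 18                            # assumed auxiliary valid identifier

    def separate_line(words, n_video):
        """Split one active line into (video, audio, aux); n_video is the number of
        active video words per line (2304 in this embodiment, shortened below)."""
        start = words.index(START_HEADER[-1]) + 1       # first word after the start header
        try:
            end = words.index(END_HEADER[0], start)     # next end header, if present
        except ValueError:
            end = len(words)
        payload = words[start:end]
        video = payload[:n_video]
        audio, aux = [], []
        for w in payload[n_video:]:
            if w & AUDIO_FLAG:
                audio.append(w & 0xFFFF)   # strip the flag bits, keep the 16-bit sample
            elif w & AUX_FLAG:
                aux.append(w & 0xFFFF)
        return video, audio, aux

    # Shortened example line: end header, blanking, start header, 4 video words, audio, aux.
    line = ([0xFFFFF, 0x00000, 0x00000] + [0x00000] * 4 + [0xFFFFF, 0x00000, 0x2AC2B]
            + [0x11111, 0x22222, 0x33333, 0x04444] + [0x81234] + [0x4ABCD])
    video, audio, aux = separate_line(line, n_video=4)
    print(len(video), [hex(a) for a in audio], [hex(x) for x in aux])  # 4 ['0x1234'] ['0xabcd']

In a real frame the header values would have to be excluded from the payload alphabet (or located positionally), so that the simple index-based search above cannot be fooled by payload words.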
Optionally, the audio/video restoring unit 1003 includes:
the video restoring subunit is used for restoring the video data to be converted to the YCbCr420 format by expanding the valid-data bit width from 20 bits to 32 bits, converting the YCbCr420 format into the YCbCr422 format, performing FIFO cross-clock-domain conversion on the YCbCr422 video data with a sampling clock corresponding to 4K2K@30fps, expanding the blanking region, and generating 4K2K@30fps 16bit video data based on a standard 4K2K frame structure;
the audio restoring subunit is used for acquiring valid audio parallel data from the audio parallel data to be converted according to the high-order identifier, performing FIFO cross-clock-domain conversion based on a preset audio sampling bit clock, and generating serial I2S audio data through parallel-to-serial conversion (a minimal sketch of this restoration is given after this list);
and the auxiliary data restoring subunit is used for acquiring valid auxiliary data from the auxiliary data to be converted according to the high-order identifier, to serve as the user auxiliary data.
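As an illustrative counterpart to the transmit-side audio pre-processing, the following Python sketch shows how the audio restoring subunit could be modelled: 20-bit words flagged as valid audio are reduced to their 16-bit samples and serialized MSB-first into an I2S-style bit stream, with the word-select line alternating between the left and right channels. The flag position (bit 19) and the omission of the usual one-bit-clock I2S delay are simplifying assumptions.

    AUDIO_FLAG = 1 << 19   # assumed high-order identifier for valid audio words

    def words_to_i2s(words, bits_per_sample=16):
        """Return (ws, sd): word-select and serial-data bit lists, one entry per bit clock."""
        ws, sd = [], []
        samples = [w & 0xFFFF for w in words if w & AUDIO_FLAG]
        for i, s in enumerate(samples):
            channel = i % 2                     # 0 = left, 1 = right
            for bit in range(bits_per_sample - 1, -1, -1):
                ws.append(channel)
                sd.append((s >> bit) & 1)
        return ws, sd

    ws, sd = words_to_i2s([0x81234, 0x85678])
    print(len(sd), "".join(map(str, sd[:16])))  # 32 bits in total; first sample is 0x1234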
Optionally, the electrical signal is an electrical signal transmitted by a 3G coaxial cable; correspondingly, the parallel data is valid data with a resolution of 2308 × 2160, 20 bit, and a sampling clock of 156.6 MHz; correspondingly, for the video data to be converted, bit-width expansion and video format conversion generate data with an effective resolution of 1920 × 2160@30fps, 32 bit, at a 156.6 MHz sampling clock; FIFO cross-clock-domain conversion is then performed with the 148.5 MHz sampling clock corresponding to 4K2K@30fps, the blanking region is expanded, and standard 4K2K frame structure video data is generated.
Based on the above embodiments, the present application further provides a digital video transmission system.
A digital video transmission system provided in an embodiment of the present application is described below with reference to fig. 11 to 12, where fig. 11 is a schematic diagram of a digital video transmission system provided in accordance with the present application; fig. 12 is a diagram of a practical deployment of the digital video transmission system provided in the present application. Since the present embodiment is based on the above embodiments, the description is relatively simple, and the relevant portions should be referred to the corresponding descriptions of the above embodiments.
The digital video transmission system shown in fig. 11 includes: said means 1101 for transmitting digital video, and said means 1102 for receiving digital video. Wherein,
the apparatus 1101 for transmitting digital video comprises:
the data preprocessing unit is used for acquiring video source bare data, an audio source and user auxiliary data, and preprocessing them according to a preset conversion mode to generate preprocessed data;
the synchronous time sequence unit is used for carrying out asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock to generate synchronous time sequence data;
a frame repackaging unit for generating a video frame by performing frame structure repackaging on the synchronous timing data;
and the coding and transmitting unit is used for scrambling and coding the video frame, performing parallel-serial conversion on the coded data to generate a serial signal, and transmitting the serial signal to a coaxial cable for transmission through a cable driver.
The apparatus 1102 for receiving digital video comprises:
the receiving and decoding unit is used for receiving the electric signal of the coaxial cable, performing serial-parallel conversion to generate parallel signals, and performing descrambling and decoding processing on the parallel signals to generate parallel data;
the data separation unit is used for separating data to be converted from the parallel data according to a preset frame structure, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted;
and the audio and video restoring unit is used for performing cross-clock domain conversion on the data to be converted and restoring and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion mode.
In this embodiment of the present application, the apparatus 1101 for transmitting digital video is deployed at a UHD video transmitting end, for example, but not limited to, a UHD video acquisition box for security monitoring. On the basis of the acquired UHD video data, including video source bare data, an audio source and user auxiliary data, it performs format conversion, asynchronous clock domain conversion, bit-width adjustment, video frame repackaging, and video frame serialization with scrambling and coding, and then transmits the result over a 3G coaxial cable to the apparatus 1102 for receiving digital video. The apparatus 1102 for receiving digital video is deployed at a UHD video receiving end, for example, but not limited to, a data center server for security monitoring; it receives the electrical signal sent over the 3G coaxial cable from the apparatus 1101 for transmitting digital video, extracts the pre-encoding data through deserialization, descrambling and decoding, and restores, through cross-clock-domain conversion and format conversion, the UHD video acquired before sending, including the video source bare data, the audio source and the user auxiliary data. Fig. 12 shows another system structure actually deployed in an embodiment of the present application. In addition, the digital video transmission system can also provide transmission of ultra-high-definition video signals for audio/video fields such as recording and broadcasting.
Although the present application has been described with reference to the preferred embodiments, they are not intended to limit the present application; those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application, and therefore the scope of the present application should be determined by the claims that follow.

Claims (13)

1. A method for transmitting digital video, comprising:
acquiring bare data of a video source, an audio source and user auxiliary data, and preprocessing the bare data, the audio source and the user auxiliary data according to a preset conversion mode to generate preprocessed data;
performing asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock to generate synchronous time sequence data;
generating a video frame by performing frame structure repackaging on the synchronous time sequence data;
scrambling and coding the video frame, performing parallel-serial conversion on the coded data to generate a serial signal, and sending the serial signal to a coaxial cable for transmission through a cable driver;
wherein, the preprocessing according to the predetermined conversion mode to generate the preprocessed data comprises: carrying out format conversion and repacking on the video source bare data to generate video preprocessing data; performing I2S serial-parallel conversion and effective data bit expansion on an audio source to generate audio preprocessing data; performing effective data bit width expansion on the user auxiliary data to generate preprocessing auxiliary data; the video pre-processing data, the audio pre-processing data and the pre-processing auxiliary data constitute the pre-processing data;
the generating synchronous time sequence data by performing asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock comprises the following steps: adopting a sampling clock suitable for 3G transmission, aiming at the video preprocessing data, performing effective data re-receiving in an FIFO asynchronous clock domain, and compressing a video blanking area to obtain synchronous video data; adopting a sampling clock suitable for 3G transmission to perform FIFO asynchronous clock domain conversion on the audio preprocessing data and the preprocessing auxiliary data to obtain synchronous audio data and synchronous auxiliary data; the synchronized video data, the synchronized audio data, and the synchronized auxiliary data constitute the synchronized timing data; the sampling clock suitable for 3G transmission is a 156.6 MHz sampling clock; correspondingly, the coaxial cable is a coaxial cable with a transmission bandwidth of 3G.
2. The method of claim 1, wherein generating a video frame by frame structure repackaging the synchronized timing data comprises:
repackaging the synchronous video data, the synchronous audio data and the synchronous auxiliary data according to a frame structure with the following line format to obtain the video frame: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data;
and during the repackaging process, marking invalid data for the synchronous audio data and the synchronous auxiliary data by setting their two high bits to zero.
3. The method of claim 1, wherein the scrambling encoding for the video frame comprises:
by using the degree-9 primitive polynomial X^9 + X^4 + 1, scrambling the video frame to generate scrambled data;
the scrambled data is non-return-to-zero-inverted encoded using the X + 1 generator polynomial and converted into NRZI-encoded data.
4. The method of claim 1, wherein converting the format of the raw data for the video source and repackaging the raw data to generate the video pre-processing data comprises:
converting video source bare data in the 4K2K@30fps YCbCr422 16bit format into video data in the 4K2K@30fps YCbCr420 12bit format;
performing valid-data bit-width expansion on the 4K2K@30fps YCbCr420 12bit video data to 20 bits, and generating video preprocessing data whose line valid data is 2304 × 20 bits;
the video source bare data in the 4K2K@30fps YCbCr422 16bit format is a 4K2K@30fps video source from a double-edge-sampling parallel interface, with an effective resolution of 3840 × 2160@30fps, 32 bit, and a 148.5 MHz sampling clock; accordingly, the video preprocessing data with 2304 × 20bit line valid data has an effective resolution of 2304 × 2160@30fps, 20 bit, and a 148.5 MHz sampling clock.
5. The method of claim 1, wherein the I2S serial-to-parallel conversion and significance bit extension for an audio source generates audio pre-processed data, comprising:
for an I2S audio source with a 48 kHz sampling rate, performing I2S serial-to-parallel conversion to generate audio parallel data in a specific format; the specific format is a two-channel audio format with a 48 kHz sampling rate and 16 bits per sample;
and expanding the valid-data bit width of the audio parallel data in the specific format to 20 bits by adding a high-order valid data identifier, generating the audio preprocessing data.
6. The method according to claim 1, wherein said performing a valid data bit width extension for user assistance data generates pre-processed assistance data, comprising:
and expanding the valid-data bit width of the 32-bit user auxiliary data into two 20-bit words by adding the high-order valid data identifier, generating the preprocessed auxiliary data.
7. A method for receiving digital video, comprising:
receiving an electric signal of a coaxial cable, performing serial-parallel conversion to generate a parallel signal, and performing descrambling and decoding processing on the parallel signal to generate parallel data;
separating data to be converted from the parallel data according to a preset frame structure, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted;
performing clock domain crossing conversion on the data to be converted, and restoring and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion mode;
the separating the data to be converted from the parallel data according to the preset frame structure means that the data are separated according to a frame structure described by the following line format: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data, comprising:
acquiring a data segment between a start header and an end header in an effective line of a video frame, and separating effective video data from the data segment to serve as the video data to be converted;
separating the effective audio parallel data with the high-order effective identifier from the data segment as the audio parallel data to be converted;
the auxiliary data with the high-order effective identifier is separated from the data segment as the auxiliary data to be converted.
8. The method of claim 7, wherein the descrambling decoding process comprises:
converting the parallel signal bit stream from NRZI to NRZ by using the X + 1 generator polynomial;
descrambling with the degree-9 primitive polynomial X^9 + X^4 + 1 to generate descrambled data.
9. The method according to claim 7, wherein performing cross-clock domain conversion on the data to be converted and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion manner includes:
the video data to be converted is restored to the YCbCr420 format by expanding the valid-data bit width from 20 bits to 32 bits, the YCbCr420 format is converted into the YCbCr422 format, FIFO cross-clock-domain conversion is performed on the YCbCr422 video data with a sampling clock corresponding to 4K2K@30fps, the blanking region is expanded, and 4K2K@30fps 16bit video data is generated based on a standard 4K2K frame structure;
obtaining effective audio parallel data from the audio parallel data to be converted according to a high-order identifier, performing FIFO clock domain crossing conversion based on a preset audio sampling bit clock, and generating serial I2S audio data through parallel-serial conversion;
and acquiring effective auxiliary data from the auxiliary data to be converted according to the high-order identification, wherein the effective auxiliary data is used as user auxiliary data.
10. The method of claim 7, wherein the electrical signal is an electrical signal transmitted by a 3G coaxial cable; correspondingly, the parallel data is valid data with a resolution of 2308 × 2160, 20 bit, and a sampling clock of 156.6 MHz; correspondingly, for the video data to be converted, bit-width expansion and video format conversion generate data with an effective resolution of 1920 × 2160@30fps, 32 bit, at a 156.6 MHz sampling clock; FIFO cross-clock-domain conversion is further performed with the 148.5 MHz sampling clock corresponding to 4K2K@30fps, the blanking region is expanded, and standard 4K2K frame structure video data is generated.
11. An apparatus for transmitting digital video, comprising:
the data preprocessing unit is used for acquiring video source bare data, an audio source and user auxiliary data, and preprocessing them according to a preset conversion mode to generate preprocessed data;
the synchronous time sequence unit is used for carrying out asynchronous clock domain conversion on the preprocessed data based on a preset sampling clock to generate synchronous time sequence data;
a frame repackaging unit for generating a video frame by performing frame structure repackaging on the synchronous timing data;
the encoding and transmitting unit is used for scrambling and encoding the video frame, performing parallel-serial conversion on the encoded data to generate a serial signal, and transmitting the serial signal to a coaxial cable for transmission through a cable driver;
wherein, the data preprocessing unit comprises:
the video source preprocessing subunit is used for carrying out format conversion on the video source bare data and repackaging the video source bare data to generate video preprocessing data;
the audio source preprocessing subunit is used for performing I2S serial-parallel conversion and effective data bit expansion on an audio source to generate audio preprocessing data;
the auxiliary data preprocessing subunit is used for performing effective data bit width expansion on the auxiliary data of the user to generate preprocessed auxiliary data; the video pre-processing data, the audio pre-processing data and the pre-processing auxiliary data constitute the pre-processing data;
the synchronization timing unit is specifically configured to: adopting a sampling clock suitable for 3G transmission, aiming at the video preprocessing data, performing effective data re-receiving in an FIFO asynchronous clock domain, and compressing a video blanking area to obtain synchronous video data; adopting a sampling clock suitable for 3G transmission to perform FIFO asynchronous clock domain conversion on the audio preprocessing data and the preprocessing auxiliary data to obtain synchronous audio data and synchronous auxiliary data; the synchronized video data, the synchronized audio data, and the synchronized auxiliary data constitute the synchronized timing data; the sampling clock suitable for 3G transmission is a 156.6 MHz sampling clock; correspondingly, the coaxial cable is a coaxial cable with a transmission bandwidth of 3G.
12. An apparatus for receiving digital video, comprising:
the receiving and decoding unit is used for receiving the electric signal of the coaxial cable, performing serial-parallel conversion to generate parallel signals, and performing descrambling and decoding processing on the parallel signals to generate parallel data;
the data separation unit is used for separating data to be converted from the parallel data according to a preset frame structure, wherein the data to be converted comprises video data to be converted, audio parallel data to be converted and auxiliary data to be converted;
the audio and video restoration unit is used for performing cross-clock domain conversion on the data to be converted and restoring and generating corresponding video data, I2S audio data and user auxiliary data based on a preset conversion mode; wherein
The data separation unit is specifically configured to separate the data according to a frame structure described by the following line format: an end header, a blanking region, a start header, valid video data, audio data, user auxiliary data; and includes:
a video data separating subunit, configured to obtain a data segment between a start header and an end header in an active line of a video frame, and separate active video data from the data segment as the video data to be converted;
an audio data separating subunit, configured to separate valid audio parallel data with a high-order valid identifier from the data segment as the audio parallel data to be converted;
an auxiliary data separating subunit, configured to separate auxiliary data with a high-order valid identifier from the data segment as the auxiliary data to be converted.
13. A digital video transmission system, comprising: apparatus for transmitting digital video according to claim 11, and apparatus for receiving digital video according to claim 12.
CN201810247401.5A 2018-03-23 2018-03-23 A kind of method and device being used for transmission digital video Active CN108449567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810247401.5A CN108449567B (en) 2018-03-23 2018-03-23 A kind of method and device being used for transmission digital video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810247401.5A CN108449567B (en) 2018-03-23 2018-03-23 A kind of method and device being used for transmission digital video

Publications (2)

Publication Number Publication Date
CN108449567A CN108449567A (en) 2018-08-24
CN108449567B true CN108449567B (en) 2019-03-19

Family

ID=63196996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810247401.5A Active CN108449567B (en) 2018-03-23 2018-03-23 A kind of method and device being used for transmission digital video

Country Status (1)

Country Link
CN (1) CN108449567B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194985B (en) * 2018-11-19 2021-03-26 上海高骏精视信息技术有限公司 Audio and video Ethernet transmission method and system
CN110278412B (en) * 2018-12-06 2021-11-23 义晶科技股份有限公司 Image display system and control signal data volume increasing method thereof
CN112292848B (en) * 2019-05-22 2023-07-14 西安诺瓦星云科技股份有限公司 Video source expansion method, device and system and video source expander
CN110166724B (en) * 2019-05-24 2021-01-12 无锡中感微电子股份有限公司 Multimedia data sending method and device based on coaxial cable
CN110351512B (en) * 2019-05-24 2021-08-20 无锡中感微电子股份有限公司 Data sending method and device based on coaxial cable
CN112584092B (en) * 2019-09-30 2024-05-03 广州汽车集团股份有限公司 Data acquisition device and data acquisition system
CN110891154B (en) * 2019-12-09 2021-01-12 天津瑞发科半导体技术有限公司 Video fault-tolerant transmission system and fault-tolerant sending device
CN112995560B (en) * 2021-02-05 2022-11-22 北京视通科技有限公司 Transmission method, device and equipment for parallel video signals and storage medium
CN113400937B (en) * 2021-04-15 2022-05-24 浙江吉利控股集团有限公司 Vehicle entertainment information display system and vehicle
CN113518259B (en) * 2021-05-25 2023-06-09 龙迅半导体(合肥)股份有限公司 Data processing method and device
CN113556619B (en) * 2021-07-15 2024-04-19 广州市奥威亚电子科技有限公司 Device and method for link transmission and method for link reception
CN113810645B (en) * 2021-11-16 2022-02-08 北京数字小鸟科技有限公司 System, method and equipment for transmitting custom data of video blanking area
CN114416626B (en) * 2021-11-22 2024-04-12 中国科学院西安光学精密机械研究所 Asynchronous serial data recovery method based on 8B/10B coding
CN114760401B (en) * 2022-04-14 2024-07-19 上海富瀚微电子股份有限公司 Method for directly expanding output video resolution of parallel interface of image processing chip
CN116721678B (en) * 2022-09-29 2024-07-05 荣耀终端有限公司 Audio data monitoring method, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150383A (en) * 2007-10-15 2008-03-26 中兴通讯股份有限公司 IP data transmission method
CN102065208A (en) * 2010-12-08 2011-05-18 南开大学 Digital audio and video signal SerDes and realization method thereof
CN102413322A (en) * 2011-12-07 2012-04-11 中国航空无线电电子研究所 Avionics digital video bus (ADVB) framing system and method based on line synchronization
CN202889508U (en) * 2012-05-28 2013-04-17 四川九州电子科技股份有限公司 Electrical interface module for automatic switching between SDI and ASI, and video device
CN203406970U (en) * 2013-07-22 2014-01-22 深圳市朗驰欣创科技有限公司 Audio/video transmission system
CN104301642A (en) * 2014-09-04 2015-01-21 中航华东光电有限公司 LCD contrast adjusting system and method
CN107371033A (en) * 2017-08-22 2017-11-21 广州波视信息科技股份有限公司 A kind of TICO format 4s K/8K encoders and its implementation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4463040B2 (en) * 2004-08-06 2010-05-12 株式会社日立国際電気 Signal converter
CN105704408B (en) * 2016-02-04 2018-08-03 天津市英贝特航天科技有限公司 The real-time overlapping controller of asynchronous image and its stacking method
CN206042212U (en) * 2016-08-03 2017-03-22 北京蛙视通信技术股份有限公司 System for high -definition video signal's transmission node equipment, central receiving terminal equipment and transmission

Also Published As

Publication number Publication date
CN108449567A (en) 2018-08-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant