CN117979006A - Low-illumination live video real-time transcoding enhancement bright structure quantization method

Info

Publication number: CN117979006A
Application number: CN202311856203.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 吴水勇, 刘耀平
Applicant/Assignee: Individual
Legal status: Pending
Classification landscape: Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a low-illumination live video real-time transcoding enhancement bright-structure quantization method. First, a low-illumination video transcoding framework is constructed that combines video transcoding with low-illumination video enhancement. Second, within this framework, the influence of the quantization factor, the video content and the illumination coefficient on the PSNR-LY relation is studied; the factors that govern the selection of the optimal bright-structure quantization factor are identified through theoretical analysis and experiments, and a PSNR-LY model and an R-LY model of low-illumination video transcoding are established. Third, the mapping relation between the quantization factors of low-illumination video and normal-illumination video is studied, a theoretical formula for low-illumination video transcoding bright-structure quantization is derived, and the formula is corrected by means of the PSNR-LY and R-LY models and experimental analysis to obtain the final selection formula for the low-illumination video transcoding bright-structure quantization factor. The resulting low-illumination video transcoding bright-structure quantization algorithm selects a bright-structure quantization factor that yields the lowest transcoding code rate while keeping the transcoded video quality as high as possible.

Description

Low-illumination live video real-time transcoding enhancement bright structure quantization method
Technical Field
The application relates to a low-illumination video transcoding and bright-structure quantization method, in particular to a low-illumination live video real-time transcoding and enhancement bright-structure quantization method, and belongs to the technical field of low-illumination video transcoding and enhancement.
Background
With the development of network and multimedia technologies, video transcoding is increasingly widely applied in fields such as live streaming, online learning and work, entertainment, military and security. Video transcoding refers to converting a compressed video stream from one form into another, and is of great value for video applications in heterogeneous network environments such as mobile video communication, live video streaming and intelligent video surveillance. For example, in intelligent video surveillance, short-term monitoring video needs to be stored at a lower code rate, which requires rate-reduction transcoding. In mobile video communication and live streaming, users differ in terminal capability and network bandwidth, and converting the video stream to adapt to different bandwidths, different terminal processing capacities and different user demands poses great challenges to video transcoding technology.
However, in many practical applications, such as surveillance systems, a large amount of video is captured at night under low-illumination conditions, and such video is often blurred and of poor quality. This kind of video calls for low-illumination video transcoding, that is, performing video enhancement during transcoding so that the initial low-illumination video is transcoded into a normal-illumination video; this has important application value for low-illumination scenarios such as surveillance systems. Applying video enhancement can effectively improve the quality of low-illumination video content, and a transcoding service that includes video enhancement can not only flexibly change the transcoding parameters of the video but also make video content of low visibility clearer. Exploring low-illumination video transcoding technology with a video enhancement function therefore has great academic and application value.
Video transcoding bright-structure quantization refers to the process in which a code stream that has undergone primary coding quantization is input into a transcoding system, decoded, and then subjected to secondary coding quantization. Quantization is a lossy process and introduces quantization distortion; a video transcoding quantization algorithm reduces this bright-structure quantization distortion by selecting the optimal bright-structure quantization factor, so that the quality of the transcoded video is as high as possible while the consumed code rate is as low as possible.
Video transcoding quantization algorithms in the prior art target well-illuminated video, but the enhancement applied to low-illumination video changes the relation between the transcoding peak signal-to-noise ratio PSNR, the transcoding code rate Rate and the bright-structure quantization factor LY, so a bright-structure quantization algorithm dedicated to the enhancement transcoding of low-illumination video needs to be developed. A low-illumination video transcoding bright-structure quantization algorithm provides important guidance for selecting a proper transcoding quantization factor: the quantization factor with the lowest code rate is selected on the premise that the PSNR of the low-illumination video transcoding is kept as high as possible.
Prior-art video transcoding bright-structure quantization techniques are applied in two ways: bright-structure quantization under the same standard is applied to rate-reduction transcoding, while bright-structure quantization under different standards is a method for mapping quantization factors between different video standards. Rate-reduction transcoding is widely used in transmission scenarios such as Wi-Fi transmission, network video streaming and live video; it performs bright-structure quantization on a compressed video stream and reduces its initial code rate to a lower target code rate so as to meet the network bandwidth requirement while keeping the video quality acceptable. Research on rate-reduction transcoding focuses on two aspects: bright-structure quantization itself and rate control after bright-structure quantization. The existing quantization algorithm selects the bright-structure quantization factor without relying on a rate-distortion model, so although the code rate can be reduced, it cannot be precisely controlled to the target value.
Video enhancement is an essential operation for low-illumination blurred video sequences. Performing video transcoding bright-structure quantization and video enhancement simultaneously, exploring the correspondence between the bright-structure quantization factor and the peak signal-to-noise ratio and code rate after the low-illumination video is enhanced, and predicting the optimal bright-structure quantization factor that yields the best peak signal-to-noise ratio and code rate, is a brand-new research direction of practical significance.
The problems to be solved and the key technical difficulties of the low-illumination video transcoding enhancement bright-structure quantization method in the prior art include:
(1) Prior-art video transcoding quantization algorithms target well-illuminated video, but the enhancement applied to low-illumination video changes the relation between the transcoding peak signal-to-noise ratio PSNR, the transcoding code rate Rate and the bright-structure quantization factor LY. A bright-structure quantization algorithm dedicated to the enhancement transcoding of low-illumination video is lacking, which makes it inconvenient to select a proper transcoding quantization factor and impossible to choose the quantization factor with the lowest code rate while keeping the PSNR of the low-illumination video transcoding high. Moreover, the prior-art selection of the bright-structure quantization factor is not based on a rate-distortion model, so although the code rate can be reduced, it cannot be precisely controlled to the target value. For unclear low-illumination video sequences, the prior art lacks a method that performs video transcoding bright-structure quantization and video enhancement at the same time, lacks an exploration of the correspondence between the bright-structure quantization factor and the peak signal-to-noise ratio and code rate after low-illumination video enhancement, and cannot predict the optimal bright-structure quantization factor that yields the best peak signal-to-noise ratio and code rate.
(2) For video recorded under low-illumination conditions with insufficient clarity, prior-art video transcoding does not consider the influence of video enhancement, and performing video transcoding and video enhancement separately creates new problems. Because video enhancement processes the video in the spatial domain, if enhancement is performed before transcoding, the primarily encoded stream must first be decoded, then enhanced, and the re-encoded and quantized normal-illumination stream must then be fed into the transcoding system; this adds a decoding and encoding pass, wastes time and introduces additional video quality loss. If transcoding is performed before enhancement, the quantization model affects the quality of low-illumination video more severely than that of normal-illumination video, so the extra decoding and encoding pass causes even more serious quality loss. Therefore, a method is needed that combines low-illumination video enhancement with video transcoding, performs the enhancement during transcoding, and analyzes the relation and distinction between low-illumination and normal-illumination video transcoding to obtain a low-illumination video transcoding bright-structure quantization algorithm, whose selected bright-structure quantization factor ensures the lowest transcoding code rate while keeping the video quality as high as possible.
(3) Existing video transcoding bright-structure quantization algorithms target normal-illumination video; for low-illumination video transcoding, the video enhancement process strongly affects both the quality of the transcoded video and the relation between the transcoding code rate and the bright-structure quantization factor, so the normal-illumination algorithms are no longer applicable. The prior art lacks a low-illumination video transcoding framework and cannot combine video transcoding with low-illumination video enhancement; it lacks a study of how the quantization factor, the video content and the illumination coefficient act on the PSNR-LY (video quality versus quantization factor) relation, and lacks PSNR-LY and R-LY (code rate versus quantization factor) models of low-illumination video transcoding. The mapping relation between the quantization factors of low-illumination and normal-illumination video is unclear, a theoretical formula for low-illumination video transcoding bright-structure quantization is missing, and without the PSNR-LY and R-LY models and experimental analysis the theoretical formula cannot be corrected; as a result, the optimal bright-structure quantization factor of low-illumination video cannot be computed accurately, the transcoding code rate is high, and the video quality loss is serious.
Disclosure of Invention
The application aims to solve the following problem: prior-art video transcoding bright-structure quantization algorithms are designed for normal-illumination video, and for low-illumination video transcoding the video enhancement process strongly affects the quality of the transcoded video and the relation between the transcoding code rate and the bright-structure quantization factor, so that the normal-illumination algorithms are no longer applicable. The application makes three improvements. First, a low-illumination video transcoding framework is constructed that combines video transcoding with low-illumination video enhancement. Second, within this framework, the influence of the quantization factor, the video content and the illumination coefficient on the PSNR-LY relation is studied; the factors that govern the selection of the optimal bright-structure quantization factor are identified through theoretical analysis and experiments, and a PSNR-LY model and an R-LY model of low-illumination video transcoding are established. Third, the mapping relation between the quantization factors of low-illumination and normal-illumination video is studied, a theoretical formula for low-illumination video transcoding bright-structure quantization is derived, and the formula is corrected by means of the PSNR-LY and R-LY models and experimental analysis to obtain the final selection formula for the low-illumination video transcoding bright-structure quantization factor. The resulting low-illumination video transcoding bright-structure quantization algorithm selects a bright-structure quantization factor that yields the lowest transcoding code rate while keeping the transcoded video quality as high as possible.
In order to achieve the technical effects, the technical scheme adopted by the application is as follows:
The method combines low-illumination video enhancement with video transcoding and performs video enhancement during the transcoding process; the bright-structure quantization factor selected by the low-illumination video transcoding bright-structure quantization algorithm ensures the lowest transcoding code rate while keeping the video quality as high as possible;
1) Constructing a low-illumination video transcoding framework: the low-illumination video is quantized once by an H.264 encoder and input into the transcoding system, where it is first decoded, then video enhancement is carried out, and finally the enhanced transcoded code stream is output after the transcoding bright-structure quantization;
2) Analyzing the effect of the quantization factor, the video content and the illumination coefficient on the PSNR-LY relation: within the low-illumination video transcoding framework, the variation of the PSNR-LY curves under different quantization factors, different video contents and different illumination coefficients is analyzed, and the PSNR-LY and R-LY models of low-illumination video transcoding are constructed;
3) Proposing a low-illumination video transcoding bright-structure quantization algorithm: the low-illumination video quantization model is analyzed to obtain the mapping rule between the quantization factors in the coding quantization models of low-illumination and normal-illumination video, and a selection formula for the low-illumination video transcoding bright-structure quantization factor under ideal conditions is derived; the formula is then corrected according to the obtained PSNR-LY model, finally yielding the calculation formula of the low-illumination video transcoding bright-structure quantization factor. Whether image enhancement and transcoding are necessary is judged from the illumination coefficient and the primary quantization factor carried in the code stream entering the transcoding system; if enhancement transcoding is needed, the corresponding transcoding bright-structure quantization factor is selected to obtain the optimal transcoding effect, ensuring that the selected quantization factor minimizes the bright-structure quantization distortion so that the transcoded video quality is highest, and, on the premise that the video quality stays at the same level as the highest quality, a transcoding bright-structure quantization factor with a lower code rate is selected whenever possible.
Preferably, the low-illumination video real-time transcoding framework is as follows: an unclear low-illumination video sequence VIDEO is encoded by an H.264 encoder to obtain a once-quantized low-illumination code stream VIDEO_LY1.264; the VIDEO_LY1.264 stream is input into the transcoding system and inverse-quantized by the H.264 decoder, and by parsing the code stream information the quantization factor LY1 used in the first encoding and the decoded video sequence VIDEO_LY1.yuv are obtained; VIDEO_LY1.yuv is enhanced to obtain the enhanced video sequence VIDEO_LY1_ENHANCE.yuv, and the enhanced sequence is quantized and encoded with LY2 to obtain VIDEO_LY1_ENHANCE_LY2.264; in the whole process from the initial sequence to the final output code stream VIDEO_LY1_ENHANCE_LY2.264, the video undergoes primary quantization encoding when it is encoded into the sequence VIDEO_LY1.264 that is input for transcoding, and secondary (bright-structure) quantization encoding inside the transcoder;
When low-illumination transcoding bright-structure quantization is carried out, the PSNR of the transcoding is computed with the initial normal-illumination video sequence as the reference; on the premise that the video quality is as good as possible, the LY2 computed by the transcoding bright-structure quantization algorithm must also reduce the post-transcoding code rate as much as possible, so both the relation between the transcoding PSNR and LY2 and the relation between the transcoding code rate R and LY2 need to be optimized. The transcoding PSNR-LY relation is determined by three factors: the quantization model, the video content, and the degree of low illumination.
Preferably, the quantization factor acts on the transcoding PSNR-LY relation as follows: the transcoding platform quantizes the video in two passes, a primary quantization when the stream is first encoded by the encoder, and a bright-structure quantization by the encoder inside the transcoder; the quantization operation is expressed as:
wherein F(u,v) represents the DCT coefficients of the initial video sequence before quantization, FQ(u,v) represents the DCT coefficients of the video after quantization, Q(u,v) represents the quantization weighting matrix, Qstep is the quantization step, whose value is associated with the quantization factor LY, and round[·] is the rounding function; the peak signal-to-noise ratio (PSNR) is adopted as the objective evaluation criterion of video quality;
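The image of the quantization formula itself is not reproduced in this text; a standard form consistent with the definitions above (a reconstruction, assuming the usual weighted rounding quantizer) would be:

FQ(u, v) = round[ F(u, v) / (Q(u, v) · Qstep) ]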
Under the low-illumination condition, the value of the optimal LY2 is larger than LY1, and the optimal LY2 increases as LY1 increases, so the inflection point gradually moves backwards.
Preferably, the low-illumination video transcoding PSNR-LY model is built as follows: the PSNR-LY curves are analyzed quantitatively, a PSNR-LY model of H.264 low-illumination video transcoding is constructed, and the variation of the transcoded PSNR value PSNR2(LY1,LY2) with LY2 is fitted to a cubic function curve, expressed as:
PSNR2(LY1,LY2) = α·LY2³ + β·LY2² + γ·LY2 + δ   (formula 2)
The values of the four parameters α, β, γ and δ are related to the primary quantization factor LY1, the low-illumination degree i1 and the video content, where α < 0, β > 0, γ < 0 and δ > 0; the value range of α is [-4×10⁻⁴, -1×10⁻⁵], the value range of β is [20, 35], and the values of the four parameters are strongly affected by the low-illumination degree i1;
After the PSNR-LY curve is fitted to a cubic function, PSNR2(LY1,LY2) necessarily has an extreme point: when LY2 = LY2', the derivative PSNR2'(LY1,LY2) = 0 and PSNR2(LY1,LY2) reaches its maximum PSNR2max(LY1,LY2); when LY2 < LY2', the derivative PSNR2'(LY1,LY2) > 0 and PSNR2(LY1,LY2) rises slowly or remains substantially unchanged; when LY2 > LY2', the derivative PSNR2'(LY1,LY2) < 0 and PSNR2(LY1,LY2) drops rapidly;
By analyzing the low-illumination degree, the primary quantization factor information carried in the input code stream and the video content of the initial stream, a curve model relating the peak signal-to-noise ratio PSNR2(LY1,LY2) to the bright-structure quantization factor LY2 is established and the optimal transcoding bright-structure quantization factor is predicted, so that the transcoded video quality is optimal and the transcoding code rate is lower.
Preferably, the low-illumination video transcoding R-LY model is built as follows: a rate-distortion model is introduced into low-illumination video transcoding and an R-LY model is established. The quantization factor LY determines how strongly the video image information is compressed: the larger LY is, the more the image is compressed, the more information is lost, and the lower the consumed code rate; the quantization factor LY and the code rate are therefore in an inverse relation and R-LY is an inverse proportional function. Code streams of different low-illumination degrees, primary quantization factors and video contents are selected for calculation;
The relation model of the code rate and the bright structure quantization factor in the low-illumination video transcoding process is fitted by using a formula 3:
wherein R represents the transcoding code rate, X1 and X2 are coefficients, and Qstep is the quantization step, whose value corresponds one-to-one with the quantization factor LY; the transcoding code rate R is inversely proportional to the bright-structure quantization factor LY, so the larger LY is, the smaller R is. To keep the code rate consumed in transcoding as low as possible, larger quantization factors are preferred when selecting the optimal bright-structure quantization factor; among the bright-structure quantization factors that satisfy the conditions of interval A, the largest LY2 in that range is selected directly, so that the requirements on both transcoding quality and transcoding code rate are met.
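The image of formula 3 is not reproduced in this text. The description above, an inverse-proportional rate model with two coefficients X1 and X2, suggests (as an assumption about the exact form) the classic expression:

R = X1 / Qstep + X2 / Qstep²

with Qstep = T(LY) the quantization step corresponding to the bright-structure quantization factor LY.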
Preferably, the mapping correlation model between the low-illumination and normal-illumination video quantization factors is built as follows: the input compressed code stream is first decoded to obtain the primary quantization factor carried in the stream information; the decoded information is inverse-quantized with this primary quantization factor to reconstruct the initial yuv sequence; the yuv sequence is fed into the encoding end, which performs bright-structure quantization with a larger bright-structure quantization factor followed by entropy coding, and finally the encoded stream is output;
m and n represent the gray values of two different pixels in a frame of the initial video, Q1 and Q2 represent the two quantization processes, and Q1⁻¹ and Q2⁻¹ represent the two corresponding inverse quantization processes;
For the input value m, after twice quantization and twice inverse quantization, the reconstructed value is:
If the primary quantization is skipped and Q2 is used directly for quantization and inverse quantization, the reconstructed value is:
For the input gray value m, the reconstructed value after the two quantizations by Q1 and Q2 is the same as after the single quantization by Q2, and no error is introduced;
for the gray value n, the reconstructed value after twice quantization and inverse quantization is:
The reconstructed value after only one quantization and inverse quantization of Q 2 is:
For the input gray value n, the reconstructed value after the two quantizations by Q1 and Q2 differs from the reconstructed value after the single quantization by Q2, causing an error Δ = N - M between the two reconstructed values; this secondary quantization error is the bright-structure quantization error. The gray values of the image are DCT-transformed, and the transformed coefficients are marked on the horizontal coordinate axis according to their amplitudes.
Preferably, assuming the quantization factor used for primary coding quantization of the initial low-illumination video is Q1 and the bright-structure quantization factor used by the encoder to encode the enhanced yuv sequence during transcoding is Q2, the distortion caused by the primary coding quantization is expressed as formula 8:
where y represents the DCT coefficient and p(y) represents the probability density function of the DCT coefficient distribution; after secondary quantization with the quantization factor Q2, the distortion caused by the secondary quantization is expressed as formula 9:
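The images of formulas 8 and 9 are not reproduced in this text. A standard form consistent with the definitions above, assuming a uniform quantizer with reconstruction levels i·Q1, would be:

D1 = Σ_i ∫ from (i-1/2)·Q1 to (i+1/2)·Q1 of (y - i·Q1)² · p(y) dy

and formula 9 would be the analogous expression for the second pass, with step Q2 applied to the coefficients already reconstructed with Q1.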
The magnitude of the double-quantization distortion is determined by the DCT coefficients and the two quantization step sizes; based on the relation between the DCT coefficients of the normal-illumination video and of the video after illumination reduction, the relation between the quantization factors of the normal-illumination and low-illumination videos under equal distortion is obtained. An 8×8 image block is constructed, and DCT (discrete cosine transform) is applied to it to obtain an 8×8 DCT coefficient block;
The gray value of each pixel is multiplied in turn by 0.9, 0.8, ..., 0.2 and 0.1 and the results are rounded; the resulting 9 groups of 8×8 image blocks are each DCT-transformed to obtain 9 DCT coefficient blocks, and the low-frequency components of the DCT coefficient blocks are plotted as curves against the gray-value scaling coefficient. The DCT coefficients vary linearly with the gray value of the pixels, so when a normal-illumination video is subjected to illumination reduction with a weighting coefficient i1, in the absolutely ideal case the relation between the DCT coefficient y' of the low-illumination video and the DCT coefficient y of the normal-illumination video is y' = y·i1;
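The linearity check can be reproduced numerically with the short Python sketch below. It is illustrative only: it uses scipy's floating-point DCT rather than the encoder's integer transform (an assumption), and shows that scaling the gray values of a block by i1 scales its DCT coefficients by approximately the same factor, up to the rounding of gray values.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # 8x8 gray-value block

i1 = 0.3                                                   # illumination weighting coefficient
dark = np.round(block * i1)                                # illumination-reduced block (rounded)

Y  = dctn(block, norm='ortho')                             # DCT of the normal-illumination block
Yp = dctn(dark,  norm='ortho')                             # DCT of the low-illumination block

# Up to the rounding of gray values, Y' is close to i1 * Y for every coefficient.
print(np.max(np.abs(Yp - i1 * Y)))                         # small residual caused by rounding only
```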
If, after the primary quantization, the low-illumination video and the normal-illumination video are to keep the same degree of distortion, the primary quantization steps must satisfy the following relation:
where Q'step represents the primary quantization step of the low-illumination video, Qstep represents the primary quantization step of the normal-illumination video, and i1 represents the weighting coefficient of the illumination change between the two videos; based on the mapping relation between the quantization factor and the quantization step, Qstep = T(LY1), so equation 10 can be written as:
After the corresponding low-illumination video quantization step is obtained, an inverse table lookup yields the corresponding low-illumination video primary quantization factor LY1':
where T and T⁻¹ represent the lookup table and the inverse lookup table respectively; the weighting coefficient i1 reflects the degree of low illumination of the video, and formula 12 gives the ideal-state mapping between the quantization factors of videos with different degrees of low illumination and of normal-illumination video;
A correction coefficient σ is introduced, and formula 12 is corrected to obtain formula 13:
Since adopting a bright-structure quantization factor LY2 = LY1 gives the best combined performance of transcoded video quality and code rate in normal-illumination transcoding, combining this rule with the mapping relation of formula 13 gives the selection formula 14 for the optimal bright-structure quantization factor in low-illumination video transcoding bright-structure quantization:
Calculating a correction coefficient:
wherein LY2(theory) is given by formula 16:
Once the correction coefficients σi1 corresponding to different illumination coefficients i1 and different primary quantization factors LY1 have been calculated and tabulated, any low-illumination video code stream entering the transcoding system can be decoded to obtain its primary quantization factor LY1 and illumination coefficient i1; the theoretical value LY2(theory) of the optimal bright-structure quantization factor is then computed, and the optimal bright-structure quantization factor of the low-illumination video is obtained by correcting LY2(theory) with the corresponding correction coefficient σi1.
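The images of formulas 14 to 16 are not reproduced in this text. A reconstruction consistent with the final formula quoted later in the description (LY2 = T⁻¹[T(LY1)/i1] + σ), given here as an assumption about the exact notation, would be:

LY2(theory) = T⁻¹[ T(LY1) / i1 ]   (formula 16)
σi1 = LY2(optimal, experimental) - LY2(theory)   (formula 15)
LY2 = LY2(theory) + σi1 = T⁻¹[ T(LY1) / i1 ] + σi1   (formula 14)

where T maps a quantization factor to its quantization step and T⁻¹ is the inverse lookup.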
Preferably, the low-illumination video transcoding light structure quantization algorithm flow comprises the following steps: when transcoding any input video, the following three decision rules are set:
Rule 1: inputting a video after primary encoding into a transcoding system, decoding by a decoder in the transcoding system to obtain a quantization factor LY 1 and an image gray value adopted in primary encoding, comparing the dynamic range of the gray value with the dynamic range (0-255) of the gray value of the video with normal illumination, and if the dynamic range of the gray value of the video image is i 1 times of the video with normal illumination and i 1 is more than 0.6, the video illumination is good and image enhancement and transcoding are not needed;
Rule 2: when i 1 is less than 0.6, the method accords with the judgment standard of the low-illumination video, enters the next flow, judges the degree of low illumination, and divides the low-illumination video into three steps according to the value of i 1: slight low illumination (0.6 is more than or equal to i 1 is more than 0.4), medium low illumination (0.4 is more than or equal to i 1 is more than 0.2) and serious low illumination (i is less than or equal to 0.2), for slight low illumination video, when LY 1 is more than 34, the video quality after primary encoding is poor, and after video enhancement and secondary encoding, the subjective quality of the video is poor, so that a viewer cannot accept the video, the image enhancement and transcoding are not necessary, for medium low illumination video, when LY 1 is more than 30, the image enhancement and transcoding are not necessary, for serious low illumination video, when LY 1 is more than 28;
Rule 3: in addition to the above two cases, the input video in other cases needs to be subjected to video enhancement and transcoding light structure quantization, a corresponding correction coefficient sigma i1 is obtained according to the luminance parameter i 1 of the video and the quantized factor LY 1 in one time, and then the optimal light structure quantized factor LY 2 is selected, and the transcoding code rate is lowest on the premise that the light structure quantized factor can ensure that the quality of the transcoded video is as good as possible.
Preferably, the low-illumination video transcoding bright-structure quantization algorithm divides the video into three grades according to the degree of low illumination, and the value of the correction parameter differs for different illumination parameters and different primary quantization factors. For slightly low-illumination video: when LY1 ≤ 16, the correction parameter σ is 2; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 34, σ is -4; when LY1 > 34, the video quality is intolerable and video enhancement and transcoding are not performed. For medium low-illumination video: when LY1 ≤ 16, σ is 0; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 24, σ is -4; when 24 < LY1 ≤ 30, σ is -6; when LY1 > 30, the video quality is intolerable and video enhancement and transcoding are not performed. For severely low-illumination video: when LY1 ≤ 16, σ is -2; when 16 < LY1 ≤ 22, σ is -4; when 22 < LY1 ≤ 26, σ is -6; when 26 < LY1 ≤ 28, σ is -8; when LY1 > 28, the video quality is intolerable and video enhancement and transcoding are not performed;
When a low-illumination video is input into the transcoding system, the decoder in the transcoding system decodes it and parses the decoded information to obtain the primary quantization factor LY1 and the illumination parameter i1; the correction coefficient σ is determined according to the corresponding case, and the optimal bright-structure quantization factor of the low-illumination video is finally obtained from the formula LY2 = T⁻¹[T(LY1)/i1] + σ.
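A compact Python sketch of the decision rules and the selection formula described above is given below. It is illustrative only: the mapping T from quantization factor to quantization step is approximated by the well-known H.264 relation Qstep ≈ 0.625·2^(LY/6), which is an assumption (the patent only states that T is a lookup table), while the grade boundaries and correction values follow the preceding paragraphs.

```python
import math

def T(ly):
    # Approximate quantization-factor -> quantization-step mapping (assumed, H.264-style).
    return 0.625 * 2.0 ** (ly / 6.0)

def T_inv(qstep):
    # Inverse mapping, rounded to the nearest integer quantization factor.
    return round(6.0 * math.log2(qstep / 0.625))

def correction(i1, ly1):
    """Correction coefficient sigma per illumination grade and primary factor LY1.
    Returns None when enhancement transcoding is judged unnecessary (rules 1 and 2)."""
    if i1 > 0.6:                                   # rule 1: illumination is good enough
        return None
    if i1 > 0.4:                                   # slight low illumination
        table = [(16, 2), (20, -2), (34, -4)]
    elif i1 > 0.2:                                 # medium low illumination
        table = [(16, 0), (20, -2), (24, -4), (30, -6)]
    else:                                          # severe low illumination
        table = [(16, -2), (22, -4), (26, -6), (28, -8)]
    for upper, sigma in table:
        if ly1 <= upper:
            return sigma
    return None                                    # quality intolerable: skip enhancement

def optimal_ly2(ly1, i1):
    """LY2 = T^-1[ T(LY1) / i1 ] + sigma, or None if enhancement transcoding is skipped."""
    sigma = correction(i1, ly1)
    if sigma is None:
        return None
    return T_inv(T(ly1) / i1) + sigma

# Example: a medium low-illumination stream (i1 = 0.3) primarily quantized with LY1 = 22.
print(optimal_ly2(22, 0.3))
```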
Compared with the prior art, the application has the innovation points and advantages that:
(1) Prior-art video transcoding bright-structure quantization algorithms are designed for normal-illumination video; for low-illumination video transcoding, the video enhancement process strongly affects the quality of the transcoded video and the relation between the transcoding code rate and the bright-structure quantization factor, so those algorithms are no longer applicable. The application makes three improvements. First, a low-illumination video transcoding framework is constructed that combines video transcoding with low-illumination video enhancement. Second, within this framework, the influence of the quantization factor, the video content and the illumination coefficient on the PSNR-LY (video quality versus quantization factor) relation is studied; the factors that govern the selection of the optimal bright-structure quantization factor are identified through theoretical analysis and experimental data, and a PSNR-LY model and an R-LY (code rate versus quantization factor) model of low-illumination video transcoding are established. Third, the mapping relation between the quantization factors of low-illumination and normal-illumination video is studied, a theoretical formula for low-illumination video transcoding bright-structure quantization is derived, and the formula is corrected through the PSNR-LY and R-LY models and experimental analysis to obtain the final selection formula for the low-illumination video transcoding bright-structure quantization factor. The application combines low-illumination video enhancement with video transcoding, performs the enhancement during transcoding, analyzes the relation and distinction between low-illumination and normal-illumination video transcoding, and obtains through theoretical analysis and experimental verification a low-illumination video transcoding bright-structure quantization algorithm whose selected bright-structure quantization factor yields the lowest transcoding code rate while keeping the video quality as high as possible.
(2) For a low-illumination video input into the transcoding system, the decoded information is first parsed to obtain the primary quantization factor and the illumination coefficient, which determine the corresponding correction coefficient; the optimal bright-structure quantization factor of the low-illumination video is then calculated from the selection formula of the low-illumination video transcoding bright-structure quantization factor. Experimental verification shows that under slight low illumination (illumination coefficient 0.5), the error between the selected bright-structure quantization factor LY and the optimal LY is 0; under medium low illumination (illumination coefficient 0.3), the error between the selected LY and the optimal LY is at most 2, with an average PSNR error of -0.013 dB and an average Rate error of -3.3%; under severe low illumination (illumination coefficient 0.2), the error between the selected LY and the optimal LY is at most 2, with an average PSNR error of 0.01 dB and an average Rate error of 6.73%. The transcoding bright-structure quantization factor selected by the algorithm is very close to the optimal one and can satisfy the requirements of high quality and low code rate at the same time.
(3) The application provides a low-illumination video transcoding bright-structure quantization algorithm: by analyzing the low-illumination video quantization model, the mapping rule between the quantization factors in the coding quantization models of low-illumination and normal-illumination video is obtained and a selection formula for the low-illumination video transcoding bright-structure quantization factor under ideal conditions is derived; the formula is then corrected according to the obtained PSNR-LY model, finally yielding the calculation formula of the low-illumination video transcoding bright-structure quantization factor. Whether image enhancement and transcoding are necessary is judged from the illumination coefficient and the primary quantization factor in the code stream entering the transcoding system; if enhancement transcoding is needed, the corresponding transcoding bright-structure quantization factor is selected to obtain the optimal transcoding effect, ensuring that the selected quantization factor minimizes the bright-structure quantization distortion so that the transcoded video quality is highest, and, on the premise that the video quality stays at the same level as the highest quality, a transcoding bright-structure quantization factor with a lower code rate is selected whenever possible. The method combines low-illumination video enhancement with video transcoding, performs the enhancement during transcoding, and the bright-structure quantization factor selected by the low-illumination video transcoding bright-structure quantization algorithm ensures the lowest transcoding code rate while keeping the video quality as high as possible.
Description of the Drawings
Fig. 1 is a schematic diagram of a fit relationship between PSNR 2(LY1,LY2) and LY 2.
Fig. 2 is a statistical diagram of the transcoding rate for the medium low-illumination sequence Avenue.
Fig. 3 is a diagram of an initial code stream after undergoing two quantization and two reconstruction processes.
Fig. 4 is a diagram showing quantization values of the two quantization processes of the DCT coefficients.
Fig. 5 is a schematic diagram of an 8 x 8 block before and after DCT transformation.
Fig. 6 is a flowchart of a low-luminance video transcoding luma quantization algorithm.
FIG. 7 is a schematic diagram of a PSNR-Rate curve at slightly low illumination.
Fig. 8 is a schematic diagram of experimental subjective comparison of slightly low-light video.
FIG. 9 is a schematic diagram of PSNR-Rate curves at severe low illumination.
Fig. 10 is a schematic diagram of experimental subjective comparison of severe low-intensity video.
Detailed Description
The technical scheme of the low-illumination live video real-time transcoding enhancement bright structure quantization method provided by the application is further described below with reference to the accompanying drawings, so that the application can be better understood and implemented by those skilled in the art.
Video transcoding refers to converting an already encoded and compressed video stream into another video stream, and can effectively solve the compatibility problem of video data across different platforms and terminal devices. A video transcoding bright-structure quantization algorithm selects the optimal bright-structure quantization factor so that the minimum code rate is consumed on the premise that the transcoded video quality is as high as possible. Existing video transcoding bright-structure quantization algorithms target normal-illumination video; for low-illumination video transcoding, the video enhancement process strongly affects the quality of the transcoded video and the relation between the transcoding code rate and the bright-structure quantization factor, so the normal-illumination algorithms are no longer applicable.
The present application has been made in view of this problem, and its work is as follows: 1) a low-illumination video transcoding framework is constructed, combining video transcoding with low-illumination video enhancement; 2) within this framework, the influence of the quantization factor, the video content and the illumination coefficient on the PSNR-LY (video quality versus quantization factor) relation is studied, the factors that govern the selection of the optimal bright-structure quantization factor are identified through theoretical analysis and experimental data, and a PSNR-LY model and an R-LY (code rate versus quantization factor) model of low-illumination video transcoding are established; 3) the mapping relation between the quantization factors of low-illumination and normal-illumination video is studied, a theoretical formula for low-illumination video transcoding bright-structure quantization is derived, and the formula is corrected through the PSNR-LY and R-LY models and experimental analysis to obtain the final selection formula for the low-illumination video transcoding bright-structure quantization factor. The algorithm of the application first parses the decoded information of the low-illumination video entering the transcoding system to obtain the primary quantization factor and the illumination coefficient, thereby determining the corresponding correction coefficient; the optimal bright-structure quantization factor of the low-illumination video is then calculated from the selection formula of the low-illumination video transcoding bright-structure quantization factor.
Experimental verification of the algorithm shows that under slight low illumination (illumination coefficient 0.5), the error between the selected bright-structure quantization factor LY and the optimal LY is 0; under medium low illumination (illumination coefficient 0.3), the error between the selected LY and the optimal LY is at most 2, with an average PSNR error of -0.013 dB and an average Rate error of -3.3%; under severe low illumination (illumination coefficient 0.2), the error between the selected LY and the optimal LY is at most 2, with an average PSNR error of 0.01 dB and an average Rate error of 6.73%. The transcoding bright-structure quantization factor selected by the algorithm is very close to the optimal one and can satisfy the requirements of high quality and low code rate at the same time.
1. Low-illumination video real-time transcoding framework
An unclear low-illumination video sequence VIDEO is encoded by an H.264 encoder to obtain a once-quantized low-illumination code stream VIDEO_LY1.264; the VIDEO_LY1.264 stream is input into the transcoding system and inverse-quantized by the H.264 decoder, and by parsing the code stream information the quantization factor LY1 used in the first encoding and the decoded video sequence VIDEO_LY1.yuv are obtained; VIDEO_LY1.yuv is enhanced to obtain the enhanced video sequence VIDEO_LY1_ENHANCE.yuv, and the enhanced sequence is quantized and encoded with LY2 to obtain VIDEO_LY1_ENHANCE_LY2.264. In the whole process from the initial sequence VIDEO to the final output code stream VIDEO_LY1_ENHANCE_LY2.264, the video undergoes primary quantization encoding when it is encoded into the sequence VIDEO_LY1.264 that is input for transcoding, and secondary quantization encoding inside the transcoder.
When low-illumination transcoding bright-structure quantization is carried out, the PSNR of the transcoding is computed with the initial normal-illumination video sequence as the reference, not with the video sequence before transcoding. On the premise that the video quality is as good as possible, the LY2 computed by the transcoding bright-structure quantization algorithm must also reduce the post-transcoding code rate as much as possible. Therefore both the relation between the transcoding PSNR and LY2 and the relation between the transcoding code rate R and LY2 need to be optimized, and the transcoding PSNR-LY relation is determined by three factors: the quantization model, the video content, and the degree of low illumination.
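To make the framework concrete, the following Python sketch outlines the enhancement-transcoding stage that follows the H.264 decoding step. It is purely illustrative: the enhancement, LY2-selection and encoding steps are passed in as callables, since the patent does not prescribe a specific implementation for them, and the function itself is only a sketch of the processing order described above.

```python
def transcode_low_light(frames, ly1, i1, enhance, choose_ly2, encode):
    """Sketch of the enhancement-transcoding stage after H.264 decoding.

    frames     : decoded low-illumination yuv frames (e.g. numpy arrays)
    ly1, i1    : primary quantization factor and illumination coefficient parsed from the stream
    enhance    : callable(frame, i1) -> enhanced frame    (low-illumination video enhancement)
    choose_ly2 : callable(ly1, i1)   -> LY2 or None       (selection formula / decision rules)
    encode     : callable(frames, ly2) -> encoded stream  (the transcoder's H.264 encoder)
    """
    ly2 = choose_ly2(ly1, i1)
    if ly2 is None:                          # rules 1-2: enhancement transcoding not worthwhile
        return None
    enhanced = [enhance(f, i1) for f in frames]   # spatial-domain enhancement inside the transcoder
    return encode(enhanced, ly2)                  # bright-structure quantization and re-encoding
```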
2. Low-illumination live video transcoding PSNR-LY model
(I) Effect of the quantization factor on the transcoding PSNR-LY relation
The transcoding platform quantizes the video in two passes: a primary quantization when the stream is first encoded by the encoder, and a bright-structure quantization by the encoder inside the transcoder; the quantization operation is expressed as:
wherein F(u,v) represents the DCT coefficients of the initial video sequence before quantization, FQ(u,v) represents the DCT coefficients of the video after quantization, Q(u,v) represents the quantization weighting matrix, Qstep is the quantization step, whose value is associated with the quantization factor LY, and round[·] is the rounding function; the peak signal-to-noise ratio (PSNR) is adopted as the objective evaluation criterion of video quality;
The rule for determining the best LY2 under normal illumination is not suitable for low-illumination video: under low illumination the best LY2 is larger than LY1, and the best LY2 increases as LY1 increases, so the inflection point gradually moves backwards.
(II) Effect of the video content on the transcoding PSNR-LY relation
Transcoding bright-structure quantization analysis is carried out on three video sequences with different picture complexity and motion intensity. For a sequence with a flat, simple picture and gentle motion, most of the energy is concentrated in the low-frequency part in the upper-left corner after the DCT; the more complex the picture texture and the more intense the motion, the more energy is distributed to the high-frequency part in the lower-right corner after the DCT. When the DCT coefficients are quantized, the information of the high-frequency part, to which the human eye is insensitive, is lost while the low-frequency information is well preserved, so the overall PSNR decreases as the complexity and motion intensity of the video increase. However, different video contents have little influence on the optimal transcoding bright-structure quantization factor LY2; this behavior is similar to normal-illumination video transcoding, where the optimal bright-structure quantization factor LY2 is always equal to LY1 regardless of the video content. Therefore the video content need not be considered a determining factor when performing low-illumination video transcoding bright-structure quantization.
(III) Effect of the illumination coefficient on the transcoding PSNR-LY relation
The degree of low illumination of the sequence influences the distortion produced during the primary quantization coding, which in turn affects the quality of the yuv sequence after primary decoding and enhancement and finally the PSNR-LY curve of the secondary coding. Comparing the inflection-point data obtained experimentally with the theoretical values in the ideal state, the inflection point of the curve moves leftwards overall, and the leftward shift grows as the degree of low illumination becomes more severe. The reason is that when the illumination of a normal-illumination video is reduced by blending it with a solid black image on the three RGB channels with weight i1, the resulting video is of better quality than when the gray value of every pixel is simply multiplied by i1, and the lower the illumination, the more obvious the difference between the two effects.
(IV) Low-illumination video transcoding PSNR-LY model
The PSNR-LY curves are analyzed quantitatively and a PSNR-LY model of H.264 low-illumination video transcoding is constructed; the variation of the transcoded PSNR value PSNR2(LY1,LY2) with LY2 is fitted to a cubic function curve, expressed as:
PSNR2(LY1,LY2) = α·LY2³ + β·LY2² + γ·LY2 + δ   (formula 2)
The values of the four parameters α, β, γ and δ are related to the primary quantization factor LY1, the low-illumination degree i1 and the video content, where α < 0, β > 0, γ < 0 and δ > 0; the value range of α is [-4×10⁻⁴, -1×10⁻⁵], the value range of β is [20, 35], and the values of the four parameters are strongly affected by the low-illumination degree i1;
After the PSNR-LY curve is fitted to a cubic function, PSNR2(LY1,LY2) necessarily has an extreme point: when LY2 = LY2', the derivative PSNR2'(LY1,LY2) = 0 and PSNR2(LY1,LY2) reaches its maximum PSNR2max(LY1,LY2); when LY2 < LY2', the derivative PSNR2'(LY1,LY2) > 0 and PSNR2(LY1,LY2) rises slowly or remains substantially unchanged; when LY2 > LY2', the derivative PSNR2'(LY1,LY2) < 0 and PSNR2(LY1,LY2) drops rapidly;
As shown in fig. 1, the transcoded PSNR2(LY1,LY2)-LY2 curve is divided into three areas A, B and C. The peak value PSNR2max(LY1,LY2) of the PSNR is determined first. The range of interval A is [PSNR2max(LY1,LY2) - 0.12 dB, PSNR2max(LY1,LY2)]; in this interval the video quality after transcoding is the highest, and the LY2 values corresponding to PSNR2(LY1,LY2) in this interval are good bright-structure quantization factors. Besides guaranteeing the transcoding quality, the code rate must also be considered, and the scheme occupying the least code rate is chosen on the premise that the quality remains close to the peak: since the span of interval A is only 0.12 dB, the quality difference within it is small, so the bright-structure quantization factor with the lowest code rate in interval A is selected as the best LY2. The range of interval B is [PSNR2max(LY1,LY2) - 0.8 dB, PSNR2max(LY1,LY2) - 0.12 dB]; in this interval the video quality starts to change noticeably, and the subjective and objective quality of the transcoded video clearly decreases as the bright-structure quantization factor increases, but remains within an acceptable range. The range of interval C is [0, PSNR2max(LY1,LY2) - 0.8 dB]; in this interval the subjective and objective quality of the transcoded video drops rapidly as the bright-structure quantization factor increases, and the LY2 values corresponding to PSNR2(LY1,LY2) in this area are not suitable as transcoding bright-structure quantization factors.
By analyzing the low-illumination degree, the primary quantization factor information carried in the input code stream and the video content of the initial stream, a curve model relating the peak signal-to-noise ratio PSNR2(LY1,LY2) to the bright-structure quantization factor LY2 is established and the optimal transcoding bright-structure quantization factor is predicted, so that the transcoded video quality is optimal and the transcoding code rate is lower.
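The following Python sketch illustrates how such a cubic PSNR-LY model could be fitted to measured points and how interval A could then be used to pick a bright-structure quantization factor. The sample measurements are invented for illustration; only the procedure (cubic fit, locate the peak, take the largest LY2 whose predicted PSNR stays within 0.12 dB of the peak, since a larger factor consumes a lower code rate) follows the description above.

```python
import numpy as np

# Invented sample measurements: (LY2, transcoded PSNR in dB) for one LY1 / i1 setting.
ly2  = np.array([20, 22, 24, 26, 28, 30, 32, 34, 36, 38], dtype=float)
psnr = np.array([33.1, 33.8, 34.3, 34.6, 34.7, 34.65, 34.4, 33.9, 33.1, 32.0])

# Fit PSNR2(LY2) = a*LY2^3 + b*LY2^2 + c*LY2 + d (the cubic form of formula 2).
a, b, c, d = np.polyfit(ly2, psnr, 3)
poly = np.poly1d([a, b, c, d])

# Peak of the cubic: real root of the derivative inside the measured LY2 range.
crit = [r.real for r in np.roots(poly.deriv())
        if abs(r.imag) < 1e-9 and ly2.min() <= r.real <= ly2.max()]
ly2_peak  = max(crit, key=poly)           # LY2' where PSNR2 is maximal
psnr_peak = poly(ly2_peak)

# Interval A = [peak - 0.12 dB, peak]; choose the largest integer LY2 inside it.
candidates = [q for q in range(int(ly2.min()), int(ly2.max()) + 1)
              if poly(q) >= psnr_peak - 0.12]
best_ly2 = max(candidates)
print(ly2_peak, psnr_peak, best_ly2)
```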
3. Low-illumination video transcoding R-LY model
A rate-distortion model is introduced into low-illumination video transcoding and an R-LY model is established. The quantization factor LY determines how strongly the video image information is compressed: the larger LY is, the more the image is compressed, the more information is lost, and the lower the consumed code rate; the quantization factor LY and the code rate are therefore in an inverse relation and R-LY is an inverse proportional function. Code streams of different low-illumination degrees, primary quantization factors and video contents are selected for calculation. Fig. 2 shows the transcoding rate of the medium low-illumination video sequence Avenue as a function of LY2.
The relation model of the code rate and the bright structure quantization factor in the low-illumination video transcoding process is fitted by using a formula 3:
wherein R represents the transcoding code rate, X1 and X2 are coefficients, and Qstep is the quantization step, whose value corresponds one-to-one with the quantization factor LY; the transcoding code rate R is inversely proportional to the bright-structure quantization factor LY, so the larger LY is, the smaller R is. To keep the code rate consumed in transcoding as low as possible, larger quantization factors are preferred when selecting the optimal bright-structure quantization factor; among the bright-structure quantization factors that satisfy the conditions of interval A, the largest LY2 in that range is selected directly, so that the requirements on both transcoding quality and transcoding code rate are met.
4. H.264 low-illumination video transcoding and bright structure quantization algorithm
(One) Low-luminance and Normal-luminance video quantization factor mapping correlation model
The input compressed code stream is first decoded to obtain the primary quantization factor in the code stream information, and the decoded information is inverse-quantized with the primary quantization factor to reconstruct the initial yuv sequence. The yuv sequence is then input into the encoding end, which performs bright-structure quantization with a larger bright-structure quantization factor, carries out entropy coding, and finally outputs the encoded code stream. Fig. 3 illustrates how errors arise in the initial code stream during the two quantization and two reconstruction processes.
In fig. 3, m and n represent the gray values of two different pixels in a frame image of the initial video, Q1 and Q2 represent the two quantization processes, and Q1⁻¹ and Q2⁻¹ represent the corresponding inverse quantization processes;
For the input value m, after twice quantization and twice inverse quantization, the reconstructed value is:
If the quantization is not performed once, the Q 2 is directly used for quantization and inverse quantization, and the reconstruction value is as follows:
For the input gray value m, quantizing twice with Q1 and Q2 gives the same reconstructed value as quantizing once with Q2 only, so no error is introduced;
For the gray value n in fig. 3, the reconstructed values after two times of quantization and inverse quantization are:
The reconstructed value after only one quantization and inverse quantization of Q 2 is:
For the input gray value n, the result of quantizing twice with Q1 and Q2 differs from the reconstructed value obtained by quantizing only once with Q2, causing an error Δ = N − M, where N and M denote the two reconstructed values; this secondary quantization error becomes the bright-structure quantization error. The gray values of the image are DCT-transformed, and the transformed coefficients are marked on a horizontal coordinate axis according to their amplitudes, as shown in fig. 4.
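As an illustrative aid, the following minimal Python sketch reproduces the effect described above with plain uniform scalar quantization; the step sizes and gray values are arbitrary examples and are not taken from the present application.

def quantize(value, step):
    # quantize to the nearest level, then reconstruct (inverse quantize)
    return round(value / step) * step

Q1, Q2 = 6, 10
for gray in (30, 34):                         # stand-ins for the pixels m and n
    once = quantize(gray, Q2)                 # single quantization with Q2
    twice = quantize(quantize(gray, Q1), Q2)  # Q1 first, then Q2 (transcoding path)
    print(gray, once, twice, "error:", once - twice)
# gray=30 reconstructs identically on both paths; gray=34 does not, producing an error.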
In fig. 4 (a), the DCT coefficients in the rectangular region lie in the range [(j+0.5)Q2, (j+1.5)Q2]; when they are encoded only once by the encoder, these values are quantized to (j+1)Q2. If instead the transcoding process is followed, the first encoding quantizes them with Q1 to iQ1, and the encoder of the transcoding stage then quantizes them with the quantization factor Q2 to jQ2 (iQ1 lying in the interval [(j-0.5)Q2, (j+0.5)Q2]); shifting fig. 4 (b) horizontally yields fig. 4 (c). The DCT coefficients in the rectangular block region are therefore quantized to two different values after single encoding and after transcoding, so errors are generated in the rectangular block regions of fig. 4 (a) and (c).
Assuming that the quantization factor for performing primary coding quantization on the initial low-luminance video is Q 1, and the luma quantization factor for encoding the enhanced yuv sequence by the encoder in the transcoding process is Q 2, the distortion caused by primary coding quantization is expressed as formula 8:
where y represents the DCT coefficient, p (y) represents the probability density function of the DCT coefficient distribution, and after secondary quantization with the quantization factor Q 2, the distortion caused by the secondary quantization is expressed as formula 9:
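The images of formulas 8 and 9 are not reproduced in this text. A standard form of the quantization distortion as a function of the step size, which formulas 8 and 9 plausibly instantiate with Q1 and Q2 respectively, is given below purely as an assumed reconstruction:

D(Q) \;=\; \sum_{i=-\infty}^{\infty} \int_{(i-\frac{1}{2})Q}^{(i+\frac{1}{2})Q} \bigl(y - iQ\bigr)^{2}\, p(y)\, dy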
The magnitude of the twice quantized distortion value is determined by DCT coefficients and twice quantized quantization step sizes, and based on the DCT coefficient relation between the normal illumination video and the video after illumination reduction, the relation of quantization factors between the normal illumination video and the low illumination video under the same distortion condition is obtained; establishing an 8×8 image block shown in fig. 5 (a), and performing DCT transformation on the image block to obtain an 8×8DCT coefficient block shown in fig. 5 (b);
The gray value of each pixel in fig. 5 (a) is multiplied in turn by 0.9, 0.8, ..., 0.2 and 0.1, and the results are rounded. The resulting 9 groups of 8×8 image blocks are then DCT-transformed to obtain 9 DCT coefficient blocks, and the low-frequency components of the DCT coefficient blocks are plotted as a curve against the multiplying coefficient. The DCT coefficients vary linearly with the pixel gray values; thus, when a normal-illumination video is subjected to illumination reduction with weighting coefficient i1, under absolutely ideal conditions the DCT coefficients y' of the low-illumination video and the DCT coefficients y of the normal-illumination video satisfy y' = y·i1.
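As an illustrative aid, the linearity of the DCT used in this derivation can be checked with the short Python sketch below; the orthonormal DCT-II matrix is built directly with numpy, and the random 8×8 block is illustrative (rounding the scaled gray values to integers, as in the experiment above, makes the relation only approximately linear).

import numpy as np

def dct_matrix(n=8):
    c = np.array([np.sqrt(1.0 / n)] + [np.sqrt(2.0 / n)] * (n - 1))
    k = np.arange(n)
    return c[:, None] * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))

def dct2(block):
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T

block = np.random.randint(0, 256, (8, 8)).astype(float)
i1 = 0.5
print(np.allclose(dct2(i1 * block), i1 * dct2(block)))  # True: the DCT is linear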
If the low-illumination video and the normal-illumination video are each quantized once and their degrees of distortion are to remain consistent, the primary quantization step lengths satisfy the following relation:
Where Q'step represents the primary quantization step of the low-illumination video, Qstep represents the primary quantization step of the normal-illumination video, and i1 represents the weighting coefficient of the illumination change between the two videos. Based on the mapping relation between the quantization factor and the quantization step, Qstep = T(LY1), and formula 10 can be written as:
After obtaining the corresponding low-illumination video quantization step length, performing inverse table lookup to obtain a corresponding low-illumination video primary quantization factor LY 1':
T and T⁻¹ respectively represent the lookup table and the reverse lookup table, and the weighting coefficient i1 reflects the degree of low illumination of the low-illumination video. Formula 12 gives the mapping relation, in the ideal state, between the quantization factors of videos with different degrees of low illumination and those of the normal-illumination video.
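The images of formulas 10 to 12 are not reproduced in this text. Given the stated relations (the DCT coefficients scale by i1, Qstep = T(LY1), and LY1' is obtained by reverse lookup), a plausible reconstruction, offered only as an assumption, is:

Q'_{\mathrm{step}} = i_1\, Q_{\mathrm{step}} \quad (10) \qquad
Q'_{\mathrm{step}} = i_1\, T(\mathrm{LY}_1) \quad (11) \qquad
\mathrm{LY}_1' = T^{-1}\bigl(i_1\, T(\mathrm{LY}_1)\bigr) \quad (12)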
Formula 12 is derived for an 8×8 block with a single gray value and a simple variation rule; the quantization factor calculated under such ideal conditions is not fully applicable in practice, so a correction coefficient σ is introduced to correct formula 12, giving formula 13:
When the bright-structure quantization factor satisfies LY2 = LY1, the combined performance of transcoded video quality and code rate is optimal; combining this with the mapping relation of formula 13 yields the selection formula 14 for the optimal bright-structure quantization factor in low-illumination video transcoding bright-structure quantization:
Calculating a correction coefficient:
wherein LY2(theoretical) is given by formula 16:
LY2(theoretical) is calculated from the primary quantization factor LY1 and the illumination coefficient i1, and its value is independent of the video content. Different video contents may raise or lower the transcoded PSNR values as a whole, but they do not affect the inflection point of the PSNR-LY curve; that is, the optimal bright-structure quantization factor LY2(actual) for low-illumination video transcoding is determined by the primary quantization factor and the illumination coefficient and is independent of the video content. Since both LY2(theoretical) and LY2(actual) are independent of the video content, their difference σi1 is also independent of the video content.
After the correction coefficients σi1 corresponding to different illumination coefficients i1 and different primary quantization factors LY1 are calculated, the correction coefficient σi1 for each combination of LY1 and i1 is determined. Any low-illumination video code stream input into the transcoding system is decoded to obtain the primary quantization factor LY1 and the illumination coefficient i1, the theoretical value LY2(theoretical) of the optimal bright-structure quantization factor is calculated, and the optimal bright-structure quantization factor of the low-illumination video is obtained by correcting LY2(theoretical) with the corresponding correction coefficient σi1.
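As an illustrative aid, the following Python sketch shows how the corrected bright-structure quantization factor could be computed. The lookup tables T and T⁻¹ of the present application are not reproduced in this text; they are approximated here by the well-known H.264 relation in which the quantization step roughly doubles every 6 QP (Qstep ≈ 0.625·2^(QP/6)), so the numerical values are illustrative only.

import math

def T(qp):                      # quantization factor -> quantization step (assumed mapping)
    return 0.625 * 2.0 ** (qp / 6.0)

def T_inv(qstep):               # quantization step -> quantization factor (assumed mapping)
    return 6.0 * math.log2(qstep / 0.625)

def ly2_optimal(ly1, i1, sigma):
    ly2_theory = T_inv(T(ly1) / i1)   # assumed form of the theoretical value
    return round(ly2_theory + sigma)  # corrected with the coefficient sigma

print(ly2_optimal(ly1=24, i1=0.5, sigma=-4))  # -> 26 under these assumptions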
(II) low-illumination video transcoding and bright structure quantization algorithm flow
When transcoding any input video, the following three decision rules are set:
Rule 1: after a certain video is coded for one time, inputting the video into a transcoding system, decoding the video by a decoder in the transcoding system to obtain a quantization factor LY 1 and an image gray value (Y channel value) adopted in the primary coding, comparing the dynamic range of the gray value with the dynamic range (0-255) of the gray value of the video with normal illumination, and if the dynamic range of the gray value of the video image is i 1 times of the video with normal illumination and i 1 is more than 0.6, obtaining good video illumination without image enhancement and transcoding;
Rule 2: when i 1 is less than 0.6, the method accords with the judgment standard of the low-illumination video, enters the next flow, judges the degree of low illumination, and divides the low-illumination video into three steps according to the value of i 1: slight low illumination (0.6 is more than or equal to i 1 is more than 0.4), medium low illumination (0.4 is more than or equal to i 1 is more than 0.2) and serious low illumination (i is less than or equal to 0.2), for slight low illumination video, when LY 1 is more than 34, the video quality after primary encoding is poor, and after video enhancement and secondary encoding, the subjective quality of the video is poor, so that a viewer cannot accept the video, the image enhancement and transcoding are not necessary, for medium low illumination video, when LY 1 is more than 30, the image enhancement and transcoding are not necessary, for serious low illumination video, when LY 1 is more than 28;
Rule 3: in addition to the above two cases, the input video in other cases needs to be subjected to video enhancement and transcoding light structure quantization, a corresponding correction coefficient sigma i1 is obtained according to the luminance parameter i 1 of the video and the quantized factor LY 1 in one time, and then the optimal light structure quantized factor LY 2 is selected, and the transcoding code rate is lowest on the premise that the light structure quantized factor can ensure that the quality of the transcoded video is as good as possible.
The H.264 low-illumination video transcoding bright-structure quantization algorithm is an optimized selection algorithm for the quantization factor, and its flow is shown in fig. 6.
The low-illumination video transcoding bright-structure quantization algorithm divides the video into three grades according to the degree of low illumination, and the correction parameter takes different values for different illumination parameters and different primary quantization factors, as illustrated in the sketch below. For slight low-illumination video: when LY1 ≤ 16 the correction parameter σ is 2; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 34, σ is -4; when LY1 > 34 the video quality is intolerable and no video enhancement or transcoding is needed. For medium low-illumination video: when LY1 ≤ 16, σ is 0; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 24, σ is -4; when 24 < LY1 ≤ 30, σ is -6; when LY1 > 30 the video quality is intolerable and no video enhancement or transcoding is needed. For severe low-illumination video: when LY1 ≤ 16, σ is -2; when 16 < LY1 ≤ 22, σ is -4; when 22 < LY1 ≤ 26, σ is -6; when 26 < LY1 ≤ 28, σ is -8; when LY1 > 28 the video quality is intolerable and no video enhancement or transcoding is needed.
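As an illustrative aid, the decision rules and the correction-parameter table above can be transcribed as the following Python sketch; the function names are illustrative and the interval boundaries follow the text.

def needs_enhancement(i1, ly1):
    """Return False when illumination is good or the quality is already intolerable."""
    if i1 > 0.6:                       # rule 1: good illumination
        return False
    if 0.4 < i1 <= 0.6 and ly1 > 34:   # slight low illumination, intolerable quality
        return False
    if 0.2 < i1 <= 0.4 and ly1 > 30:   # medium low illumination
        return False
    if i1 <= 0.2 and ly1 > 28:         # severe low illumination
        return False
    return True

def correction_sigma(i1, ly1):
    """Correction parameter sigma per illumination grade and primary quantization factor."""
    if 0.4 < i1 <= 0.6:                # slight
        table = [(16, 2), (20, -2), (34, -4)]
    elif 0.2 < i1 <= 0.4:              # medium
        table = [(16, 0), (20, -2), (24, -4), (30, -6)]
    else:                              # severe
        table = [(16, -2), (22, -4), (26, -6), (28, -8)]
    for upper, sigma in table:
        if ly1 <= upper:
            return sigma
    return None                        # outside the table: no enhancement or transcoding

print(needs_enhancement(0.5, 24), correction_sigma(0.5, 24))  # True -4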
When a low-illumination video is input into the transcoding system, the decoder in the transcoding system decodes it, and the decoded information is analyzed to obtain the values of the primary quantization factor LY1 and the illumination parameter i1. The correction coefficient σ is chosen according to the specific case, and the value of the optimal bright-structure quantization factor of the low-illumination video is finally obtained from the formula QP2 = T⁻¹[T(QP1)/i1] + σ.
The low-illumination video transcoding light structure quantization ensures the best quality of the transcoded video by selecting the optimal transcoding light structure quantization factor, and simultaneously ensures the minimum code rate under the condition of the maximum PSNR value of the transcoded video. Analyzing the influence of the quantization factors, the video content and the illumination coefficients on the PSNR-LY relation of the low-illumination video transcoding, establishing a PSNR-LY model and an R-LY model of the low-illumination video transcoding, deducing a theoretical formula of the low-illumination video transcoding light structure quantization factor according to the mapping relation of the low-illumination video and the normal-illumination video quantization factor, correcting the theoretical formula, and finally obtaining a calculation formula of the low-illumination video transcoding light structure quantization factor. Judging whether image enhancement and transcoding are necessary according to the luminance coefficient and primary quantization factor in the code stream of the input transcoding system, and if enhancement transcoding is necessary, selecting the corresponding transcoding light quantization factor to obtain the optimal transcoding effect.
5. Experiment and result analysis
The H.264 low-illumination video transcoding bright-structure quantization algorithm is verified and analyzed experimentally. Transcoding is performed in full-resolution full-encoding mode: a Retinex video enhancement module is connected in series after the decoder of the transcoding system, and the video-enhanced sequence is then encoded by an H.264 encoder. The video sequences used in the experiments have an IPPPP structure, and every frame of the group of pictures is used as a reference when calculating PSNR values and objectively evaluating video quality. In all experiments, the PSNR values reflecting the objective quality of the video are in dB, and the code rates are in kb/s.
(I) Experimental verification of slight low-illumination video
Three groups of randomly shot video sequences are selected for the experimental verification: the GRASSLAND grassland sequence, with simple picture content and a mostly static background; the Traffic sequence of smooth road traffic, with a complex background and moving objects following consistent trajectories; and the Crossroad sequence of pedestrians at an intersection, with a complex background, vigorous character motion and irregular motion trajectories. The resolution of the three test videos is 640×480, the sequence length is 50 frames, and the illumination coefficient i1 used in the illumination-reduction processing is 0.5 (slight low illumination).
The low-illumination code stream is quantized and encoded for the first time with LY1 = {20, 24} and then input into the transcoding system for decoding; the decoded low-illumination yuv sequence is enhanced and transcoded with the bright-structure quantization factor LY2, where LY2 = {14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 36, 40}. The transcoded video sequence is compared with the initial normal-illumination sequence, the transcoded PSNR is calculated, and the peak signal-to-noise ratio PSNR and the transcoding code Rate are recorded. The PSNR and Rate errors refer to the differences in peak signal-to-noise ratio and code rate between the LY2 values selected by the two methods.
The data is plotted into a Rate distortion curve, as shown in fig. 7, where the transcoding Rate is on the abscissa, the transcoding peak signal-to-noise ratio PSNR is on the ordinate, and each point on the curve represents, in order from left to right, the corresponding Rate and PSNR values for the light-structure quantization factors ly2= {40, 36, 32, 30, 28, 26, 24, 22, 20, 18, 16, 14 }.
As can be seen from fig. 7, the LY2 selected by the algorithm of the present application coincides with the optimal LY2, indicated by "×". The selected LY2 lies on the inflection point of the curve, so the transcoding code rate is kept as small as possible while the video quality is kept as high as possible. Under slight low illumination, the error between the LY2 selected by the algorithm of the present application and the optimal LY2 is 0, and the corresponding PSNR and Rate errors are 0.
Since the PSNR value corresponding to the best LY2 selected by the algorithm of the present application is not the maximum PSNR value, fig. 8 shows, in order to analyze the subjective effect of the video quantized with the LY2 selected by the algorithm of the present application, the subjective image effects before transcoding and after transcoding with the LY2 of the present algorithm and with the LY2 corresponding to the maximum PSNR point, for the code streams with LY1 = 24: the 10th frame of the GRASSLAND stream, the 8th frame of the Traffic stream, and the 42nd frame of the Crossroad stream. Comparison of the images before and after transcoding shows that the subjective quality of the video image is obviously improved by the low-illumination video transcoding. Comparing the subjective quality of the images in fig. 8 (b) and fig. 8 (c), there is hardly any difference, which shows that the bright-structure quantization factor selected by the algorithm of the present application keeps the transcoded video quality as high as possible.
(II) Experimental verification of severe Low-illumination video
The experiment in this part selects the GRASSLAND grassland sequence, with simple picture content and a mostly static background; the Traffic sequence of smooth road traffic, with a complex background and moving objects following consistent trajectories; and the Crossroad sequence of pedestrians at an intersection, with a complex background, vigorous character motion and irregular motion trajectories. The resolutions of the three test videos are 640×480, the sequence lengths are 50 frames, and the weighting coefficient i1 used in the illumination-reduction processing is 0.2.
The low-illumination code stream is subjected to first quantization coding by LY 1 = {20, 24} and then is input into a transcoding system for decoding, the decoded low-illumination yuv sequence is subjected to video enhancement processing and is transcoded by a light-structure quantization factor LY 2, wherein the value of LY 2 is LY 2= {20, 22, 24, 26, 28, 30, 32, 34, 36, 38}. And comparing the transcoded video sequence with the initial normal illumination, calculating transcoded PSNR, and recording the peak signal-to-noise ratio PSNR and the transcoded code Rate.
The data is plotted as a rate-distortion curve, as shown in fig. 9, where the transcoding rate Rate is on the abscissa and the transcoded peak signal-to-noise ratio PSNR is on the ordinate, and each point on the curve represents, in order from left to right, the corresponding Rate and PSNR values for the bright-structure quantization factors LY2 = {38, 36, 34, 32, 30, 28, 26, 24, 22, 20}.
In fig. 9, where the LY2 selected by the algorithm of the present application coincides with the best LY2, the point is indicated by "×"; where the two do not coincide, the selected LY2 is indicated by "×" and the best LY2 by "Δ". From the data, for the video sequences Crossroad_0.2_20 and Traffic_0.2_24 the LY2 selected by the algorithm of the present application differs from the best LY2 by 2, the corresponding PSNR values differ by 0.026 dB and 0.036 dB respectively, and the code rates differ by 22% and 18.4% respectively. The average PSNR error of the six groups of experimental data is 0.01 dB and the average Rate error is 6.73%.
Fig. 10 shows the subjective image effects (LY1 = 24) after quantization with the LY2 selected by the algorithm of the present application and with the LY2 corresponding to the maximum PSNR point. Comparison of the images before and after transcoding shows that the subjective quality of the video image is obviously improved by the low-illumination video transcoding. Comparing the subjective quality of the images in fig. 10 (b) and fig. 10 (c), there is hardly any difference, which shows that the bright-structure quantization factor selected by the algorithm of the present application keeps the transcoded video quality as high as possible.
Sequences with different contents are subjected to different degrees of illumination reduction to simulate video shot under actual low-illumination conditions. For videos with different contents and different degrees of low illumination, a primary quantization factor LY1 and a bright-structure quantization factor LY2 are selected for quantization, and the corresponding PSNR and Rate are recorded. Over many experiments, the PSNR-LY and R-LY relations agree with the PSNR-LY and R-LY models of low-illumination video transcoding proposed in the present application. PSNR-Rate curves after enhanced transcoding are then drawn with the code Rate as the abscissa and the peak signal-to-noise ratio PSNR as the ordinate. The curves show that the LY2 predicted by the algorithm of the present application mostly lies at the inflection point of the curve and coincides with the optimal LY2. The average PSNR error under different degrees of low illumination is within ±0.05 dB, and the average Rate error is within ±10%. The code stream transcoded with the LY2 selected by the algorithm of the present application shows no subjective difference from the code stream transcoded with the optimal LY2. The algorithm of the present application therefore works well for transcoding genuinely captured low-illumination video.

Claims (9)

1. A low-illumination live video real-time transcoding enhancement bright-structure quantization method, characterized in that low-illumination live video enhancement is combined with video transcoding, video enhancement is carried out during the video transcoding process, and the bright-structure quantization factor selected by the low-illumination video transcoding bright-structure quantization algorithm yields the lowest transcoding code rate while keeping the video quality as high as possible;
1) Constructing a low-illumination video transcoding framework: the low-illumination video is quantized once by an H.264 encoder and input into the transcoding system, where it is first decoded, then video enhancement is carried out, and finally the enhanced transcoded code stream is output after transcoding bright-structure quantization;
2) Analyzing the effect of quantization factors, video content and illumination coefficients on the PSNR-LY relation: within the low-illumination video transcoding framework, the variation of the PSNR-LY curves under different quantization factors, different video contents and different illumination coefficients is analyzed, and the PSNR-LY and R-LY models of low-illumination video transcoding are constructed;
3) Proposing a low-illumination video transcoding bright-structure quantization algorithm: the low-illumination video quantization model is analyzed to obtain the mapping rule between the quantization factors of the low-illumination and normal-illumination video coding quantization models, a selection formula for the low-illumination video transcoding bright-structure quantization factor in the ideal state is derived, the formula of the low-illumination video transcoding bright-structure quantization algorithm is corrected according to the obtained PSNR-LY model, and the final calculation formula of the low-illumination video transcoding bright-structure quantization factor is obtained; whether image enhancement and transcoding are necessary is judged according to the illumination coefficient and the primary quantization factor in the code stream input to the transcoding system, and if enhancement transcoding is needed, the corresponding transcoding bright-structure quantization factor is selected to obtain the optimal transcoding effect, the selected quantization factor ensuring that the bright-structure quantization distortion is minimal and the video quality after transcoding is highest, while the transcoding bright-structure quantization factor with a lower code rate is selected as far as possible on the premise that the video quality is at the same level as the highest achievable quality.
2. The method for real-time transcoding and enhancing bright structure quantization of low-luminance live video according to claim 1, wherein the low-luminance live video is transcoded in real time: the unclear low-illumination VIDEO sequence video_ly1.yuv is encoded by an H.264 encoder to obtain the once-quantized low-illumination code stream video_LY1.264; the video_LY1.264 code stream is input into the transcoding system and inverse-quantized by the H.264 decoder, and the quantization factor LY1 used in the first encoding and the decoded VIDEO sequence video_LY1.yuv are obtained by analyzing the code stream information; video_LY1.yuv is video-enhanced to obtain the enhanced VIDEO sequence video_LY1_ENHANCE.yuv, and the enhanced sequence is quantized and encoded with LY2 to obtain video_LY1_ENHANCE_LY2.264; in the whole process from the initial sequence to the transcoded output video_LY1_ENHANCE_LY2.264, the video sequence undergoes a first quantization, enters the transcoder as the input stream video_LY1.264, and then undergoes a secondary quantization encoding, i.e. two quantization models are involved;
When low-illumination transcoding bright-structure quantization is carried out, the transcoded PSNR value is calculated with the initial normal-illumination video sequence as the reference; on the premise that the video quality is as good as possible, the LY2 calculated by the transcoding bright-structure quantization algorithm must also reduce the transcoded code rate as much as possible, so both the relation between the transcoded PSNR and LY2 and the relation between the transcoded code rate R and LY2 must be taken into account; the relation between the transcoded PSNR and LY is determined by three factors: the quantization model, the video content and the degree of low illumination.
3. The method for real-time transcoding and enhancing bright structure quantization of low-illumination live video according to claim 1, wherein the effect of the quantization factor on the transcoding PSNR-LY relation is as follows: the transcoding platform quantizes the video in two processes, a primary quantization by the encoder that produces the input code stream and a bright-structure quantization by the encoder of the transcoder; the quantization operation is expressed as:
Wherein F (u, v) represents DCT coefficients of an initial video sequence before quantization, F Q (u, v) represents DCT coefficients of a video after quantization, Q (u, v) represents a quantization weighting matrix, Q is a quantization step length, the value of Q is associated with a quantization factor LY, round [ ] is a rounding function, and a peak signal-to-noise ratio PSNR value is adopted as an objective evaluation standard of video quality;
Under the low illumination condition, the value of the optimal LY 2 is larger than LY 1, and the value of the optimal LY 2 is increased along with the increase of LY 1, so that the inflection point gradually moves backwards.
4. The method for real-time transcoding and enhancing bright structure quantization of low-luminance live video according to claim 1, wherein the low-luminance video transcoding PSNR-LY model is as follows: the PSNR-LY curves are quantitatively analyzed, a PSNR-LY model of H.264 low-illumination video transcoding is constructed, and the variation of the transcoded PSNR value PSNR2(LY1,LY2) with LY2 is fitted with a cubic function curve, expressed as:
The values of the four parameters α, β, γ and δ are related to the primary quantization factor LY1, the low-illumination degree i1 and the video content, where α < 0, β > 0, γ < 0 and δ > 0; the value range of α is [-4×10⁻⁴, -1×10⁻⁵], the value range of β is [20, 35], and the values of the four parameters α, β, γ and δ are strongly influenced by the low-illumination degree i1;
After fitting the PSNR-LY curve with a cubic function, PSNR2(LY1,LY2) must have an extreme point: when LY2 = LY2', the derivative PSNR2′(LY1,LY2) = 0 and PSNR2(LY1,LY2) takes its maximum PSNR2max(LY1,LY2); when LY2 < LY2', the derivative PSNR2′(LY1,LY2) > 0 and PSNR2(LY1,LY2) rises slowly or remains substantially unchanged; when LY2 > LY2', the derivative PSNR2′(LY1,LY2) < 0 and PSNR2(LY1,LY2) drops rapidly;
By analyzing the low illumination degree, primary quantization factor information carried by the input code stream and the video content of the initial code stream, a curve model of peak signal to noise ratio PSNR 2(LY1,LY2) and a bright-structure quantization factor LY 2 is established, and the optimal transcoding bright-structure quantization factor is predicted, so that the video quality after transcoding is optimal, and the transcoding code rate is lower.
5. The method for real-time transcoding and enhancing the bright structure quantization of low-luminance live video according to claim 1, wherein the low-luminance video transcoding R-LY model is as follows: a rate-distortion model is introduced into low-illumination video transcoding to establish the R-LY model; the quantization factor LY determines the degree to which the video image information is compressed, and the larger LY is, the more heavily the image is compressed, the more information is lost and the lower the consumed code rate is; the quantization factor LY and the code rate are therefore in an inverse relation, and R-LY is an inverse-proportion function; code streams with different low-illumination degrees, primary quantization factors and video contents are selected for calculation;
The relation model of the code rate and the bright structure quantization factor in the low-illumination video transcoding process is fitted by using a formula 3:
Wherein R represents the transcoding code rate, X1 and X2 are coefficients, and Qstep is the quantization step length, whose value corresponds one-to-one with the quantization factor LY; the transcoding code rate R is inversely proportional to the bright-structure quantization factor LY, so the larger LY is, the smaller R is; to keep the code rate consumed in the transcoding process as low as possible, larger quantization factors are preferred when selecting the optimal bright-structure quantization factor, and among the bright-structure quantization factors that satisfy the condition in the A interval, the largest LY2 in the range is chosen directly, which meets the requirements of both transcoding quality and transcoding code rate.
6. The method for real-time transcoding and enhancing bright structure quantization of low-luminance live video according to claim 1, wherein the low-luminance and normal-luminance video quantization factor mapping association model is as follows: the input compressed code stream is first decoded to obtain the primary quantization factor in the code stream information, the decoded information is inverse-quantized with the primary quantization factor to reconstruct the initial yuv sequence, the yuv sequence is input into the encoding end, the encoding end performs bright-structure quantization with a larger bright-structure quantization factor and completes the encoding, and the encoded code stream is finally output;
m and n represent the gray values of two different pixels in a certain frame image of the initial video, Q1 and Q2 represent the two quantization processes respectively, and Q1⁻¹ and Q2⁻¹ represent the corresponding inverse quantization processes respectively;
For the input value m, after twice quantization and twice inverse quantization, the reconstructed value is:
If the quantization is not performed once, the Q 2 is directly used for quantization and inverse quantization, and the reconstruction value is as follows:
For the input gray value m, quantizing twice with Q1 and Q2 gives the same reconstructed value as quantizing once with Q2 only, so no error is introduced;
for the gray value n, the reconstructed value after twice quantization and inverse quantization is:
The reconstructed value after only one quantization and inverse quantization of Q 2 is:
For the input gray value n, the result of quantizing twice with Q1 and Q2 differs from the reconstructed value obtained by quantizing only once with Q2, causing an error Δ = N − M, where N and M denote the two reconstructed values; this secondary quantization error becomes the bright-structure quantization error; the gray values of the image are DCT-transformed, and the transformed coefficients are marked on a horizontal coordinate axis according to their amplitudes.
7. The method for real-time transcoding and enhancing a bright structure quantization of a low-luminance live video according to claim 6, wherein assuming that a quantization factor for performing primary coding quantization on an initial low-luminance video is Q 1 and a bright structure quantization factor for encoding an enhanced yuv sequence by an encoder in a transcoding process is Q 2, distortion caused by primary coding quantization is expressed as formula 8:
where y represents the DCT coefficient, p (y) represents the probability density function of the DCT coefficient distribution, and after secondary quantization with the quantization factor Q 2, the distortion caused by the secondary quantization is expressed as formula 9:
The magnitude of the twice quantized distortion value is determined by DCT coefficients and twice quantized quantization step sizes, and based on the DCT coefficient relation between the normal illumination video and the video after illumination reduction, the relation of quantization factors between the normal illumination video and the low illumination video under the same distortion condition is obtained; establishing an 8X 8 image block, and performing DCT (discrete cosine transformation) on the image block to obtain an 8X 8DCT coefficient block;
The gray value of each pixel point is multiplied in turn by 0.9, 0.8, ..., 0.2 and 0.1, and the results are rounded; the resulting 9 groups of 8×8 image blocks are then DCT-transformed to obtain 9 DCT coefficient blocks, and the low-frequency components of the DCT coefficient blocks are plotted as a curve against the multiplying coefficient; the DCT coefficients vary linearly with the pixel gray values, and when the normal-illumination video is subjected to illumination reduction with weighting coefficient i1, under absolutely ideal conditions the DCT coefficients y' of the low-illumination video and the DCT coefficients y of the normal-illumination video satisfy y' = y·i1;
If the low-illumination video and the normal-illumination video are each quantized once and their degrees of distortion are to remain consistent, the primary quantization step lengths satisfy the following relation:
Where Q'step represents the primary quantization step of the low-illumination video, Qstep represents the primary quantization step of the normal-illumination video, and i1 represents the weighting coefficient of the illumination change between the two videos; based on the mapping relation between the quantization factor and the quantization step, Qstep = T(LY1), and formula 10 can be written as:
After obtaining the corresponding low-illumination video quantization step length, performing inverse table lookup to obtain a corresponding low-illumination video primary quantization factor LY 1':
T and T⁻¹ respectively represent the lookup table and the reverse lookup table, the weighting coefficient i1 reflects the degree of low illumination of the low-illumination video, and formula 12 gives the mapping relation, in the ideal state, between the quantization factors of videos with different degrees of low illumination and those of the normal-illumination video;
introducing a correction coefficient sigma, and correcting the formula 12 to obtain a formula 13:
When the bright-structure quantization factor satisfies LY2 = LY1, the combined performance of transcoded video quality and code rate is optimal; combining this with the mapping relation of formula 13 yields the selection formula 14 for the optimal bright-structure quantization factor in low-illumination video transcoding bright-structure quantization:
Calculating a correction coefficient:
wherein LY2(theoretical) is given by formula 16:
After the correction coefficients σi1 corresponding to different illumination coefficients i1 and different primary quantization factors LY1 are calculated, the correction coefficient σi1 for each combination of LY1 and i1 is determined; any low-illumination video code stream input into the transcoding system is decoded to obtain the primary quantization factor LY1 and the illumination coefficient i1, the theoretical value LY2(theoretical) of the optimal bright-structure quantization factor is calculated, and the optimal bright-structure quantization factor of the low-illumination video is obtained by correcting LY2(theoretical) with the corresponding correction coefficient σi1.
8. The method for real-time transcoding and enhancing the bright structure quantization of the low-illumination live video according to claim 1, wherein the low-illumination live video transcoding and bright structure quantization algorithm flow is as follows: when transcoding any input video, the following three decision rules are set:
Rule 1: inputting a video after primary encoding into a transcoding system, decoding by a decoder in the transcoding system to obtain a quantization factor LY 1 and an image gray value adopted in primary encoding, comparing the dynamic range of the gray value with the dynamic range (0-255) of the gray value of the video with normal illumination, and if the dynamic range of the gray value of the video image is i 1 times of the video with normal illumination and i 1 is more than 0.6, the video illumination is good and image enhancement and transcoding are not needed;
Rule 2: when i 1 is less than 0.6, the method accords with the judgment standard of the low-illumination video, enters the next flow, judges the degree of low illumination, and divides the low-illumination video into three steps according to the value of i 1: slight low illumination (0.6 is more than or equal to i 1 is more than 0.4), medium low illumination (0.4 is more than or equal to i 1 is more than 0.2) and serious low illumination (i is less than or equal to 0.2), for slight low illumination video, when LY 1 is more than 34, the video quality after primary encoding is poor, and after video enhancement and secondary encoding, the subjective quality of the video is poor, so that a viewer cannot accept the video, the image enhancement and transcoding are not necessary, for medium low illumination video, when LY 1 is more than 30, the image enhancement and transcoding are not necessary, for serious low illumination video, when LY 1 is more than 28;
Rule 3: in addition to the above two cases, the input video in other cases needs to be subjected to video enhancement and transcoding light structure quantization, a corresponding correction coefficient sigma i1 is obtained according to the luminance parameter i 1 of the video and the quantized factor LY 1 in one time, and then the optimal light structure quantized factor LY 2 is selected, and the transcoding code rate is lowest on the premise that the light structure quantized factor can ensure that the quality of the transcoded video is as good as possible.
9. The method for real-time transcoding and enhancing the bright structure quantization of low-illumination live video according to claim 1, wherein the low-illumination video transcoding bright-structure quantization algorithm divides the video into three grades according to the degree of low illumination, and the correction parameter takes different values for different illumination parameters and different primary quantization factors; for slight low-illumination video, when LY1 ≤ 16 the correction parameter σ is 2; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 34, σ is -4; when LY1 > 34 the video quality is intolerable and no video enhancement or transcoding is needed; for medium low-illumination video, when LY1 ≤ 16, σ is 0; when 16 < LY1 ≤ 20, σ is -2; when 20 < LY1 ≤ 24, σ is -4; when 24 < LY1 ≤ 30, σ is -6; when LY1 > 30 the video quality is intolerable and no video enhancement or transcoding is needed; for severe low-illumination video, when LY1 ≤ 16, σ is -2; when 16 < LY1 ≤ 22, σ is -4; when 22 < LY1 ≤ 26, σ is -6; when 26 < LY1 ≤ 28, σ is -8; when LY1 > 28 the video quality is intolerable and no video enhancement or transcoding is needed;
When a low-illumination video is input into the transcoding system, the decoder in the transcoding system decodes it, and the decoded information is analyzed to obtain the values of the primary quantization factor LY1 and the illumination parameter i1; the correction coefficient σ is chosen according to the specific case, and the value of the optimal bright-structure quantization factor of the low-illumination video is finally obtained from the formula QP2 = T⁻¹[T(QP1)/i1] + σ.
CN202311856203.6A 2023-12-29 2023-12-29 Low-illumination live video real-time transcoding enhancement bright structure quantization method Pending CN117979006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311856203.6A CN117979006A (en) 2023-12-29 2023-12-29 Low-illumination live video real-time transcoding enhancement bright structure quantization method

Publications (1)

Publication Number Publication Date
CN117979006A true CN117979006A (en) 2024-05-03

Family

ID=90858860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311856203.6A Pending CN117979006A (en) 2023-12-29 2023-12-29 Low-illumination live video real-time transcoding enhancement bright structure quantization method

Country Status (1)

Country Link
CN (1) CN117979006A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080175503A1 (en) * 2006-12-21 2008-07-24 Rohde & Schwarz Gmbh & Co. Kg Method and device for estimating image quality of compressed images and/or video sequences
CN104092924A (en) * 2014-04-30 2014-10-08 武汉博睿达信息技术有限公司 VMS video sharpening processing network system framework under low illumination and pre-detection method
CN111385577A (en) * 2020-04-07 2020-07-07 广州市百果园信息技术有限公司 Video transcoding method, device, computer equipment and computer readable storage medium
CN111510722A (en) * 2020-04-27 2020-08-07 王程 High-quality transcoding method for video image with excellent error code resistance

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination