CN111698503A - Video high-power compression method based on preprocessing - Google Patents
- Publication number
- CN111698503A (application CN202010578517.4A)
- Authority
- CN
- China
- Prior art keywords
- video
- texture
- filter
- region
- sampling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/124—Quantisation
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/625—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

(all under H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals; H04N—Pictorial communication, e.g. television)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a preprocessing-based video high-power compression method, addressing the poor subjective experience and objective quality of video compressed at low bit rates and high compression ratios. Before the video enters the encoding/decoding module, the algorithm performs preprocessing that removes redundant information and noise, improving both the subjective and objective quality of the video. The specific steps are: dividing the frame into video processing units; detecting video complexity and dividing regions; video denoising preprocessing; video adaptive down-sampling; down-sampling mode coding; and video coding. The embodiments target low-bit-rate, high-compression-ratio video in particular, improving the subjective visual experience and objective evaluation of the compressed video without adding any delay and while using few resources.
Description
Technical Field
The invention relates to the technical field of video compression encoding and decoding, and in particular to a preprocessing-based video high-power compression method.
Background
With the rapid development of computers and the Internet, multimedia data communication, in which images and video are the dominant forms of expression, has grown explosively; plain text and voice communication no longer meet people's daily needs. Multimedia communication has spread across industries and is widely used in distance education, teleconferencing, video telephony, security monitoring, and similar fields, changing how people live, learn, and work. With the arrival of multimedia, however, the amount of data to be communicated becomes very large. For example, a 24-bit true-color image at 1280 × 720 has a raw size of about 22.1 Mb, and transmitting one such image over a 4 Mbps network takes about 5.5 seconds. Multimedia technology therefore puts tremendous pressure on storage medium capacity, channel bandwidth, and computer processing speed. Although the problem can be alleviated by producing larger-capacity storage media, increasing communication bandwidth, and building higher-performance computers, this comes at substantial cost. It is therefore imperative to compression-encode video before storage or transmission to reduce the amount of data communicated.
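As a check on the arithmetic, the raw size and transmission time follow directly from the pixel count. A quick sketch (the 4 Mbps link rate is the figure used in the text):

```python
def raw_image_bits(width, height, bits_per_pixel=24):
    """Raw size of an uncompressed true-color image, in bits."""
    return width * height * bits_per_pixel

def transmit_seconds(bits, link_bps):
    """Time to push `bits` over a link running at `link_bps` bits/second."""
    return bits / link_bps

bits = raw_image_bits(1280, 720)          # 22,118,400 bits, about 22.1 Mb
secs = transmit_seconds(bits, 4_000_000)  # about 5.5 s over a 4 Mbps link
```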
Many application scenarios have fixed requirements on channel bandwidth and transmission delay. In particular, at a low target bit rate the traditional approach is to compress the video directly, but direct compression produces obvious blocking and ringing artifacts in the decoded, reconstructed images, which are visually very unpleasant. This is mainly because achieving a high compression ratio at a low bit rate requires coarse quantization. Yet low-bit-rate transmission is very common in multimedia communication applications, such as video transmission over the PSTN, IP networks, and wireless networks. To improve image quality and visual effect when transmitting high-resolution video, especially high-definition video, at a fixed low bit rate, preprocessing the video before compression is essential. Filtering and interpolated sampling are the most commonly used preprocessing algorithms: typical filters include Gaussian and bilateral filtering, and typical interpolation algorithms include nearest-neighbor, linear, and bicubic interpolation. However, conventional methods generally process the whole frame: the entire image is filtered and then down-sampled to 1/2 or 1/4 resolution. This raises the familiar dilemma of such algorithms. Filtering should preserve detail, or the image becomes blurred, while flat regions should have as much redundancy removed as possible to improve compression efficiency; it is difficult for a single filter applied to all regions to balance detail preservation against smoothing. A method that adaptively selects a suitable filter according to the texture and complexity of the image is therefore urgently needed.
To improve video compression efficiency and reconstructed-video quality, down-sampling the video at a low bit rate is indispensable. However, conventional whole-frame scaling causes problems: when a vertical texture is sampled horizontally, or when strong edge details are sampled in either direction, the up-sampled reconstruction after decoding shows a very obvious jagged (aliasing) effect, and interpolation spreads ringing artifacts.
Disclosure of Invention
The main object of the invention is to provide a preprocessing-based video high-power compression method that divides an image into non-overlapping processing units, which reduces image delay to some extent and reduces image storage, and that provides these processing units and their region division as the basis for adaptive filtering and adaptive down-sampling, thereby improving the quality of the reconstructed video.
To achieve the above object, the invention provides a preprocessing-based video high-power compression method comprising the following steps:

S1: dividing a frame of image into non-overlapping N × M processing units according to the resolution of the original video;

S2: detecting edge information with a Sobel operator template, and performing complexity detection and texture-region division per processing unit;

S3: adaptively filtering and denoising each processing unit according to the complexity class marked in S2;

S4: adaptively down-sampling each processing unit according to the complexity class marked in S2;

S5: encoding the down-sampling mode of each processing unit;

S6: video-encoding the content of each processing unit;

where N and M are chosen according to the image resolution and the compression ratio.
Further, N and M are each multiples of 2.
Further, using the value Value = abs(SumX) + abs(SumY) computed by the Sobel operator, two thresholds T1 and T2 are set with T1 < T2, and the counts satisfying the threshold conditions are Counter1 and Counter2, respectively. When Value > T2, the pixel is a strong edge pixel and Counter2 is incremented; when T1 < Value < T2, the pixel is a texture pixel and Counter1 is incremented; when Value < T1, the pixel is a flat pixel.

Two further thresholds T3 and T4 determine the type of the current region, i.e., whether it is a flat region, a texture region, or an edge region: when Counter2 > T3, the region is classified as an edge region; otherwise, when Counter1 > T4, it is classified as a texture region; otherwise it is marked as flat. For texture regions, the texture orientation is determined by counting directions; the direction angle is calculated as:
θ=arctan(abs(SumY)/abs(SumX))
The texture-orientation counts are CounterH and CounterV, respectively. The counting condition for CounterH is as follows:

The counting condition for CounterV is as follows:
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
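Putting the pixel-level counters and the region decision together, the classification above can be sketched as follows. The threshold defaults are the example values given later in the embodiment (T1 = 150, T2 = 300, T3 = 2000, T4 = 6000); the exact CounterH/CounterV counting conditions are not transcribed in this text, so the 45-degree split on the direction angle used here is an assumption.

```python
import math

def classify_unit(grads, t1=150, t2=300, t3=2000, t4=6000):
    """Classify one processing unit from its per-pixel Sobel gradients.

    `grads` is a list of (SumX, SumY) pairs, one per pixel.  Thresholds
    default to the embodiment's example values.  The 45-degree split that
    feeds CounterH/CounterV is an assumption of this sketch.
    """
    c1 = c2 = ch = cv = 0
    for sum_x, sum_y in grads:
        value = abs(sum_x) + abs(sum_y)
        if value > t2:                      # strong edge pixel
            c2 += 1
        elif value > t1:                    # texture pixel
            c1 += 1
            theta = math.atan2(abs(sum_y), abs(sum_x))
            if theta < math.pi / 4:         # assumed 45-degree split
                ch += 1
            else:
                cv += 1
        # else: flat pixel, no counter incremented
    if c2 > t3:
        return "edge"
    if c1 > t4:
        return "horizontal-texture" if ch > cv else "vertical-texture"
    return "flat"
```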
Further, the detection template of the Sobel operator is as follows:
Sx denotes the vertical detection template and Sy the horizontal detection template; SumX = Sx * A and SumY = Sy * A, where A is a 3 × 3 matrix of pixels.
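For concreteness, the per-window gradient computation might look like this. The conventional 3 × 3 Sobel kernels are assumed, since the patent's template figures are not reproduced in this text:

```python
# Standard 3x3 Sobel templates (an assumption; the patent's own template
# figures are not reproduced in the text).
SX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]    # responds to vertical edges (horizontal gradient)
SY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]  # responds to horizontal edges (vertical gradient)

def sobel_response(a):
    """SumX = Sx * A and SumY = Sy * A for a 3x3 pixel window `a`
    (element-wise multiply-accumulate over the window)."""
    sum_x = sum(SX[i][j] * a[i][j] for i in range(3) for j in range(3))
    sum_y = sum(SY[i][j] * a[i][j] for i in range(3) for j in range(3))
    return sum_x, sum_y
```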
Further, the flat region is adaptively filtered and denoised with a 9-tap weighted-mean filter, while the texture and edge regions are adaptively filtered and denoised with a bilateral filter.
Further, the flat region is down-sampled both horizontally and vertically with a sampling factor of 4, using the 12-tap MPEG-4 down-sampling filter. The texture region is down-sampled with a higher-complexity bicubic filter, with horizontal or vertical down-sampling selected according to the texture orientation and a sampling factor of 2.
Further, the 12-tap filter is Filter_down = {2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2, 64}. The number of taps and the coefficient values may be chosen according to the processing-unit size; the number of tap coefficients should be less than half of the smaller of the unit's row and column counts.
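Note that the eleven tap values sum to 64, so the trailing 64 reads naturally as the normalisation divisor (DC gain of 1). A sketch of applying the filter for 1/2 or 1/4 horizontal down-sampling follows; the border clamping is an assumption, since the patent does not specify its edge handling:

```python
# The quoted Filter_down holds 11 tap coefficients followed by the
# normalisation factor 64 (the taps sum to 64, giving a DC gain of 1).
TAPS = [2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2]
NORM = 64

def downsample_row(row, factor=2):
    """Low-pass filter `row` with the MPEG-4-style taps, then keep every
    `factor`-th sample.  Border samples are clamped (an assumption)."""
    n, half = len(row), len(TAPS) // 2
    out = []
    for i in range(0, n, factor):
        acc = sum(TAPS[k] * row[min(max(i + k - half, 0), n - 1)]
                  for k in range(len(TAPS)))
        out.append(acc // NORM)
    return out
```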
Compared with the prior art, the invention has the following beneficial effects: the video is adaptively processed before encoding according to its differing texture characteristics, which protects the texture details of the image well while removing image redundancy. In high-power video compression applications, perceived visual quality can be improved to a large extent.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of the steps of the present invention;
FIG. 2 is a flow chart of the video processing unit partitioning according to the present invention;
FIG. 3 is a schematic view of a video complexity detection and region division process according to the present invention;
FIG. 4 is a schematic flow chart of video denoising pre-processing in the present invention;
FIG. 5 is a flow chart illustrating a sampling mode according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of video encoding according to the present invention;
FIG. 7 is a schematic flow chart of video complexity detection according to the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The present invention provides a preprocessing-based video high-power compression method comprising the following steps, as shown in FIG. 1:
S1, video processing unit division: a frame of image is divided into non-overlapping N × M processing units according to the resolution of the original video;

S2, video complexity detection and region division: edge information is detected with a Sobel operator template, and complexity detection and texture-region division are performed per processing unit;
S3, video denoising preprocessing: each processing unit is adaptively filtered and denoised according to the complexity class marked in S2. To reduce complexity, the flat region uses 3 × 3 smoothing filtering, while the edge and texture regions use bilateral filtering with adjustable parameters. In one embodiment, the filter's pixel-difference parameter delta is 20 and its distance parameter delta_c is 20.
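A sketch of bilateral filtering for a single pixel, using the embodiment's delta = 20 and delta_c = 20; the window radius and the Gaussian form of the weights are assumptions of this sketch, not specified in the text:

```python
import math

def bilateral(img, x, y, radius=2, delta=20.0, delta_c=20.0):
    """Bilateral-filter one pixel of a 2-D list `img`.

    `delta` weights pixel-value differences and `delta_c` weights spatial
    distance, matching the embodiment's parameters (both 20).  The window
    radius and Gaussian weighting are assumptions of this sketch.
    """
    h, w = len(img), len(img[0])
    centre = img[y][x]
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy = min(max(y + dy, 0), h - 1)   # clamp at the borders
            xx = min(max(x + dx, 0), w - 1)
            v = img[yy][xx]
            wgt = math.exp(-((v - centre) ** 2) / (2 * delta ** 2)
                           - (dx * dx + dy * dy) / (2 * delta_c ** 2))
            num += wgt * v
            den += wgt
    return num / den
```

On a uniform patch the filter returns the patch value unchanged, while across a strong edge the value-difference term suppresses contributions from the far side, which is why it preserves edges better than a plain Gaussian.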
S4, video adaptive down-sampling: each processing unit is adaptively down-sampled according to the complexity class marked in S2;

S5, down-sampling mode coding: the down-sampling mode of each processing unit is encoded using a bypass coding mode, so that the decoder can up-sample accordingly and finally restore the image to its original resolution;

S6, video coding: the content of each processing unit is video-encoded. In one embodiment, H.265 intra coding is used and the compression ratio is 30;

where N and M are chosen according to the image resolution and the compression ratio.
Further, N and M are each multiples of 2. As shown in FIG. 2, PUx (x = 0, 1, 2, …, n-1) denotes a divided processing unit, H denotes the number of pixels in the horizontal direction of the video resolution, and V the number of pixels in the vertical direction.
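The unit division can be sketched as below; the assumption that H and V divide evenly by N and M follows from choosing N and M according to the resolution:

```python
def divide_units(h, v, n, m):
    """Divide an H x V frame into non-overlapping N x M processing units,
    returned as (x, y, w, h) rectangles PU0, PU1, ... in raster order.
    Assumes H and V are multiples of N and M respectively."""
    return [(x, y, n, m)
            for y in range(0, v, m)
            for x in range(0, h, n)]
```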
Further, as shown in FIG. 3, using the value Value = abs(SumX) + abs(SumY) computed by the Sobel operator, two thresholds T1 and T2 are set with T1 < T2, and the counts satisfying the threshold conditions are Counter1 and Counter2, respectively. When Value > T2, the pixel is a strong edge pixel and Counter2 is incremented; when T1 < Value < T2, the pixel is a texture pixel and Counter1 is incremented; when Value < T1, the pixel is a flat pixel. Two further thresholds T3 and T4 (T3 < T4) determine the type of the current region, i.e., whether it is a flat region, a texture region, or an edge region: when Counter2 > T3, the region is classified as an edge region; otherwise, when Counter1 > T4, it is classified as a texture region; otherwise it is marked as flat. For texture regions, the texture orientation is determined by counting directions; the direction angle is calculated as:
θ=arctan(abs(SumY)/abs(SumX))
The texture-orientation counts are CounterH and CounterV, respectively. The counting condition for CounterH is as follows:

The counting condition for CounterV is as follows:
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
As shown in FIG. 4, the video denoising preprocessing adaptively selects the filter type according to the category of the processing unit (texture region, edge region, or flat region): the flat region uses a simpler 3 × 3 smoothing template, while the texture and edge regions use higher-complexity Gaussian or bilateral filtering, as follows:
the video adaptive down-sampling selects the down-sampling mode according to the type of the processing unit, namely texture area, marginal area or flat area. That is, no downsampling is performed for the edge regions, 1/4 downsampling is performed for the flat regions, and vertical or horizontal 1/2 downsampling is performed for the texture regions. The texture area adopts a bicubic interpolation algorithm, the flat area adopts a modified linear filter of MPEG4, the edge area does not sample, and the edge information is reserved.
The following were used:
the sampling pattern is shown in fig. 5, and fig. 5 illustrates 8 × 8 blocks as an example. The flat area is downsampled by using an interpolation filter commonly used by MPEG4, the texture area adopts double-cube interpolation, the image details are effectively protected, and the edge area is used for preventing a sawtooth effect and is not downsampled.
The input unit for video coding is the sampled processing unit, and the encoder may be H.264 or H.265. FIG. 6 is a schematic diagram of the video encoding.
The adaptive filtering and adaptive down-sampling of the invention greatly improve reconstructed-video quality for low-bit-rate, high-power-compressed video, particularly in application scenarios where video is compressed by more than 25 times, such as fixed low-bandwidth wireless or wired video transmission.
First, the Sobel operator window is convolved with the pixel window to obtain the edge value and direction. Based on the value, the pixel is determined to be an edge point, a texture point, or a flat-region point; if it is a texture point, its texture is further judged to be horizontal or vertical. This repeats until all pixels in the processing unit have been traversed.
Finally, whether the current region is an edge region, a vertical texture region, a horizontal texture region, or a flat region is determined from the statistics, i.e., Counter2, Counter1, CounterH, and CounterV. In the embodiment, T1 = 150, T2 = 300, T3 = 2000, and T4 = 6000. FIG. 7 shows an example of flat, vertical-texture, horizontal-texture, and edge regions, where P denotes a flat region, CH a horizontal texture region, CV a vertical texture region, and E an edge region.
Further, the detection template of the Sobel operator is as follows:
Sx denotes the vertical detection template and Sy the horizontal detection template; SumX = Sx * A and SumY = Sy * A, where A is a 3 × 3 matrix of pixels.
Further, the flat region is adaptively filtered and denoised with a 9-tap weighted-mean filter, while the texture and edge regions are adaptively filtered and denoised with a bilateral filter.
Further, the 9-tap weighted-mean filter is as follows:
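The patent gives the filter coefficients only in a figure that is not reproduced in this text; the 3 × 3 (9-tap) kernel below is purely a hypothetical illustration of a weighted-mean smoother of that shape, not the patent's actual coefficients:

```python
# Hypothetical 3x3 (9-tap) weighted-mean kernel, shown for illustration
# only; the patent's actual coefficients are not reproduced in the text.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]   # weights sum to 16

def weighted_mean(window):
    """Apply the 3x3 (9-tap) weighted mean to a 3x3 pixel window."""
    acc = sum(KERNEL[i][j] * window[i][j]
              for i in range(3) for j in range(3))
    return acc // 16   # normalise by the weight sum
```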
Further, the flat region is down-sampled both horizontally and vertically with a sampling factor of 4, using the 12-tap MPEG-4 down-sampling filter. The texture region is down-sampled with a higher-complexity bicubic filter, with horizontal or vertical down-sampling selected according to the texture orientation and a sampling factor of 2.
Further, the 12-tap filter is Filter_down = {2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2, 64}. The number of taps and the coefficient values may be chosen according to the processing-unit size; the number of tap coefficients should be less than half of the smaller of the unit's row and column counts.
The beneficial effects of the invention are: the video is adaptively processed before encoding according to its differing texture characteristics, which protects the texture details of the image well while removing image redundancy. In high-power video compression applications, perceived visual quality can be improved to a large extent.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Embodiments of the invention can be applied to wireless real-time transmission of high-definition video, for example in the fields of unmanned aerial vehicles, FPV, VR, medical image processing, remote-sensing image processing, transportation systems, high-definition television, image compression, and image restoration. The invention is particularly suited to environments that require high compression ratios at low bit rates.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (8)
1. A preprocessing-based video high-power compression method, characterized by comprising the following steps:

S1: dividing a frame of image into non-overlapping N × M processing units according to the resolution of the original video;

S2: detecting edge information with a Sobel operator template, and performing complexity detection and texture-region division per processing unit;

S3: adaptively filtering and denoising each processing unit according to the complexity class marked in S2;

S4: adaptively down-sampling each processing unit according to the complexity class marked in S2;

S5: encoding the down-sampling mode of each processing unit;

S6: video-encoding the content of each processing unit;

wherein N and M are chosen according to the image resolution and the compression ratio.
2. The pre-processing based high power video compression method of claim 1, wherein N and M are both multiples of 2.
3. The preprocessing-based high-power video compression method of claim 2, wherein step S2 is as follows: using the value Value = abs(SumX) + abs(SumY) computed by the Sobel operator, two thresholds T1 and T2 are set with T1 < T2, and the counts satisfying the threshold conditions are Counter1 and Counter2, respectively; when Value > T2, the pixel is a strong edge pixel and Counter2 is incremented; when T1 < Value < T2, the pixel is a texture pixel and Counter1 is incremented; when Value < T1, the pixel is a flat pixel; two further thresholds T3 and T4 determine the type of the current region, i.e., whether it is a flat region, a texture region, or an edge region: when Counter2 > T3, the region is classified as an edge region; otherwise, when Counter1 > T4, it is classified as a texture region; otherwise it is marked as flat; for texture regions, the texture orientation is determined by counting directions, and the direction angle is calculated as:
θ=arctan(abs(SumY)/abs(SumX))
The texture-orientation counts are CounterH and CounterV, respectively. The counting condition for CounterH is as follows:

The counting condition for CounterV is as follows:
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
5. The preprocessing-based high-power video compression method of claim 3, wherein the flat region is adaptively filtered and denoised with a 9-tap weighted-mean filter, and the texture and edge regions are adaptively filtered and denoised with a bilateral filter.
7. The preprocessing-based high-power video compression method of claim 5, wherein the flat region is down-sampled both horizontally and vertically with a sampling factor of 4, using the 12-tap MPEG-4 down-sampling filter; the texture region is down-sampled with a higher-complexity bicubic filter, with horizontal or vertical down-sampling selected according to the texture orientation and a sampling factor of 2.
8. The preprocessing-based high-power video compression method of claim 7, wherein the 12-tap filter is Filter_down = {2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2, 64}; the number of taps and the coefficient values may be chosen according to the processing-unit size, and the number of tap coefficients is less than half of the smaller of the unit's row and column counts.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010578517.4A CN111698503B (en) | 2020-06-22 | 2020-06-22 | Video high-power compression method based on preprocessing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111698503A true CN111698503A (en) | 2020-09-22 |
CN111698503B CN111698503B (en) | 2022-09-09 |
Family
ID=72483191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010578517.4A Active CN111698503B (en) | 2020-06-22 | 2020-06-22 | Video high-power compression method based on preprocessing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111698503B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999059343A1 (en) * | 1998-05-12 | 1999-11-18 | Hitachi, Ltd. | Method and apparatus for video decoding at reduced cost |
EP1917813A2 (en) * | 2005-08-26 | 2008-05-07 | Electrosonic Limited | Image data processing |
CN101710993A (en) * | 2009-11-30 | 2010-05-19 | 北京大学 | Block-based self-adaptive super-resolution video processing method and system |
CN102281439A (en) * | 2011-06-16 | 2011-12-14 | 杭州米加科技有限公司 | Streaming media video image preprocessing method |
US20140369613A1 (en) * | 2013-06-14 | 2014-12-18 | Nvidia Corporation | Adaptive filtering mechanism to remove encoding artifacts in video data |
CN106960416A (en) * | 2017-03-20 | 2017-07-18 | 武汉大学 | A kind of video satellite compression image super-resolution method of content complexity self adaptation |
CN111314711A (en) * | 2020-03-31 | 2020-06-19 | 电子科技大学 | Loop filtering method based on self-adaptive self-guided filtering |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112312132A (en) * | 2020-10-23 | 2021-02-02 | 深圳市迪威码半导体有限公司 | HEVC intra-frame simplified algorithm based on histogram statistics |
CN112312132B (en) * | 2020-10-23 | 2022-08-12 | 深圳市迪威码半导体有限公司 | HEVC intra-frame simplified algorithm based on histogram statistics |
Also Published As
Publication number | Publication date |
---|---|
CN111698503B (en) | 2022-09-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |