WO2022257759A1 - Image banding artifact removal method, apparatus, device and medium - Google Patents

Image banding artifact removal method, apparatus, device and medium (一种图像带状伪影去除方法、装置、设备和介质)

Info

Publication number
WO2022257759A1
WO2022257759A1 · PCT/CN2022/094771 · CN2022094771W
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
matrix
target
pixel
Prior art date
Application number
PCT/CN2022/094771
Other languages
English (en)
French (fr)
Inventor
李昆明
宋秉一
Original Assignee
百果园技术(新加坡)有限公司
李昆明
Priority date
Filing date
Publication date
Application filed by 百果园技术(新加坡)有限公司 and 李昆明
Publication of WO2022257759A1 publication Critical patent/WO2022257759A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Definitions

  • The present application relates to the technical field of image processing, and in particular to an image banding artifact removal method, apparatus, device and medium.
  • The image coding standard may be the Joint Photographic Experts Group (JPEG) standard, and the video coding standard may be the moving picture and associated audio compression standard for digital storage media (Moving Picture Experts Group, MPEG-2), the high-compression digital video coding standard H.264 or H.265, or the Audio Video coding Standard (AVS).
  • the quantization method is to quantize the residual signal or a large-scale input signal into a small-range output signal, so as to achieve the purpose of bit compression.
  • Although the quantization method can effectively improve image compression efficiency, quantization errors also introduce artifacts, typically blocking artifacts and banding artifacts.
  • The blocking artifact mainly manifests in the image as brightness discontinuities at the edges of coding blocks, and it can appear anywhere in the image.
  • Banding artifacts are mainly caused by the loss of low-amplitude detail information during quantization. Therefore, banding artifacts mainly appear in areas where the gradient changes are relatively gentle, such as flat areas and brightness gradient areas. In bandwidth-cost-sensitive scenarios, the use of a large compression ratio leads to more serious banding artifacts. In addition, if contrast enhancement is subsequently applied to an image that already contains banding artifacts, the banding artifacts become even more serious.
  • the prior art can cause image edge blurring when removing banding artifacts, and the removal speed of image banding artifacts is slow.
  • The present application provides an image banding artifact removal method, apparatus, device and medium, to solve the problems in the prior art of how to avoid image edge blurring when removing banding artifacts and how to improve the speed of removing image banding artifacts.
  • The present application provides a method for removing image banding artifacts, the method comprising:
  • performing a filtering operation on an image to be processed, determining a filtered image of the image to be processed after banding artifacts have been removed, and determining, according to the difference between the pixel values of each corresponding pixel point in the filtered image and the image to be processed, a difference matrix and a difference absolute value matrix between the filtered image and the image to be processed;
  • for each element point in the difference absolute value matrix, determining, according to the element value of the element point, a first preset threshold and a second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and
  • determining, according to the target size relationship corresponding to the element value of each element point and a pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the position corresponding to each element point in a target image after the edge texture is preserved, so as to obtain the target image after the edge texture is preserved.
  • the present application provides a device for removing image banding artifacts, the device comprising:
  • a first determination module, configured to perform a filtering operation on an image to be processed, determine a filtered image of the image to be processed after banding artifacts have been removed, and determine, according to the difference between the pixel values of each corresponding pixel point in the filtered image and the image to be processed, a difference matrix and a difference absolute value matrix between the filtered image and the image to be processed;
  • a second determination module, configured to, for each element point in the difference absolute value matrix, determine, according to the element value of the element point, a first preset threshold and a second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and
  • a third determination module, configured to determine, according to the target size relationship corresponding to the element value of each element point and a pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the position corresponding to each element point in a target image after the edge texture is preserved, so as to obtain the target image after the edge texture is preserved.
  • The present application provides an electronic device, including a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
  • the memory stores a computer program that, when executed by the processor, causes the processor to execute the steps of any one of the above methods for removing image banding artifacts.
  • the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the steps of any one of the above methods for removing banding artifacts in an image are implemented.
  • FIG. 1 is a schematic diagram of the process of an image banding artifact removal method provided in an embodiment of the present application
  • FIG. 2 is a schematic diagram of a preset error transfer matrix provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of removing banding artifacts from a video frame image provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of the process of an image banding artifact removal method provided by an embodiment of the present application.
  • FIG. 5 is a process schematic diagram of a method for removing image banding artifacts provided by an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of an image banding artifact removal device provided in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • embodiments of the present application provide a method, device, device, and medium for removing image banding artifacts.
  • Fig. 1 is a process schematic diagram of a method for removing banding artifacts in an image provided by an embodiment of the present application, the process includes the following steps:
  • S601 Perform a filtering operation on the image to be processed, determine a filtered image of the image to be processed after banding artifacts have been removed, and, according to the difference between the pixel values of each corresponding pixel point in the filtered image and the image to be processed, determine a difference matrix and a difference absolute value matrix between the filtered image and the image to be processed.
  • S602 For each element point in the difference absolute value matrix, determine, according to the element value of the element point, the first preset threshold and the second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold.
  • S603 According to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, determine the corresponding position of each element point in the target image after the edge texture is preserved The target pixel value of the target pixel point is obtained to obtain the target image after the edge texture is preserved.
  • a method for removing image banding artifacts provided in the embodiment of the present application is applied to an electronic device, where the electronic device may be a smart terminal device such as a PC, a tablet computer, or a smart phone, or may be a local server or a cloud server.
  • The banding artifacts of the image to be processed may be caused by the encoding and quantization of the encoder. For example, in the field of short videos, the videos uploaded by users may have been compressed by other third-party encoders, or compressed with a high compression ratio, so that the video frame images in the videos include banding artifacts.
  • The electronic device performs a filtering operation on the image to be processed, wherein the image to be processed refers to an image with banding artifacts. The image to be processed may be a single image, or a video frame image obtained after video data is decoded. The color space of the image to be processed is not limited: it may be a red green blue (RGB) image, a luminance chrominance (YUV) image, or a Lab color model image. The filtering operation may be low-pass filtering or edge-preserving filtering, which is not limited in this embodiment of the present application.
  • An existing box filter (boxfilter), Gaussian filter, etc. can be used for the low-pass filtering operation, and an existing bilateral filter, non-local means filter, guided filter, side window filter, etc. can be used for the edge-preserving filtering operation. The method by which the electronic device performs the filtering operation belongs to the prior art, and this embodiment of the present application does not repeat it.
  • After the filtering operation, the filtered image of the image to be processed with banding artifacts removed can be determined, wherein the number of pixels of the filtered image is the same as that of the image to be processed. Because of the filtering operation, an edge blur problem will appear at the image edges of the filtered image.
  • the difference matrix and the difference absolute value matrix between the filtered image and the image to be processed can also be determined.
  • When determining the difference absolute value matrix, the electronic device may use an existing absolute value function.
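  • The following is a minimal sketch (not the patent's reference implementation) of this first step, computing the filtered image and the difference and difference absolute value matrices with NumPy and OpenCV; the box filter and the kernel size are assumptions rather than values from the text.

```python
import cv2
import numpy as np

def diff_matrices(img, ksize=7):
    """Return (filtered image, difference matrix, difference absolute value matrix)."""
    img = img.astype(np.float32)
    # Low-pass filtering to suppress banding artifacts (a box filter is one of the options named above).
    filt = cv2.blur(img, (ksize, ksize))
    text_diff = filt - img              # difference matrix between filtered image and image to be processed
    text_diff_abs = np.abs(text_diff)   # difference absolute value matrix
    return filt, text_diff, text_diff_abs
```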
  • In order to make the edges of the filtered image clear, the electronic device also stores a first preset threshold and a second preset threshold, wherein both are preset and the second preset threshold is greater than the first preset threshold.
  • For different size relationships, the method of determining the pixel value corresponding to an element point is also different. Therefore, for each element point in the difference absolute value matrix, the target size relationship between the element value and the first preset threshold and the second preset threshold is determined according to the element value of the element point, the first preset threshold and the second preset threshold.
  • The element value may be not greater than the first preset threshold, may be greater than the first preset threshold and less than the second preset threshold, or may be not less than the second preset threshold.
  • A value function of the pixel value corresponding to each size relationship is also stored in advance, wherein the size relationships include: the element value is not greater than the first preset threshold; the element value is greater than the first preset threshold and less than the second preset threshold; and the element value is not less than the second preset threshold.
  • According to the pre-saved value function of the pixel value corresponding to each size relationship and the target size relationship corresponding to the element value of each element point, the target value function of the pixel value at the position corresponding to each element point is determined, and according to the target value function corresponding to each element point, the target pixel value of the target pixel point corresponding to each element point in the target image after the edge texture is preserved is determined, so that the target image after the edge texture is preserved is obtained from the target pixel values of the target pixel points.
  • In the embodiments of the present application, a filtering operation is performed on the image to be processed, the filtered image after banding artifacts have been removed is determined, and the difference matrix and difference absolute value matrix between the filtered image and the image to be processed are determined according to the difference between the pixel values of each pair of corresponding pixel points; for each element point in the difference absolute value matrix, the target size relationship between the element value and the first preset threshold and the second preset threshold is determined according to the element value of the element point, the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the position corresponding to each element point in the target image after the edge texture is preserved is determined, so as to obtain the target image after the edge texture is preserved. The target image thus has banding artifacts removed while avoiding image edge blur, and because only simple subtraction and multiplication are used, the computational complexity is low, which improves the speed of determining the target image.
  • The size relationships include: the element value is not greater than the first preset threshold; the element value is greater than the first preset threshold and less than the second preset threshold; and the element value is not less than the second preset threshold. Determining, according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point corresponding to each element point in the target image after the edge texture is preserved includes:
  • according to the target size relationship corresponding to the element value of each element point: if the target size relationship is that the element value of the element point is not greater than the first preset threshold or not less than the second preset threshold, respectively determining the pixel value of the pixel point corresponding to the element point in the filtered image or in the image to be processed, and determining that pixel value as the target pixel value of the target pixel point corresponding to the element point in the target image; if the target size relationship is that the element value of the element point is greater than the first preset threshold and less than the second preset threshold, determining a first difference between the second preset threshold and the element value of the element point and a second difference between the second preset threshold and the first preset threshold, determining the ratio of the first difference to the second difference, determining the product value of that ratio and the element value at the position corresponding to the element point in the difference matrix, and determining the sum of the product value and the pixel value of the pixel point at the position corresponding to the element point in the image to be processed as the target pixel value of the target pixel point corresponding to the element point in the target image.
  • In the embodiment of the present application, when determining the target pixel value of each target pixel point in the target image, for each element point, the target pixel value of the target pixel point corresponding to that element point is determined according to the target size relationship corresponding to the element value of the element point, the image to be processed and the filtered image.
  • According to the target size relationship corresponding to the element value of the element point, if the target size relationship is that the element value of the element point is not greater than the first preset threshold, the pixel value of the pixel point corresponding to the element point is determined from the predetermined filtered image.
  • For example, if the position of the element point in the difference absolute value matrix is the first row and the second column, the pixel value of the pixel point in the first row and the second column of the filtered image is determined. The number of rows and columns of the difference absolute value matrix is the same as that of the filtered image, so according to the position of the element point in the difference absolute value matrix, the pixel value of the pixel point at the corresponding position can be found in the filtered image.
  • Correspondingly, if the target size relationship is that the element value of the element point is not less than the second preset threshold, the pixel value of the pixel point corresponding to the element point in the image to be processed is determined from the predetermined image to be processed.
  • the number of rows and columns of the difference absolute value matrix is also the same as the number of rows and columns of the image to be processed, so according to the position of the element point in the difference absolute value matrix, the pixel value of the pixel at the corresponding position can be found in the image to be processed.
  • the pixel value is determined as the target pixel value of the target pixel point corresponding to the element point in the target image.
  • If the target size relationship is that the element value of the element point is greater than the first preset threshold and less than the second preset threshold, the first difference between the second preset threshold and the element value of the element point is determined according to the element value of the element point and the second preset threshold, and the second difference between the second preset threshold and the first preset threshold is determined according to the second preset threshold and the first preset threshold. The ratio of the first difference to the second difference is then determined. According to the determined ratio and the element value at the position corresponding to the element point in the difference matrix, the ratio is multiplied by that element value to obtain the product value. The sum of the product value and the pixel value of the pixel point corresponding to the element point in the image to be processed is determined, and the sum value is determined as the target pixel value of the target pixel point corresponding to the element point in the target image.
  • The target image textPresImg after the edge texture is preserved can thus be determined. Denoting the image to be processed as Img, the filtered image as filterImg, the difference matrix as textDiff = filterImg - Img and the difference absolute value matrix as textDiffAbs, the determination formula of the target image textPresImg, element by element, is:
  • textPresImg = filterImg, if textDiffAbs ≤ thrLow; textPresImg = Img + textDiff × (thrHigh - textDiffAbs) / (thrHigh - thrLow + eps), if thrLow < textDiffAbs < thrHigh; textPresImg = Img, if textDiffAbs ≥ thrHigh.
  • That is, for the first preset threshold thrLow and the second preset threshold thrHigh: if the element value is not greater than the first preset threshold thrLow, i.e. the element value ≤ thrLow, the pixel value of the pixel point corresponding to the element point in the filtered image is obtained and determined as the target pixel value of the target pixel point corresponding to the element point in the target image textPresImg after the edge texture is preserved.
  • If the element value is not less than the second preset threshold thrHigh, i.e. the element value ≥ thrHigh, the pixel value of the pixel point corresponding to the element point in the image to be processed is obtained and determined as the target pixel value of the target pixel point corresponding to the element point in the target image textPresImg after the edge texture is preserved.
  • If the element value is greater than thrLow and less than thrHigh, the first difference is determined, expressed in the above formula as thrHigh - textDiffAbs; the second difference between the second preset threshold thrHigh and the first preset threshold thrLow is determined, expressed in the above formula as thrHigh - thrLow; and the ratio of the first difference to the second difference is determined, expressed in the above formula as (thrHigh - textDiffAbs) / (thrHigh - thrLow + eps).
  • Here eps is a preset very small positive number; in order to avoid division by zero when the second difference between thrHigh and thrLow is 0, the sum of eps and the second difference is used in place of the second difference.
  • The product value of the element value at the position corresponding to the element point in the difference matrix and the ratio is then determined, expressed in the above formula as textDiff × (thrHigh - textDiffAbs) / (thrHigh - thrLow + eps), and the sum of this product value and the pixel value of the pixel point at the position corresponding to the element point in the image to be processed, i.e. Img, is determined as the target pixel value.
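  • As an illustration only, the piecewise value function above can be written as a few lines of NumPy; this is a sketch under the assumption that textDiff = filterImg - Img and that all operations are element-wise, with thrLow, thrHigh and eps as free parameters.

```python
import numpy as np

def preserve_edge_texture(img, filt, thr_low, thr_high, eps=1e-6):
    """Blend the filtered image and the original according to the absolute difference."""
    img = img.astype(np.float32)
    filt = filt.astype(np.float32)
    diff = filt - img                      # textDiff
    diff_abs = np.abs(diff)                # textDiffAbs
    ratio = (thr_high - diff_abs) / (thr_high - thr_low + eps)
    blended = img + diff * ratio           # middle case: thrLow < textDiffAbs < thrHigh
    return np.where(diff_abs <= thr_low, filt,                     # small difference: keep filtered pixel
                    np.where(diff_abs >= thr_high, img, blended))  # large difference: keep original pixel
```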
  • The method further includes:
  • detecting and determining the banding artifact edge image and the non-flat area image corresponding to the image to be processed; performing an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image, determining the banding artifact area weight matrix and the non-flat area weight matrix, and determining the fusion weight matrix according to a pre-saved fusion formula; determining a first product matrix of the fusion weight matrix and a first matrix composed of the pixel value of each pixel point in the image to be processed; determining, according to a first difference matrix obtained from the difference between a first preset value and the element value of each element point in the fusion weight matrix and a second matrix composed of the pixel value of each pixel point in the target image, a second product matrix of the first difference matrix and the second matrix; and determining a sum matrix of the first product matrix and the second product matrix, and updating the target image according to the element value of each element point in the sum matrix.
  • In order to make the target image more realistic, the electronic device can also retain the texture of the original banding artifact area on the basis of the target image, so as to update the target image and make the updated target image more realistic.
  • In order to preserve the texture of the original banding artifact area on the basis of the target image, the electronic device also detects and determines the banding artifact edge image and the non-flat area image corresponding to the image to be processed. When the banding artifact edge area is detected, the image to be processed may be an image with the original resolution, a down-sampled image, or an up-sampled image, which is not limited in this embodiment of the present application.
  • An edge detection algorithm can be used to determine the banding artifact edge image corresponding to the image to be processed, for example a Canny-based, Sobel-based or Laplacian-based edge detection algorithm; the non-flat area image corresponding to the image to be processed can be determined based on the variance, and can also be determined based on the gradient.
  • denoising processing may be performed on the image to be processed, for example, by means of mean filtering, edge-preserving filtering, etc., so as to improve the accuracy of subsequent detection.
  • Taking Canny-based edge detection as an example, three thresholds are preset: cannyThLow, cannyThMid and cannyThHigh, wherein cannyThLow is smaller than cannyThMid, and cannyThMid is smaller than cannyThHigh.
  • The thresholds cannyThLow and cannyThMid form one threshold range that is used to perform edge detection on the image to be processed to obtain the first edge map edgeMapLow, and the thresholds cannyThMid and cannyThHigh form another threshold range that is used to perform edge detection on the image to be processed to obtain the second edge map edgeMapHigh. Specifically, the gradient value of each pixel of the image to be processed is calculated, and the pixels within each threshold range are determined according to the gradient value and the threshold range, thereby determining the first edge map edgeMapLow and the second edge map edgeMapHigh:
  • edgeMapLow = canny(Img, [cannyThLow, cannyThMid])
  • edgeMapHigh = canny(Img, [cannyThMid, cannyThHigh])
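  • As a hedged sketch only, the two edge maps could be obtained with OpenCV's Canny detector on an 8-bit grayscale image by treating each threshold pair as the hysteresis thresholds; the numeric values below are placeholders, not thresholds from the patent.

```python
import cv2

def double_edge_maps(img_gray, canny_th_low=30, canny_th_mid=80, canny_th_high=160):
    """Two edge maps from two threshold ranges, per the description above."""
    edge_map_low = cv2.Canny(img_gray, canny_th_low, canny_th_mid)    # range [cannyThLow, cannyThMid]
    edge_map_high = cv2.Canny(img_gray, canny_th_mid, canny_th_high)  # range [cannyThMid, cannyThHigh]
    return edge_map_low, edge_map_high
```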
  • After the banding artifact edge image and the non-flat area image corresponding to the image to be processed are detected and determined, an edge-preserving filtering operation is performed on the banding artifact edge image to determine the banding artifact weight matrix of the banding artifact edge image, and an edge-preserving filtering operation is performed on the non-flat area image to determine the non-flat area weight matrix of the non-flat area image.
  • Any one of the joint bilateral filter, the guided filter, the guided side window filter and their improved variants in the prior art may be used.
  • The electronic device performs an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image to determine the banding artifact weight matrix of the banding artifact edge image and the non-flat area weight matrix of the non-flat area image.
  • The banding artifact weight matrix and the non-flat area weight matrix are substituted into the fusion formula to determine the fusion weight matrix of the banding artifact weight matrix and the non-flat area weight matrix.
  • A first matrix is formed from the pixel value of each pixel point in the image to be processed, wherein the number of columns of the first matrix is the number of pixels in the width of the image to be processed, and the number of rows of the first matrix is the number of pixels in the height of the image to be processed.
  • According to the fusion weight matrix and the first matrix of the image to be processed, the fusion weight matrix and the first matrix are multiplied to determine the first product matrix of the fusion weight matrix and the first matrix.
  • The element value of each element point in the fusion weight matrix is subtracted from the first preset value to obtain a difference for each element point, and the first difference matrix is determined according to the difference at the corresponding position of each element point.
  • The pixel value of each pixel point in the target image is formed into a second matrix, wherein the number of columns of the second matrix is the number of pixels in the width of the target image, and the number of rows of the second matrix is the number of pixels in the height of the target image.
  • The first difference matrix is multiplied by the second matrix to obtain the second product matrix of the first difference matrix and the second matrix; the multiplication of matrices is prior art, and this embodiment of the present application does not repeat it.
  • the determination of the fusion weight matrix according to the pre-saved fusion formula includes:
  • A second difference matrix is determined according to the difference between the first preset value and the element value of each element point in the banding artifact area weight matrix, and a product matrix of the second difference matrix and the non-flat area weight matrix is determined as the fusion weight matrix.
  • Specifically, the element value of each element point in the banding artifact area weight matrix is subtracted from the first preset value to obtain the difference at the corresponding position of each element point, and the second difference matrix is determined according to the difference at the corresponding position of each element point.
  • the first preset value is a positive integer value, preferably, the first preset value is 1.
  • the second difference matrix is multiplied by the weight matrix of the non-flat area to obtain a product matrix of the second difference matrix and the weight matrix of the non-flat area, and the product matrix is determined as the fusion weight matrix.
  • For example, the banding artifact edge image is subjected to guided edge-preserving filtering to obtain the banding artifact area weight matrix bandEdgeWeight, and the non-flat area image is subjected to guided edge-preserving filtering to obtain the non-flat area weight matrix notFlatWeight. According to the banding artifact area weight matrix bandEdgeWeight and the non-flat area weight matrix notFlatWeight, the fusion weight matrix is determined as (1 - bandEdgeWeight) × notFlatWeight, where (1 - bandEdgeWeight) represents the second difference matrix obtained from the difference between 1 and the element value of each element point in the banding artifact area weight matrix bandEdgeWeight.
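  • A minimal sketch of this fusion and of the subsequent update of the target image follows, assuming single-channel (e.g. luma) images, element-wise multiplications, weights in [0, 1] and a first preset value of 1; the guided filter from OpenCV's contrib module (cv2.ximgproc) stands in for any of the edge-preserving filters named above, and its radius and eps parameters are placeholders.

```python
import cv2
import numpy as np

def fuse_and_update(img, text_pres_img, band_edge_img, not_flat_img, radius=8, eps=1e-3):
    """fusionWeight = (1 - bandEdgeWeight) * notFlatWeight; update = fusionWeight*Img + (1 - fusionWeight)*textPresImg."""
    guide = img.astype(np.float32)
    # Edge-preserving (guided) filtering of the two detection maps to obtain the weight matrices.
    band_edge_weight = cv2.ximgproc.guidedFilter(guide, band_edge_img.astype(np.float32), radius, eps)
    not_flat_weight = cv2.ximgproc.guidedFilter(guide, not_flat_img.astype(np.float32), radius, eps)
    fusion_weight = (1.0 - band_edge_weight) * not_flat_weight
    # First product matrix + second product matrix = updated target image.
    return fusion_weight * guide + (1.0 - fusion_weight) * text_pres_img.astype(np.float32)
```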
  • After the detection determines the banding artifact edge image corresponding to the image to be processed, the method further includes: performing a filtering operation on the banding artifact edge image, and determining the image obtained after removing target edges whose edge length is less than a preset length threshold as an updated image of the banding artifact edge image.
  • That is, the electronic device performs a filtering operation on the banding artifact edge image, removes the target edges to obtain the image with the target edges removed, and uses that image as the updated image of the banding artifact edge image.
  • According to the edge length of each banding artifact edge in the banding artifact edge image and the preset length threshold, the target edges whose edge length is less than the preset length threshold are determined in the banding artifact edge label image.
  • For example, taking any pixel point Ap1 on a banding artifact edge A in the banding artifact edge image bandEdgeMapRough as the center, if within an area win of fixed radius r there is a pixel point Bp1 belonging to another banding artifact edge B, then edge A and edge B are determined to be the same edge. The above operation is performed for all pixel points on the banding artifact edges to determine the banding artifact edge label image bandEdgeLabelMap.
  • Compared with the banding artifact edge image bandEdgeMapRough, the banding artifact edge label image bandEdgeLabelMap adds a label i to each banding artifact edge.
  • In the image bandEdge obtained after removing the target edges, x and y refer to the x-axis and y-axis coordinates of a pixel point respectively, len[] indicates the length statistics of the edge with each label, and edgeLenTh indicates the preset length threshold; a pixel point at position (x, y) is retained only if it lies on a banding artifact edge whose length is greater than the preset length threshold.
  • region expansion may also be performed on the image bandEdge after removing the target edge to obtain the banding artifact region image bandEdgeMap.
  • the region expansion method includes a dilation method, a post-filtering binarization method, and the like.
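  • The sketch below shows one way (an assumption, not the patent's exact procedure) to realize this step with OpenCV: connected-component labeling groups edge pixels into edges, the pixel count of each component approximates the edge length len[i], short edges are dropped, and dilation performs the region expansion to obtain bandEdgeMap.

```python
import cv2
import numpy as np

def clean_edge_map(band_edge_map_rough, edge_len_th=20, dilate_ksize=5):
    """Remove banding artifact edges shorter than edge_len_th, then expand the rest."""
    edges = (band_edge_map_rough > 0).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    band_edge = np.zeros_like(edges)
    for i in range(1, num):                              # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= edge_len_th:    # keep edges that are long enough
            band_edge[labels == i] = 1
    kernel = np.ones((dilate_ksize, dilate_ksize), np.uint8)
    return cv2.dilate(band_edge, kernel)                 # region expansion -> bandEdgeMap
```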
  • Determining the non-flat area image corresponding to the image to be processed includes:
  • performing a dilation operation and an erosion operation on the image to be processed to determine the dilation image and the erosion image; determining the difference image between the dilation image and the erosion image according to the pixel value difference of each corresponding pixel point in the two images; performing a binarization filtering operation on the difference image to determine a binary difference image of the difference image; and performing area filtering on the binary difference image, and determining the image obtained after removing areas whose area is smaller than a preset area threshold as the non-flat area image.
  • the image to be processed is subjected to a dilation operation and an erosion operation to obtain the dilation image and the erosion image.
  • The image dilation operation may use the existing imdilate function, and the image erosion operation may use the existing imerode function.
  • The pixel values of each pair of corresponding pixel points in the dilation image and the erosion image are subtracted to determine the difference between the pixel values at each corresponding position, and the difference image between the dilation image and the erosion image is determined according to these differences.
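  • A short sketch of this non-flat area detection follows (morphological gradient, binarization, area filtering) for a single-channel image; the kernel size, binarization threshold and area threshold are placeholders rather than values from the patent.

```python
import cv2
import numpy as np

def not_flat_area(img_gray, bin_th=8, area_th=64, ksize=3):
    """Binary non-flat area image from the dilation/erosion difference of a single-channel image."""
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(img_gray, kernel)          # imdilate equivalent
    eroded = cv2.erode(img_gray, kernel)            # imerode equivalent
    diff = cv2.absdiff(dilated, eroded)             # difference image
    binary = (diff > bin_th).astype(np.uint8)       # binarization filtering
    # Area filtering: drop connected regions smaller than the preset area threshold.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    out = np.zeros_like(binary)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] >= area_th:
            out[labels == i] = 1
    return out
```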
  • Embodiment 6:
  • the method further includes:
  • For each pixel point in the target image, a bitwise AND operation is performed on the abscissa value of the pixel point and the difference between the width of the preset error transfer matrix and the second preset value to determine the target abscissa value corresponding to the pixel point, and a bitwise AND operation is performed on the ordinate value of the pixel point and the difference between the height of the preset error transfer matrix and the second preset value to determine the target ordinate value corresponding to the pixel point. According to the target abscissa value and the target ordinate value corresponding to the pixel point, the element value of the element point corresponding to the target abscissa value and the target ordinate value is determined from the error transfer matrix, and the third matrix is determined according to the element value and the abscissa value and ordinate value of the element point corresponding to each pixel point; a fourth product matrix of the first difference matrix and the third matrix is then determined, a third sum value matrix of the fourth product matrix and a fourth matrix composed of the pixel value of each pixel point in the target image is determined, and the target image is updated according to the element value of each element point in the third sum value matrix.
  • The electronic device also stores a preset error transfer matrix, which is used to diffuse errors in the target image so that the target image will not be too smooth and its realism is higher.
  • For each pixel point of the target image, the difference between the width of the error transfer matrix and the second preset value is determined, and a bitwise AND operation is performed on the abscissa value of the pixel point and that difference; the second preset value can be any positive integer value, and preferably the second preset value is 1.
  • Similarly, the difference between the height of the error transfer matrix and the second preset value is determined, and a bitwise AND operation is performed on the ordinate value of the pixel point and that difference, so as to determine the target abscissa value and the target ordinate value corresponding to each pixel point. According to the target abscissa value and the target ordinate value corresponding to each pixel point, the element value at that position in the error transfer matrix is taken as the element value at the position of the pixel point in the third matrix, thereby determining the third matrix.
  • FIG. 2 is a schematic diagram of a preset error transfer matrix provided by the embodiment of the present application. As shown in FIG. 2 , the width dw of the error transfer matrix is 8, and the height dh is also 8.
  • According to the determined third matrix and the first difference matrix, the third matrix and the first difference matrix are multiplied to obtain the fourth product matrix of the first difference matrix and the third matrix. According to the pixel value of each pixel point in the target image, the fourth matrix is determined, and the fourth matrix and the fourth product matrix are added to obtain the third sum value matrix of the fourth matrix and the fourth product matrix. The target image is updated according to the element value of each element point in the third sum value matrix, and the pixel value of the pixel point at each position in the updated target image is the element value of the element point at the corresponding position in the third sum value matrix.
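  • The sketch below illustrates this error-transfer (dither) step for a single-channel target image; the 8x8 error transfer matrix of FIG. 2 is not reproduced, so a placeholder matrix is used, and the bitwise AND indexing assumes the matrix width and height are powers of two.

```python
import numpy as np

def apply_error_transfer(target_img, fusion_weight, err_matrix):
    """updated target = target + (1 - fusionWeight) * thirdMatrix, element-wise."""
    h, w = target_img.shape[:2]
    dh, dw = err_matrix.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Target coordinates: bitwise AND with (dw - 1) and (dh - 1), i.e. modulo for power-of-two sizes.
    third = err_matrix[ys & (dh - 1), xs & (dw - 1)]        # third matrix, tiled from the error transfer matrix
    fourth_product = (1.0 - fusion_weight) * third          # first difference matrix * third matrix
    return target_img.astype(np.float32) + fourth_product   # third sum value matrix = updated target image

# Placeholder 8x8 error transfer matrix (values are illustrative only, not those of FIG. 2).
err = ((np.arange(64).reshape(8, 8) % 8) - 3.5).astype(np.float32)
```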
  • Fig. 3 is a schematic diagram of removing banding artifacts from a video frame image provided by the embodiment of the present application.
  • As shown in FIG. 3, the banding artifacts in flat areas can be effectively removed or alleviated by using the method in the embodiment of the present application; at the same time, the edges of the image are well protected, and in texture-rich areas the image details are hardly affected.
  • Embodiment 7:
  • FIG. 4 is a process schematic diagram of an image banding artifact removal method provided in the embodiment of the present application. As shown in FIG. 4, the process includes the following steps:
  • S901 Obtain a video frame image after video data decoding as an image to be processed, and perform S902, S903, and S904 in parallel.
  • S902 Perform a filtering operation on the image to be processed, determine a filtered image of the image to be processed after removing banding artifacts, perform an edge texture preserving filtering operation according to the filtered image and the image to be processed, determine a target image after retaining the edge texture, and proceed to S906.
  • S903 Detect and determine the banding artifact edge image corresponding to the image to be processed, and proceed to S905.
  • S904 Detect and determine the non-flat area image corresponding to the image to be processed, and proceed to S905.
  • S905 Perform an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image, determine the banding artifact area weight matrix and the non-flat area weight matrix, and determine the fusion weight matrix according to the pre-saved fusion formula, and perform S906 .
  • S906 According to the fusion weight matrix and the first matrix composed of the pixel value of each pixel point in the image to be processed, determine the first product matrix of the fusion weight matrix and the first matrix; according to the first difference matrix determined by the difference between the first preset value and the element value of each element point in the fusion weight matrix and the second matrix composed of the pixel value of each pixel point in the target image, determine the second product matrix of the first difference matrix and the second matrix; determine the sum matrix of the first product matrix and the second product matrix, and update the target image according to the element value of each element point in the sum matrix.
  • S907 Perform a Dither operation on the target image, and determine a final output target image after removing banding artifacts.
  • FIG. 5 is a schematic diagram of the process of an image banding artifact removal method provided in the embodiment of the present application. As shown in FIG. 5, the process includes the following steps:
  • S1001 Acquire video frame images after video data decoding, and perform S1002 and S1003 in parallel.
  • S1002 Perform a filtering operation on the image to be processed, determine a filtered image of the image to be processed after removing banding artifacts, perform an edge texture preserving filtering operation according to the filtered image and the image to be processed, determine a target image after retaining the edge texture, and proceed to S1008.
  • S1003 Perform down-sampling processing on the image to be processed, and perform S1004 and S1005 in parallel.
  • S1004 Detect and determine the banding artifact edge image corresponding to the image to be processed, and proceed to S1006.
  • S1005 Detect and determine the non-flat area image corresponding to the image to be processed, and proceed to S1006.
  • S1006 Perform an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image, determine the banding artifact area weight matrix and the non-flat area weight matrix, and determine the fusion weight matrix according to a pre-stored fusion formula.
  • S1007 Perform upsampling processing on the fused weight matrix, and perform S1008.
  • S1008 According to the fusion weight matrix and the first matrix composed of the pixel value of each pixel point in the image to be processed, determine the first product matrix of the fusion weight matrix and the first matrix; according to the first difference matrix determined by the difference between the first preset value and the element value of each element point in the fusion weight matrix and the second matrix composed of the pixel value of each pixel point in the target image, determine the second product matrix of the first difference matrix and the second matrix; determine the sum matrix of the first product matrix and the second product matrix, and update the target image according to the element value of each element point in the sum matrix.
  • S1009 Perform a Dither operation on the target image to determine a final output target image after banding artifact removal.
  • Embodiment 8:
  • FIG. 6 is a schematic structural diagram of an image banding artifact removal device provided in an embodiment of the present application, and the device includes:
  • The first determination module 1101 is configured to perform a filtering operation on the image to be processed, determine the filtered image of the image to be processed after banding artifacts have been removed, and determine, according to the difference between the pixel values of each corresponding pixel point in the filtered image and the image to be processed, the difference matrix and the difference absolute value matrix between the filtered image and the image to be processed;
  • the second determination module 1102 is configured to, for each element point in the difference absolute value matrix, determine, according to the element value of the element point, the first preset threshold and the second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold;
  • the third determination module 1103 is configured to determine, according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point corresponding to each element point in the target image after the edge texture is preserved, so as to obtain the target image after the edge texture is preserved.
  • The size relationships include: the element value is not greater than the first preset threshold; the element value is greater than the first preset threshold and smaller than the second preset threshold; and the element value is not less than the second preset threshold. The third determination module is specifically configured to: according to the target size relationship corresponding to the element value of each element point, if the target size relationship is that the element value of the element point is not greater than the first preset threshold or not less than the second preset threshold, respectively determine the pixel value of the pixel point corresponding to the element point in the filtered image or in the image to be processed, and determine that pixel value as the target pixel value of the target pixel point corresponding to the element point in the target image; if the target size relationship is that the element value of the element point is greater than the first preset threshold and smaller than the second preset threshold, determine the first difference between the second preset threshold and the element value of the element point and the second difference between the second preset threshold and the first preset threshold, determine the ratio of the first difference to the second difference, determine the product value of that ratio and the element value at the position corresponding to the element point in the difference matrix, and determine the sum of the product value and the pixel value of the pixel point at the position corresponding to the element point in the image to be processed as the target pixel value of the target pixel point corresponding to the element point in the target image.
  • the device also includes:
  • a detection module configured to detect and determine the banding artifact edge image and the non-flat area image corresponding to the image to be processed
  • the fusion module is configured to perform an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image, determine the banding artifact area weight matrix and the non-flat area weight matrix, and according to the pre-saved fusion formula, Determine the fusion weight matrix;
  • the fourth determination module is configured to determine a first product matrix of the fusion weight matrix and a first matrix composed of the pixel value of each pixel point in the image to be processed, and to determine, according to the first difference matrix determined by the difference between the first preset value and the element value of each element point in the fusion weight matrix and the second matrix composed of the pixel value of each pixel point in the target image, the second product matrix of the first difference matrix and the second matrix; and
  • the update module is configured to determine a sum matrix of the first product matrix and the second product matrix, and update the target image according to the element value of each element point in the sum matrix.
  • The fusion module is specifically configured to determine a second difference matrix according to the difference between the first preset value and the element value of each element point in the banding artifact area weight matrix, and to determine the product matrix of the second difference matrix and the non-flat area weight matrix as the fusion weight matrix.
  • the update module is also configured to perform a filtering operation on the banding artifact edge image, and determine that the image after removing the target edge whose edge length is less than a preset length threshold is the update of the banding artifact edge image after the image.
  • The detection module is specifically configured to perform a dilation operation and an erosion operation on the image to be processed and determine the dilation image and the erosion image; determine the difference image between the dilation image and the erosion image according to the pixel value difference of each corresponding pixel point in the two images; perform a binarization filtering operation on the difference image to determine a binary difference image of the difference image; and perform area filtering on the binary difference image, and determine the image obtained after removing areas whose area is smaller than the preset area threshold as the non-flat area image.
  • the fourth determining module is also configured to, for each pixel in the target image, calculate the difference between the abscissa value of the pixel and the width of the preset error transfer matrix and the second preset value Perform a bitwise AND operation to determine the target abscissa value corresponding to the pixel point, and perform a bitwise AND operation on the difference between the vertical coordinate value of the pixel point and the height of the preset error transfer matrix and the second preset value to determine The target ordinate value corresponding to the pixel point, according to the target abscissa value and the target ordinate value corresponding to the pixel point, determine from the error transfer matrix that the target abscissa value and the target ordinate value correspond to The element value of the element point, according to the element value and the abscissa value and the ordinate value of the element point corresponding to each pixel point, determine the third matrix;
  • the update module is further configured to determine a fourth product matrix of the first difference matrix and the third matrix, determine a third sum value matrix of the fourth product matrix and a fourth matrix composed of the pixel value of each pixel point in the target image, and determine the updated pixel value of the corresponding pixel point of the target image according to the element value of each element point in the third sum value matrix.
  • Embodiment 9:
  • FIG. 7 is a schematic structural diagram of an electronic device provided by the embodiment of the present application.
  • An electronic device is also provided in the embodiment of the present application, including a processor 1201, a communication interface 1202, a memory 1203 and a communication bus 1204, wherein the processor 1201, the communication interface 1202 and the memory 1203 communicate with one another through the communication bus 1204.
  • a computer program is stored in the memory 1203, and when the program is executed by the processor 1201, the processor 1201 is made to perform the following steps:
  • performing a filtering operation on the image to be processed, determining the filtered image of the image to be processed after banding artifacts have been removed, and determining, according to the difference between the pixel values of each corresponding pixel point in the filtered image and the image to be processed, the difference matrix and the difference absolute value matrix between the filtered image and the image to be processed;
  • for each element point in the difference absolute value matrix, determining, according to the element value of the element point, the first preset threshold and the second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and
  • determining, according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the position corresponding to each element point in the target image after the edge texture is preserved, so as to obtain the target image after the edge texture is preserved.
  • The processor 1201 is specifically configured such that the size relationships include: the element value is not greater than the first preset threshold; the element value is greater than the first preset threshold and smaller than the second preset threshold; and the element value is not less than the second preset threshold. Determining, according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the corresponding position of each element point in the target image after edge texture preservation includes:
  • according to the target size relationship corresponding to the element value of each element point: if the target size relationship is that the element value of the element point is not greater than the first preset threshold or not less than the second preset threshold, respectively determining the pixel value of the pixel point corresponding to the element point in the filtered image or in the image to be processed, and determining that pixel value as the target pixel value of the target pixel point corresponding to the element point in the target image; if the target size relationship is that the element value of the element point is greater than the first preset threshold and smaller than the second preset threshold, determining the first difference between the second preset threshold and the element value of the element point and the second difference between the second preset threshold and the first preset threshold, determining the ratio of the first difference to the second difference, determining the product value of that ratio and the element value at the position corresponding to the element point in the difference matrix, and determining the sum of the product value and the pixel value of the pixel point at the position corresponding to the element point in the image to be processed as the target pixel value of the target pixel point corresponding to the element point in the target image.
  • The processor 1201 is further configured to detect and determine the banding artifact edge image and the non-flat area image corresponding to the image to be processed; perform an edge-preserving filtering operation on the banding artifact edge image and the non-flat area image, determine the banding artifact area weight matrix and the non-flat area weight matrix, and determine the fusion weight matrix according to the pre-saved fusion formula; determine the first product matrix of the fusion weight matrix and the first matrix composed of the pixel value of each pixel point in the image to be processed;
  • determine, according to the first difference matrix determined by the difference between the first preset value and the element value of each element point in the fusion weight matrix and the second matrix composed of the pixel value of each pixel point in the target image, a second product matrix of the first difference matrix and the second matrix; and
  • determine a sum matrix of the first product matrix and the second product matrix, and update the target image according to the element value of each element point in the sum matrix.
  • the processor 1201 is specifically configured to determine a second difference matrix according to the difference between the first preset value and the element value of each element point in the banding artifact area weight matrix;
  • a product matrix of the second difference matrix and the non-flat area weight matrix is determined as a fusion weight matrix.
  • The processor 1201 is also configured to, after the detection determines the banding artifact edge image corresponding to the image to be processed, perform a filtering operation on the banding artifact edge image, and determine the image obtained after removing target edges whose edge length is less than the preset length threshold as the updated image of the banding artifact edge image.
  • processor 1201 is specifically configured to perform an expansion operation and an erosion operation on the image to be processed, and determine the expansion image and the erosion image;
  • determine the difference image between the expansion image and the erosion image according to the pixel value difference of each corresponding pixel point in the two images; perform a binarization filtering operation on the difference image to determine a binary difference image of the difference image; and perform area filtering on the binary difference image, and determine the image obtained after removing areas whose area is smaller than the preset area threshold as the non-flat area image.
  • The processor 1201 is also configured to, for each pixel point in the target image, perform a bitwise AND operation on the abscissa value of the pixel point and the difference between the width of the preset error transfer matrix and the second preset value to determine the target abscissa value corresponding to the pixel point; perform a bitwise AND operation on the ordinate value of the pixel point and the difference between the height of the preset error transfer matrix and the second preset value to determine the target ordinate value corresponding to the pixel point; according to the target abscissa value and the target ordinate value corresponding to the pixel point, determine from the error transfer matrix the element value of the element point corresponding to the target abscissa value and the target ordinate value; and determine the third matrix according to the element value, abscissa value and ordinate value of the element point corresponding to each pixel point;
  • the communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface 1202 is configured for communication between the above-mentioned electronic device and other devices.
  • the memory may include a random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
  • the memory may also be at least one storage device located away from the aforementioned processor.
  • the processor may be a general-purpose processor, including a central processing unit, a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the following steps:
  • performing a filtering operation on an image to be processed to determine a filtered image of the image to be processed with banding artifacts removed, and determining a difference matrix and an absolute difference matrix of the filtered image and the image to be processed according to the difference between the pixel values of each pair of corresponding pixel points in the filtered image and the image to be processed;
  • for each element point in the absolute difference matrix, determining, according to the element value of the element point, a first preset threshold and a second preset threshold, a target size relationship between the element value and the first preset threshold and the second preset threshold, wherein the second preset threshold is greater than the first preset threshold;
  • according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, determining the target pixel value of the target pixel point at the position corresponding to each element point in the target image after edge texture preservation, to obtain the target image after edge texture preservation;
  • the size relationship includes that the element value is not greater than the first preset threshold, that the element value is greater than the first preset threshold and smaller than the second preset threshold, and that the element value is not less than the second preset threshold; determining, according to the target size relationship corresponding to the element value of each element point and the pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point corresponding to each element point in the target image after edge texture preservation includes:
  • according to the target size relationship corresponding to the element value of each element point: if the target size relationship is that the element value of the element point is not greater than the first preset threshold, or not less than the second preset threshold, respectively determining the pixel value of the pixel point at the corresponding position of the element point in the filtered image or in the image to be processed, and determining that pixel value as the target pixel value of the target pixel point at the corresponding position of the element point in the target image; if the target size relationship is that the element value of the element point is greater than the first preset threshold and smaller than the second preset threshold, determining a first difference between the second preset threshold and the element value of the element point and a second difference between the second preset threshold and the first preset threshold, determining the ratio of the first difference to the second difference, determining the product of the element value at the corresponding position of the element point in the difference matrix and the ratio, determining the sum of the product and the pixel value of the pixel point at the corresponding position of the element point in the image to be processed, and determining the sum as the target pixel value of the target pixel point at the corresponding position of the element point in the target image;
  • the method also includes:
  • determining, according to a first difference matrix determined from the difference between the first preset value and the element value of each element point in the fusion weight matrix, and a second matrix composed of the pixel value of each pixel point in the target image, a second product matrix of the first difference matrix and the second matrix;
  • a sum matrix of the first product matrix and the second product matrix is determined, and the target image is updated according to an element value of each element point in the sum matrix.
  • the determining of the fusion weight matrix according to the pre-saved fusion formula includes: determining a second difference matrix according to the difference between the first preset value and the element value of each element point in the banding artifact area weight matrix, and determining a product matrix of the second difference matrix and the non-flat area weight matrix as the fusion weight matrix.
  • the method further includes:
  • a filtering operation is performed on the banding artifact edge image, and an image after removing target edges whose edge length is less than a preset length threshold is determined to be an updated image of the banding artifact edge image.
  • determining the non-flat area image corresponding to the image to be processed includes: performing a dilation operation and an erosion operation on the image to be processed to determine a dilated image and an eroded image; determining a difference image of the dilated image and the eroded image according to the difference between the pixel values of each pair of corresponding pixel points; performing a binarization filtering operation on the difference image to determine a binary difference image, performing area filtering on the binary difference image, and determining the image obtained after removing regions whose area is smaller than a preset area threshold as the non-flat area image.
  • the method also includes:
  • for each pixel point in the target image, performing a bitwise AND operation on the abscissa value of the pixel point and the difference between the width of the preset error transfer matrix and the second preset value to determine the target abscissa value corresponding to the pixel point, performing a bitwise AND operation on the ordinate value of the pixel point and the difference between the height of the preset error transfer matrix and the second preset value to determine the target ordinate value corresponding to the pixel point, determining, from the error transfer matrix, the element value of the element point corresponding to the target abscissa value and the target ordinate value according to the target abscissa value and the target ordinate value corresponding to the pixel point, and determining a third matrix according to the element value, abscissa value and ordinate value of the element point corresponding to each pixel point;

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

A method, apparatus, device and medium for removing image banding artifacts: a filtering operation is performed on an image to be processed to determine a filtered image with the banding artifacts removed, and a difference matrix and an absolute difference matrix of the filtered image and the image to be processed are determined according to the difference between the pixel values of each pair of corresponding pixel points in the two images; for each element point in the absolute difference matrix, a target size relationship between the element value and a first preset threshold and a second preset threshold is determined, the second preset threshold being greater than the first preset threshold, and, according to a pre-saved value function of the pixel value corresponding to each size relationship, the target pixel value of the target pixel point at the position corresponding to each element point in the target image is determined, yielding the target image with edge texture preserved. The target image has the banding artifacts removed while image edge blurring is avoided, and the speed of determining the target image is improved.

Description

一种图像带状伪影去除方法、装置、设备和介质 技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像带状伪影去除方法、装置、设备和介质。
背景技术
基于当前的图像编码标准和视频编码标准,在对图像进行压缩时普遍采用量化手段,其中图像编码标准为联合图像专家小组(Joint Photographic Experts Group,JPEG),视频编码标准可以是基于数字存储媒体运动图像和语音的压缩标准(Moving Picture Experts Group,MPEG-2),可以是高度压缩数字视频编码标准(H264),可以是高度压缩数字视频编码标准(H265),还可以是信源编码标准(Audio Video coding Standard,AVS)。
量化手段是通过将残差信号或者是大范围的输入信号量化为小范围的输出信号,从而达到压缩比特的目的,虽然量化手段能够有效提高图像的压缩效率,但是,由于量化误差的存在,也会引入人工噪声,典型的比如块效应和带状伪影。
图像中块效应的主要表现是编码块边缘亮度不连续,块效应会出现在整个图像的任何位置。而带状伪影主要是由于量化过程中,丢失了低幅值的细节信息造成,因此,图像中带状伪影主要出现在梯度变化比较平缓的区域,比如平坦区域,亮度渐变区域等。在带宽成本敏感的场景中,由于采用较大的压缩比,会导致较为严重的带状伪影现象,此外,在获得包括带状伪影现象的图像后,若后续继续进行对比度增强处理,则会导致带状伪影现象更加严重。
概括而言,现有技术在去除带状伪影时会导致图像的边缘模糊,并且去除图像带状伪影时的速度缓慢。
因此,如何在去除带状伪影时避免图像边缘模糊,并提高去除图像带状伪影时的速度就成为亟待解决的问题。
发明内容
本申请提供了一种图像带状伪影去除方法、装置、设备和介质,用以解决现有技术中如何在去除带状伪影时避免图像边缘模糊,并提高去除图像带状伪影时的速度的问题。
本申请提供了一种图像带状伪影去除方法,所述方法包括:
对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;及
根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
相应地,本申请提供了一种图像带状伪影去除装置,所述装置包括:
第一确定模块,设置为对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
第二确定模块,设置为针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所 述第一预设阈值;及
第三确定模块,设置为根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
相应地,本申请提供了一种电子设备,包括:处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;所述存储器中存储有计算机程序,当所述程序被所述处理器执行时,使得所述处理器执行上述图像带状伪影去除方法中任一所述方法的步骤。
相应地,本申请提供了一种计算机可读存储介质,其存储有计算机程序,所述计算机程序被处理器执行时实现上述图像带状伪影去除方法中任一所述方法的步骤。
附图说明
图1为本申请实施例提供的一种图像带状伪影去除方法的过程示意图;
图2为本申请实施例提供的一种预设的误差传递矩阵的示意图;
图3为本申请实施例提供的一种对视频帧图像进行带状伪影去除后的示意图;
图4为本申请实施例提供的一种图像带状伪影去除方法的过程示意图;
图5为本申请实施例提供的一种图像带状伪影去除方法的过程示意图;
图6为本申请实施例提供的一种图像带状伪影去除装置的结构示意图;及
图7为本申请实施例提供的一种电子设备结构示意图。
具体实施方式
为了去除带状伪影并避免图像边缘模糊,提高去除图像带状伪影时的速度, 本申请实施例提供了一种图像带状伪影去除方法、装置、设备和介质。
实施例1:
图1为本申请实施例提供的一种图像带状伪影去除方法的过程示意图,该过程包括以下步骤:
S601:对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
S602:针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;及
S603:根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
本申请实施例提供的一种图像带状伪影去除方法应用于电子设备,其中该电子设备可以是PC、平板电脑、智能手机等智能终端设备,也可以是本地服务器和云端服务器等。
由于编码器编码量化导致的待处理图像的带状伪影,例如在短视频领域,用户上传的视频可能是其他第三方编码器压缩后的视频,或者是采用高压缩比压缩后的视频,导致视频中的视频帧图像包括带状伪影。
为了去除待处理图像中的带状伪影,该电子设备对该待处理图像进行滤波操作,其中该待处理图像是指存在带状伪影的图像,该待处理图像可以是单独的一张图像,也可以是视频数据解码后的视频帧图像,该待处理图像不限定色彩空间,可以是红绿蓝颜色空间(red green blue,RGB)图像、也可以是明 亮度色度颜色空间(Luminance Chrominance,YUV)图像、还可以是颜色模型(Lab)图像,该滤波操作可以是低通滤波,可以是保边滤波,本申请实施例对此不做限制。
具体的,可以使用现有的盒式滤波器(boxfilter)、高斯滤波器(gaussian filter)等进行低通滤波操作,可以使用现有的双边滤波器(bilateral filter)、非局部均值滤波器(non local filter)、指导滤波器(guided filter),侧窗滤波器(side windows filter)等进行双边滤波操作,具体的,电子设备进行滤波操作的方法属于现有技术,本申请实施例对此不做赘述。
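As a rough illustration of this filtering step (a minimal sketch based on the description above, not a reference implementation of this application), the snippet below runs either a low-pass box filter or an edge-preserving bilateral filter with OpenCV; the kernel size and the sigma values are arbitrary assumptions.

```python
import cv2
import numpy as np

def smooth_for_debanding(img: np.ndarray, use_edge_preserving: bool = False) -> np.ndarray:
    """Low-pass (or edge-preserving) filtering of the image to be processed.

    img is assumed to be a single-channel uint8 image (e.g. the Y plane of a
    YUV frame); kernel size and sigma values are illustrative only.
    """
    if use_edge_preserving:
        # Bilateral filtering smooths flat areas while keeping strong edges.
        return cv2.bilateralFilter(img, d=9, sigmaColor=25, sigmaSpace=9)
    # A box filter is the cheapest low-pass option mentioned in the text.
    return cv2.boxFilter(img, ddepth=-1, ksize=(9, 9))
```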
对该待处理图像进行滤波操作后,可以确定出该待处理图像去除带状伪影后的滤波图像,其中该滤波图像与该待处理图像的像素点数量相同,由于在进行滤波操作后,该滤波操作的图像边缘会出现边缘模糊问题,为了解决该图像边缘模糊问题,还可以确定出滤波图像与待处理图像的差异矩阵和差异绝对值矩阵。
具体的,针对滤波图像中每个位置的像素点,根据该像素点的像素值和待处理图像中对应位置像素点的像素值,确定像素值的差值;根据每个位置的像素点的像素值的差值,可以确定出该滤波图像与待处理图像的差异矩阵;其中该差异矩阵的列数为待处理图像宽的像素点数量,该差异矩阵的行数为待处理图像高的像素点数量;根据确定出的差异矩阵,该电子设备还确定出差异绝对值矩阵,具体的,采用现有的取绝对值函数确定差异绝对值矩阵。
为了使滤波图像的边缘清晰,该电子设备还保存有第一预设阈值和第二预设阈值,其中该第一预设阈值和该第二预设阈值均为预先设置的,该第二预设阈值大于第一预设阈值。
由于该差异绝对值矩阵的每个元素点的元素值与第一预设阈值和第二预设阈值的大小不同时,该元素点对应位置的像素值的确定方法也不相同,因此针对该差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈 值和第二预设阈值,判断该元素值与第一预设阈值的和第二预设阈值的目标大小关系。其中该元素值可能小于第一预设阈值、可能大于第一预设阈值、还可能大于第二预设阈值。
为了确定出每个元素点对应位置的目标像素点的目标像素值,在本申请实施例中,还预先保存有每种大小关系对应的像素值的取值函数,其中该大小关系包括元素值不大于第一预设阈值、元素值大于第一预设阈值且小于第二预设阈值和元素值不小于第二预设阈值。
根据预先保存的每种大小关系对应的像素值的取值函数、以及每个元素点的元素值对应的目标大小关系,确定每个元素点对应位置的像素值的目标取值函数,并根据每个元素点对应位置的像素点的目标取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,从而根据每个目标像素点的目标像素值,得到了边缘纹理保留后的目标图像。
由于在本申请实施例中对待处理图像进行滤波操作,确定待处理图像去除带状伪影后的滤波图像,为了解决滤波图像的边缘模糊的问题,还根据滤波图像与待处理图像中每个对应位置像素点的像素值的差值,确定滤波图像与待处理图像的差异矩阵和差异绝对值矩阵;针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像;从而使目标图像去除了带状伪影并避免了图像边缘模糊,并且该方法中仅采用了简单的减法和乘法,计算复杂度较低,从而提高了确定目标图像的速度。
实施例2:
为了确定出边缘纹理保留后的目标图像中每个元素点对应位置的目标像 素点的目标像素值,在上述实施例的基础上,在本申请实施例中,所述大小关系包括所述元素值不大于所述第一预设阈值、所述元素值大于所述第一预设阈值且小于所述第二预设阈值和所述元素值不小于所述第二预设阈值,所述根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值包括:
针对每个元素点的元素值对应的所述目标大小关系,若所述目标大小关系分别为该元素点的元素值不大于所述第一预设阈值或不小于所述第二预设阈值,分别确定所述滤波图像或所述待处理图像中该元素点对应位置的像素点的像素值,将所述像素值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值;若所述目标大小关系还为该元素点的元素值大于所述第一预设阈值且小于所述第二预设阈值,确定所述第二预设阈值和该元素点的元素值的第一差值,所述第二预设阈值和所述第一预设阈值的第二差值,并确定所述第一差值与所述第二差值对应的比例值,确定所述差异矩阵中该元素点对应位置的元素值与所述比例值的乘积值,并确定所述乘积值与所述待处理图像中该元素点对应位置的像素点的像素值的和值,将所述和值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值。
为了确定出目标图像中每个目标像素点的目标像素值,在本申请实施例中,针对每个元素点的元素值对应的目标大小关系,根据该元素点的元素值对应的目标大小关系和待处理图像、滤波图像确定出目标图像中该元素点对应位置的目标像素点的目标像素值。
具体的,根据该元素点的元素值对应的目标大小关系,若该目标大小关系为该元素点的元素值不大于第一预设阈值时,根据预先确定的滤波图像,确定滤波图像中该元素点对应位置的像素点的像素值。
例如,该元素点在差异绝对值矩阵中的位置为第一行第二列,则在滤波图像中确定出第一行第二列的像素点的像素值,其中差异绝对值矩阵的行数列数 与滤波图像的行数列数相同,因此根据该元素点在差异绝对值矩阵的位置,均可以在滤波图像中找到对应位置的像素点的像素值。
若该目标大小关系为该元素点的元素值不小于第二预设阈值时,根据预先确定的待处理图像,确定待处理图像中该元素点对应位置的像素点的像素值。其中差异绝对值矩阵的行数列数与待处理图像的行数列数也相同,因此根据该元素点在差异绝对值矩阵的位置,均可以在待处理图像中找到对应位置的像素点的像素值。
确定出滤波图像或待处理图像中该元素点对应位置的像素点的像素值后,将该像素值确定为目标图像中该元素点对应位置的目标像素点的目标像素值。
若该目标大小关系为该元素点的元素值大于第一预设阈值且小于第二预设阈值时,根据该元素点的元素值和第二预设阈值,确定该第二预设阈值和该元素点的元素值的第一差值,根据该第二预设阈值和第一预设阈值,确定该第二预设阈值和第一预设阈值的第二差值,根据第一差值和第二差值,确定出该第一差值与第二差值的比例值,根据确定出的比例值和差异矩阵中该元素点对应位置的元素值,将比例值与差异矩阵中该元素点对应位置的元素值相乘得到乘积值。确定该乘积值与待处理图像中该元素点对应位置的像素点的像素值的和值,将该和值确定为目标图像中该元素点对应位置的目标像素点的目标像素值。
具体的,在本申请实施例中,根据待处理图像Img和待处理图像去除带状伪影后的滤波图像blurFilterImg,确定出滤波图像blurFilterImg与待处理图像Img的差异矩阵textDiff,其中textDiff=blurFilterImg-Img,即确定差异矩阵中每个元素点的元素值与待处理图像中对应位置的元素点的元素值的差值,根据每个元素点对应的差值,确定出差异矩阵textDiff。采用现有的取绝对值函数确定出差异矩阵textDiff对应的差异绝对值矩阵textDiffAbs,即textDiffAbs=abs(textDiff)。
具体的,根据确定出的差异矩阵textDiff和差异绝对值矩阵textDiffAbs, 可以确定出边缘纹理保留后的目标图像textPresImg,该目标图像textPresImg的确定公式为:
textPresImg = blurFilterImg，若 textDiffAbs ≤ thrLow；
textPresImg = Img + textDiff × (thrHigh − textDiffAbs) / (thrHigh − thrLow + eps)，若 thrLow < textDiffAbs < thrHigh；
textPresImg = Img，若 textDiffAbs ≥ thrHigh。
根据差异绝对值矩阵textDiffAbs中每个元素点的元素值、第一预设阈值thrLow和第二预设阈值thrHigh,若该元素值不大于第一预设阈值thrLow,即该元素值≤thrLow,则获取滤波图像中该元素点对应位置的像素点的像素值,将该像素值确定为边缘纹理保留后的目标图像textPresImg中该元素点对应位置的目标像素点的目标像素值。
若该元素值不小于第二预设阈值thrHigh,即该元素值≥thrHigh,则获取待处理图像中该元素点对应位置的像素点的像素值,将该像素值确定为边缘纹理保留后的目标图像textPresImg中该元素点对应位置的目标像素点的目标像素值。
若该元素值大于第一预设阈值小于第二预设阈值时,则确定第二预设阈值thrHigh和该元素点的元素值的第一差值,第一差值在上述公式表示为thrHigh-textDiffAbs,确定第二预设阈值thrHigh和第一预设阈值thrLow的第二差值,即第二差值在上述公式表示为thrHigh-thrLow,确定第一差值与第二差值对应的比例值,该比例值在上述公式中表示为
(thrHigh − textDiffAbs) / (thrHigh − thrLow + eps)，
其中该eps为预先设置的很小的正数,在该第二预设阈值thrHigh和第一预设阈值thrLow的第二差值为0时,将该eps与该第二差值的和值对该第二差值进行更新,确定差异矩阵中该元素点对应位置的元素值与比例值的乘积值,该乘积值在上述公式中表示为
textDiff × (thrHigh − textDiffAbs) / (thrHigh − thrLow + eps)，
并确定乘积值与待处理图像中该元素点对应位置的像素点的像素值的和值,该和值在上述公式中表示为
Img + textDiff × (thrHigh − textDiffAbs) / (thrHigh − thrLow + eps)，
将该和值确定为目标图像textPresImg 中该元素点对应位置的目标像素点的目标像素值。
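The piecewise rule above can be vectorized directly. The following NumPy sketch only illustrates that formula; the threshold values thrLow, thrHigh and the small constant eps are assumed here and are not values prescribed by the application.

```python
import numpy as np

def texture_preserving_merge(img: np.ndarray,
                             blur_filter_img: np.ndarray,
                             thr_low: float = 2.0,
                             thr_high: float = 6.0,
                             eps: float = 1e-6) -> np.ndarray:
    """Blend the filtered image and the original according to the piecewise rule."""
    img = img.astype(np.float32)
    blur = blur_filter_img.astype(np.float32)

    text_diff = blur - img                 # difference matrix textDiff
    text_diff_abs = np.abs(text_diff)      # absolute difference matrix textDiffAbs

    # Ratio used in the transition band (eps guards against division by zero).
    ratio = (thr_high - text_diff_abs) / (thr_high - thr_low + eps)
    transition = img + text_diff * ratio

    out = np.where(text_diff_abs <= thr_low, blur,       # flat area: take filtered value
          np.where(text_diff_abs >= thr_high, img,       # strong edge: keep original value
                   transition))                          # in between: weighted mix
    return out
```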
实施例3:
为了使目标图像更加真实,在上述各实施例的基础上,在本申请实施例中,所述方法还包括:
检测确定所述待处理图像对应的带状伪影边缘图像和非平坦区域图像;
对所述带状伪影边缘图像和所述非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵;
确定所述融合权重矩阵与所述待处理图像中每个像素点的像素值组成的第一矩阵的第一乘积矩阵;
根据预设数值与所述融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和所述目标图像中每个像素点的像素值组成的第二矩阵,确定所述第一差值矩阵与所述第二矩阵的第二乘积矩阵;
确定所述第一乘积矩阵与所述第二乘积矩阵的和值矩阵,根据所述和值矩阵中每个元素点的元素值,对所述目标图像进行更新。
在本申请实施例中,为了使目标图像更加真实,电子设备还可以在目标图像的基础上保留原带状伪影区域的纹理,实现对目标图像的更新,从而使更新后的目标图像更加真实。
为了实现在目标图像的基础上保留原带状伪影区域的纹理,该电子设备还检测确定待处理图像对应的带状伪影边缘图像和非平坦区域图像,其中在检测带状伪影边缘区域图像和非平坦区域图像时,该待处理图像可以是原始分辨率的图像,也可以下采样后的图像,还可以是上采样后的图像,本申请实施例对此不做限制。
可以采用边缘检测算法确定待处理图像对应的带状伪影边缘区域图像,例如可以是基于canny的边缘检测算法,可以是基于Sobel的边缘检测算法,还可以是基于Laplacian的边缘检测算法;可以采用方差确定待处理图像对应的 非平坦区域图像,也可以基于梯度确定非平坦区域图像。
作为一种可能的实施方式,在进行检测之前,还可以对该待处理图像进行去噪处理,例如采用均值滤波、保边滤波等进行去噪处理,从而提高后续进行检测的准确度。
具体的,采用基于canny的边缘检测算法确定待处理图像的带状伪影边缘区域图像时,预先设置有阈值,分别是cannyThLow、cannyThMid和cannyThHigh,其中cannyThLow小于cannyThMid,cannyThMid小于cannyThHigh。
采用阈值cannyThLow和cannyThMid组成阈值范围,对待处理图像进行边缘检测,获得第一边缘图edgeMapLow,采用阈值cannyThMid,cannyThHigh组成的阈值范围,对待处理图像进行边缘检测,获得第二边缘图edgeMapHigh,具体的,即计算待处理图像的每个像素点的梯度值,根据梯度值和阈值范围确定出处于阈值范围内的像素点,从而确定出第一边缘图edgeMapLow和第二边缘图edgeMapHigh。
具体的edgeMapLow=canny(Img,[cannyThLow,cannyThMid]),edgeMapHigh=canny(Img,[cannyThMid,cannyThHigh]),其中上述公式标识该边缘图中的像素点为待处理图像中梯度值位于阈值范围的像素点。
采用第一边缘图edgeMapLow和第二边缘图edgeMapHigh获得待处理图像对应的带状伪影边缘图bandEdgeMapRough,具体的,即根据第一边缘图edgeMapLow中每个像素点的像素值与第二边缘图edgeMapHigh中对应位置的像素点的像素值的差值,确定出处理图像对应的带状伪影边缘图bandEdgeMapRough,即bandEdgeMapRough=edgeMapLow-edgeMapHigh。
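A minimal OpenCV sketch of this two-range edge detection is given below; cv2.Canny stands in for the gradient-plus-threshold test described above, and the three threshold values are illustrative assumptions only.

```python
import cv2
import numpy as np

def banding_edge_map(img: np.ndarray,
                     canny_th_low: int = 10,
                     canny_th_mid: int = 30,
                     canny_th_high: int = 80) -> np.ndarray:
    """Rough banding-artifact edge map: weak edges that vanish at higher thresholds."""
    edge_map_low = cv2.Canny(img, canny_th_low, canny_th_mid)    # edgeMapLow
    edge_map_high = cv2.Canny(img, canny_th_mid, canny_th_high)  # edgeMapHigh
    # Pixels present in the low-threshold map but not in the high-threshold map
    # are treated as candidate banding-artifact edges (bandEdgeMapRough).
    return cv2.subtract(edge_map_low, edge_map_high)
```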
在检测确定出待处理图像对应的带状伪影边缘图像和非平坦区域图像后,对带状伪影边缘图像进行保边滤波操作,确定出带状伪影边缘图像的带状伪影权重矩阵,对非平坦区域图像进行保边滤波操作,确定出非平坦区域图像的非平坦区域权重矩阵。
具体的,采用现有技术中的联合双边滤波器(joint bilateral filter),指导滤波器(guided filter),导向侧窗式滤波器(guide side window filter)以及其改进滤波器中的任意一种滤波器,对带状伪影边缘图像和非平坦区域图像进行保边滤波操作,确定出带状伪影边缘图像的带状伪影权重矩阵、和非平坦区域图像的非平坦区域权重矩阵。
根据带状伪影权重矩阵和非平坦区域权重矩阵,根据预先保存的融合公式,将带状伪影权重矩阵与非平坦区域权重矩阵代入融合公式中,确定出带状伪影权重矩阵和非平坦区域权重矩阵的融合权重矩阵。
根据待处理图像中每个像素点的像素值,组成第一矩阵,其中该第一矩阵的列数为待处理图像宽的像素点数量,该第一矩阵的行数为待处理图像高的像素点数量;根据融合权重矩阵与待处理图像的第一矩阵,将融合权重矩阵与第一矩阵相乘,确定出融合权重矩阵与第一矩阵的第一乘积矩阵。
根据预设数值与融合权重矩阵中每个元素点的元素值,将预设数值与每个元素点的元素值相减得到每个差值,根据每个元素点对应位置的差值,确定出第一差值矩阵。
根据确定出目标图像中每个像素点的像素值,将每个像素点的像素值组成第二矩阵,其中第二矩阵的列数为目标图像宽的像素点数,第二矩阵的行数为目标图像高的像素点数。
根据第一差值矩阵和第二矩阵,将第一差值矩阵与第二矩阵相乘,得到第一差值矩阵与第二矩阵的第二乘积矩阵,其中矩阵的乘法为现有技术,本申请实施例对此不做赘述。
将第一乘积矩阵与第二乘积矩阵相加,得到第一乘积矩阵与第二乘积矩阵的和值矩阵,根据和值矩阵中每个元素点的元素值,将目标图像中对应位置的像素点的像素值替换为元素值,从而实现对目标图像的更新。
为了确定出融合权重矩阵,在上述各实施例的基础上,在本申请实施例中,所述根据预先保存的融合公式,确定出融合权重矩阵包括:
根据第一预设数值与所述带状伪影区域权重矩阵中每个元素点的元素值的差值,确定第二差值矩阵;
将所述第二差值矩阵与所述非平坦区域权重矩阵的乘积矩阵确定为融合权重矩阵。
为了准确地确定出融合权重矩阵，在本申请实施例中，根据确定出的带状伪影区域权重矩阵，将第一预设数值与带状伪影区域权重矩阵中每个元素点的元素值相减，确定出每个元素点对应位置的差值，根据每个元素点对应位置的差值，确定出第二差值矩阵。其中该第一预设数值为正整数值，较佳的，该第一预设数值为1。
将第二差值矩阵与非平坦区域权重矩阵相乘,得到第二差值矩阵与非平坦区域权重矩阵的乘积矩阵,将该乘积矩阵确定为融合权重矩阵。
具体的,对带状伪影边缘图像进行引导保边滤波,获得带状伪影区域权重矩阵bandEdgeWeight,对非平坦区域图像进行引导保边滤波,获得非平坦区域权重矩阵notFlatWeight,根据带状伪影区域权重矩阵bandEdgeWeight和非平坦区域权重矩阵notFlatWeight、以及预先保存的融合公式,生成最终的融合权重矩阵blendWeight,其中该融合公式为blendWeight=(1-bandEdgeWeight)×notFlatWeight。
其中该(1-bandEdgeWeight)表示为1与带状伪影区域权重矩阵bandEdgeWeight中每个元素点的元素值的差值得到的第二差值矩阵。进行保边滤波可以采用现有技术中的联合双边滤波器(joint bilateral filter),指导滤波器(guided filter),导向侧窗式滤波器(guide side window filter)以及其改进滤波器。
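Assuming bandEdgeWeight and notFlatWeight have already been produced by one of the edge-preserving filters listed above (as float maps in [0, 1]), the fusion and the subsequent blending of the original image with the edge-texture-preserved image reduce to element-wise operations, as in this sketch:

```python
import numpy as np

def fuse_and_blend(img: np.ndarray,
                   text_pres_img: np.ndarray,
                   band_edge_weight: np.ndarray,
                   not_flat_weight: np.ndarray) -> np.ndarray:
    """Fuse the two weight maps and blend the original with the de-banded image.

    band_edge_weight / not_flat_weight are assumed to be float maps in [0, 1]
    obtained from an edge-preserving (guided / joint bilateral / side-window) filter.
    """
    # blendWeight = (1 - bandEdgeWeight) * notFlatWeight
    blend_weight = (1.0 - band_edge_weight) * not_flat_weight

    # blendImgOut = blendWeight * Img + (1 - blendWeight) * textPresImg
    blend_img_out = blend_weight * img.astype(np.float32) \
        + (1.0 - blend_weight) * text_pres_img.astype(np.float32)
    return blend_img_out
```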
实施例4:
为了使检测得到的带状伪影边缘图像更加准确,在上述各实施例的基础上,在本申请实施例中,所述检测确定所述待处理图像对应的带状伪影边缘图像之后,所述方法还包括:
对所述带状伪影边缘图像进行滤波操作,确定去除边缘长度小于预设长度阈值的目标边缘后的图像为所述带状伪影边缘图像的更新后的图像。
为了使检测得到的带状伪影边缘图像更加准确,在本申请实施例中,电子设备对带状伪影边缘图像进行滤波操作,将带状伪影边缘图像中边缘长度小于预设长度阈值的目标边缘去除,得到去除目标边缘后的图像,将去除目标边缘后的图像作为带状伪影边缘图像的更新后的图像。
具体的,针对该带状伪影边缘图像中的每一带状伪影边缘,以该带状伪影边缘上的任一像素点为中心,设定距离为半径的区域范围内,若存在另一带状伪影边缘的像素点,则认为该两个带状伪影边缘为同一带状伪影边缘,确定出该带状伪影边缘图像中的相互独立的带状伪影边缘,并标记为i,其中,i={0,1,...,n},n是带状伪影边缘数量。
根据带状伪影边缘图像中每个带状伪影边缘的长度和预设的长度阈值,确定出带状伪影边缘标记图像中边缘长度小于预设长度阈值的目标边缘。
例如,以带状伪影边缘图像bandEdgeMapRough中的带状伪影边缘A上的任一像素点Ap1为中心,在固定半径为r的区域范围win中,如果存在另一个带状伪影边缘B上的像素点Bp1,确定边缘A和边缘B是同一个边缘,对带状伪影边缘上的所有像素点进行上述操作,确定出带状伪影边缘标记图像bandEdgeLabelMap,该带状伪影边缘标记图像bandEdgeLabelMap相比较带状伪影边缘图像bandEdgeMapRough,增加了每个带状伪影边缘的标记i。
确定出带状伪影边缘标记图像bandEdgeLabelMap中边缘长度小于预设长度阈值的目标边缘,并将除目标边缘外的其他边缘中的像素点作为去除目标边缘后的图像bandEdge中的像素点。
bandEdge(x,y) = 1，若像素点(x,y)所在带状伪影边缘i满足 len[i] > edgeLenTh；bandEdge(x,y) = 0，其他情况。
其中，x和y分别是指像素点的x轴坐标和y轴坐标，len[]表示长度统计，edgeLenTh表示预设长度阈值，bandEdge(x,y)=1表示带状伪影边缘图中对应该位置的像素点在长度大于预设长度阈值的带状伪影边缘上，bandEdge(x,y)=0表示带状伪影边缘图中对应该位置的像素点在长度不大于预设长度阈值的带状伪影边缘上、或是像素点在非带状伪影边缘上。
作为一种可能的实施方式,在本申请实施例中,还可以对去除目标边缘后的图像bandEdge进行区域扩展,得到带状伪影区域图像bandEdgeMap。具体的,区域扩展方式包括膨胀方式和滤波后二值化处理等方式。
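A possible sketch of this edge-length filtering and region expansion is shown below; connected-component pixel count is used as a stand-in for the edge-length statistic len[i], and edgeLenTh and the expansion radius are assumed values.

```python
import cv2
import numpy as np

def filter_short_band_edges(band_edge_rough: np.ndarray,
                            edge_len_th: int = 20,
                            expand_radius: int = 2) -> np.ndarray:
    """Drop banding-artifact edges shorter than a length threshold, then expand."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        (band_edge_rough > 0).astype(np.uint8), connectivity=8)

    band_edge = np.zeros_like(band_edge_rough)
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= edge_len_th:
            band_edge[labels == i] = 255

    # Optional region expansion (dilation) to form the banding-artifact area map.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * expand_radius + 1, 2 * expand_radius + 1))
    return cv2.dilate(band_edge, kernel)
```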
实施例5:
为了确定待处理图像对应的非平坦区域图像,在上述各实施例的基础上,在本申请实施例中,所述确定所述待处理图像对应的非平坦区域图像包括:
对所述待处理图像进行膨胀操作和腐蚀操作,确定膨胀图像和腐蚀图像;
根据所述膨胀图像与所述腐蚀图像中每个对应位置像素点的像素值的差值,确定所述膨胀图像与所述腐蚀图像的差异图像;
对所述差异图像进行二值化滤波操作,确定所述差异图像的二值差异图像,对所述二值差异图像进行面积滤波,确定去除面积小于预设面积阈值的区域后的图像为非平坦区域图像。
为了确定出待处理图像对应的非平坦区域图像,在本申请实施例中,对待处理图像进行膨胀操作和腐蚀操作,得到膨胀图像和腐蚀图像。其中对图像进行膨胀操作的方法可以采用现有的imdilate函数进行图像膨胀操作,采用现有的imerode函数进行图像腐蚀操作。
根据确定出的膨胀图像和腐蚀图像,将膨胀图像和腐蚀图像中每个对应位置的像素点的像素值相减,确定出每个对应位置的像素点的像素值的差值,根据每个对应位置的像素点的像素值的差值,确定出膨胀图像与腐蚀图像的差异图像。
对差异图像进行二值化滤波操作,确定出差异图像的二值化差异图像,具体的,根据差异图像中每个像素点的像素值和确定的预设纹理差异阈值,确定 出像素值不小于预设纹理差异阈值的像素点,并将其像素值设置为255,确定出像素值小于预设纹理差异阈值的像素点,并将其像素值设置为0。
对确定出的二值化差异图像进行面积滤波,将二值化差异图像中区域面积小于预设面积阈值的目标区域去除,得到去除目标区域后的图像,并将去除目标区域后的图像作为非平坦区域图像。
具体的,对待处理图像进行膨胀操作,得到膨胀图像dilateImg,对待处理图像进行腐蚀操作,得到腐蚀图像erodeImg,根据膨胀图像dilateImg和腐蚀图像erodeImg,确定出差异图像diffImg,其中,diffImg=dilateImg-erodeImg,表示为将膨胀图像dilateImg和腐蚀图像erodeImg中每个对应位置的像素点的像素值相减的差值作为差异图像diffImg中对应位置的像素点的像素值。
对差异图像diffImg进行二值化处理,获得二值差异图像binaryDiffImg,其中
binaryDiffImg(x,y) = 255，若 diffImg(x,y) ≥ textThLow；binaryDiffImg(x,y) = 0，若 diffImg(x,y) < textThLow。
其中,textThLow表示预设纹理差异阈值。
对二值差异图像binaryDiffImg进行面积滤波,获得非平坦区域图notFlatMap,
notFlatMap(x,y) = binaryDiffImg(x,y)，若像素点(x,y)所在区域j满足 S(j) ≥ areaTh；notFlatMap(x,y) = 0，若 S(j) < areaTh。
其中,j={0,1,2,...,m}是二值差异图像binaryDiffImg的区域标识,m表示二值差异图像binaryDiffImg中的区域数量,x和y分别是指像素点的x轴坐标和y轴坐标,S()表示计算区域面积的函数,areaTh为预设面积阈值。
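The non-flat area detection described above (dilation/erosion difference, binarization, area filtering) can be sketched as follows; textThLow and areaTh are illustrative thresholds, not values fixed by the application.

```python
import cv2
import numpy as np

def not_flat_map(img: np.ndarray,
                 text_th_low: int = 6,
                 area_th: int = 64) -> np.ndarray:
    """Non-flat (textured) region map via dilation/erosion difference and area filtering."""
    kernel = np.ones((3, 3), np.uint8)
    dilate_img = cv2.dilate(img, kernel)
    erode_img = cv2.erode(img, kernel)
    diff_img = cv2.subtract(dilate_img, erode_img)        # diffImg = dilateImg - erodeImg

    # Binarize: pixels whose local variation reaches the texture threshold -> 255.
    binary_diff = np.where(diff_img >= text_th_low, 255, 0).astype(np.uint8)

    # Area filtering: keep only connected regions whose area reaches area_th.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary_diff, connectivity=8)
    out = np.zeros_like(binary_diff)
    for j in range(1, num):
        if stats[j, cv2.CC_STAT_AREA] >= area_th:
            out[labels == j] = 255
    return out
```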
实施例6:
为了使确定出的目标图像更加真实,在上述各实施例的基础上,在本申请实施例中,所述方法还包括:
针对所述目标图像中每个像素点,将该像素点的横坐标值和预设的误差传递矩阵的宽与第二预设数值的差值进行按位与操作,确定该像素点对应的目标横坐标值,将该像素点的纵坐标值和预设的误差传递矩阵的高与第二预设数值 的差值进行按位与操作,确定该像素点对应的目标纵坐标值,根据该像素点对应的目标横坐标值和目标纵坐标值,从所述误差传递矩阵中确定出由所述目标横坐标值和所述目标纵坐标值对应的元素点的元素值,根据每个像素点对应的元素点的元素值和横坐标值及纵坐标值,确定出第三矩阵;
确定所述第一差值矩阵与所述第三矩阵的第四乘积矩阵,并确定所述第四乘积矩阵与所述目标图像中每个像素点的像素值组成的第四矩阵的第三和值矩阵,根据所述第三和值矩阵中每个元素点的元素值,确定更新后的所述目标图像的对应位置的像素点的像素值。
为了使确定出的目标图像更加真实,在本申请实施例,电子设备还保存有预设的误差传递矩阵,误差传递矩阵是用于在目标图像中通过误差使得目标图像不会过于平滑,从而使目标图像的真实性更高。
针对目标图像的每个像素点,根据该像素点的横坐标值和误差传递矩阵的宽,确定出横坐标值,并确定误差传递矩阵的宽与第二预设数值的差值,并将该横坐标值与该差值进行按位与操作;其中该第二预设数值可以是任意正整数值,较佳的,该第二预设数值为1。
根据该像素点的纵坐标值和误差传递矩阵的高，确定出纵坐标值，并确定误差传递矩阵的高与第二预设数值的差值，并将该纵坐标值与该差值进行按位与操作，从而确定出每个像素点对应的目标横坐标值和目标纵坐标值，根据确定出每个像素点位置对应的目标横坐标值和目标纵坐标值，确定出每个像素点位置对应的第三矩阵的元素点的元素值，从而确定出第三矩阵。
例如,图2为本申请实施例提供的一种预设的误差传递矩阵的示意图,如图2所示,误差传递矩阵的宽dw为8,高dh也为8。
根据确定出的第三矩阵和第一差值矩阵，将第三矩阵与第一差值矩阵相乘，得到第一差值矩阵与第三矩阵的第四乘积矩阵，根据目标图像中每个像素点的像素值，确定出第四矩阵，将第四矩阵与第四乘积矩阵相加，得到第四矩阵与第四乘积矩阵的第三和值矩阵，根据第三和值矩阵中每个元素点的元素值，对目标图像进行更新，更新后的目标图像中每个位置的像素点的像素值即为第三和值矩阵中对应位置的元素点的元素值。
具体的,根据预设的如图2所示的误差传递矩阵,对目标图像blendImgOut进行掩膜dither操作,得到更新后的目标图像ImgOut,ImgOut=blendImgOut+(1-blendWeight)×ditherMatrix[x&(dw-1),y&(dh-1)]。
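A sketch of this dither step is given below; it assumes a single-channel floating-point blendImgOut and an error transfer (dither) matrix whose width dw and height dh are powers of two, so that the bitwise AND wraps the pixel coordinates as in the formula above.

```python
import numpy as np

def apply_dither(blend_img_out: np.ndarray,
                 blend_weight: np.ndarray,
                 dither_matrix: np.ndarray) -> np.ndarray:
    """Tile the error transfer (dither) matrix over the image with bitwise AND indexing."""
    dh, dw = dither_matrix.shape
    h, w = blend_img_out.shape[:2]

    ys = np.arange(h) & (dh - 1)           # y & (dh - 1) for every row
    xs = np.arange(w) & (dw - 1)           # x & (dw - 1) for every column
    tiled = dither_matrix[np.ix_(ys, xs)]  # third matrix: one dither value per pixel

    # ImgOut = blendImgOut + (1 - blendWeight) * ditherMatrix[x & (dw-1), y & (dh-1)]
    img_out = blend_img_out + (1.0 - blend_weight) * tiled
    return np.clip(img_out, 0, 255)        # assumes an 8-bit value range
```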
图3为本申请实施例提供的一种对视频帧图像进行带状伪影去除后的示意图,如图3所示,采用本申请实施例中的方法后可以有效去除或缓解平坦区域的带状伪影,同时,对图像边缘也能够起到很好的保护作用,在纹理丰富区域,图像细节几乎没有受到太多的影响。
实施例7:
下面通过一个具体的实施例对本申请的图像带状伪影去除方法进行说明,该待处理图像为视频帧图像,图4为本申请实施例提供的一种图像带状伪影去除方法的过程示意图,如图4所示,该过程包括以下步骤:
S901:获取视频数据解码后的视频帧图像为待处理图像,并列进行S902、S903和S904。
S902:对待处理图像进行滤波操作,确定待处理图像去除带状伪影后的滤波图像,根据滤波图像与待处理图像,进行边缘纹理保留滤波操作,确定边缘纹理保留后的目标图像,进行S906。
S903:检测确定待处理图像对应的带状伪影边缘图像,进行S905。
S904:检测确定待处理图像对应的非平坦区域图像,进行S905。
S905:对带状伪影边缘图像和非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵,进行S906。
S906:根据融合权重矩阵与待处理图像中每个像素点的像素值组成的第一矩阵,确定融合权重矩阵与第一矩阵的第一乘积矩阵;根据第一预设数值与融 合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和目标图像中每个像素点的像素值组成的第二矩阵,确定第一差值矩阵与第二矩阵的第二乘积矩阵;确定第一乘积矩阵与第二乘积矩阵的和值矩阵,根据和值矩阵中每个元素点的元素值,对目标图像进行更新。
S907:对目标图像进行Dither操作,确定最终输出的去除带状伪影后的目标图像。
为了提高确定最终输出的去除带状伪影后的目标图像的速度,图5为本申请实施例提供的一种图像带状伪影去除方法的过程示意图,如图5所示,该过程包括以下步骤:
S1001:获取视频数据解码后的视频帧图像,并列进行S1002和S1003。
S1002:对待处理图像进行滤波操作,确定待处理图像去除带状伪影后的滤波图像,根据滤波图像与待处理图像,进行边缘纹理保留滤波操作,确定边缘纹理保留后的目标图像,进行S1008。
S1003:对待处理图像进行下采样处理,并列进行S1004和S1005。
S1004:检测确定待处理图像对应的带状伪影边缘图像,进行S1006。
S1005:检测确定待处理图像对应的非平坦区域图像,进行S1006。
S1006:对带状伪影边缘图像和非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵。
S1007:对融合权重矩阵进行上采样处理,进行S1008。
S1008:根据融合权重矩阵与待处理图像中每个像素点的像素值组成的第一矩阵,确定融合权重矩阵与第一矩阵的第一乘积矩阵;根据第一预设数值与融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和目标图像中每个像素点的像素值组成的第二矩阵,确定第一差值矩阵与第二矩阵的第二乘积矩阵;确定第一乘积矩阵与第二乘积矩阵的和值矩阵,根据和值矩阵中每个元素点的元素值,对目标图像进行更新。
S1009:对目标图像进行Dither操作,确定最终输出的去除带状伪影后的目标图像。
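The speed-up variant of Fig. 5 (detect on a downsampled copy, then upsample the fused weight) might look like the sketch below; the detection callables and the 0.5 scale factor are assumptions for illustration.

```python
import cv2
import numpy as np

def fast_blend_weight(img, band_edge_weight_fn, not_flat_weight_fn, scale: float = 0.5):
    """Speed-up path: run detection on a downsampled copy, then upsample the weight.

    band_edge_weight_fn / not_flat_weight_fn are caller-supplied callables that
    return float weight maps in [0, 1] for a given image (e.g. built from the
    sketches above); scale=0.5 is an arbitrary choice.
    """
    small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    blend_small = (1.0 - band_edge_weight_fn(small)) * not_flat_weight_fn(small)

    # Upsample the fused weight map back to the original resolution.
    h, w = img.shape[:2]
    return cv2.resize(blend_small, (w, h), interpolation=cv2.INTER_LINEAR)
```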
实施例8:
在上述各实施例的基础上,图6为本申请实施例提供的一种图像带状伪影去除装置的结构示意图,所述装置包括:
第一确定模块1101,设置为对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
第二确定模块1102,设置为针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;及
第三确定模块1103,设置为根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
进一步地,所述大小关系包括所述元素值不大于所述第一预设阈值、所述元素值大于所述第一预设阈值且小于所述第二预设阈值和所述元素值不小于所述第二预设阈值,所述第三确定模块,具体用于针对每个元素点的元素值对应的所述目标大小关系,若所述目标大小关系分别为该元素点的元素值不大于所述第一预设阈值或不小于所述第二预设阈值,分别确定所述滤波图像或所述待处理图像中该元素点对应位置的像素点的像素值,将所述像素值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值;若所述目标大小关系还为该元素点的元素值大于所述第一预设阈值且小于所述第二预设阈值,确定所述第二预设阈值和该元素点的元素值的第一差值,所述第二预设阈值和所 述第一预设阈值的第二差值,并确定所述第一差值与所述第二差值对应的比例值,确定所述差异矩阵中该元素点对应位置的元素值与所述比例值的乘积值,并确定所述乘积值与所述待处理图像中该元素点对应位置的像素点的像素值的和值,将所述和值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值。
进一步地,所述装置还包括:
检测模块,设置为检测确定所述待处理图像对应的带状伪影边缘图像和非平坦区域图像;
融合模块,设置为对所述带状伪影边缘图像和所述非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵;
第四确定模块,设置为确定所述融合权重矩阵与所述待处理图像中每个像素点的像素值组成的第一矩阵的第一乘积矩阵;根据第一预设数值与所述融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和所述目标图像中每个像素点的像素值组成的第二矩阵,确定所述第一差值矩阵与所述第二矩阵的第二乘积矩阵;及
更新模块,设置为确定所述第一乘积矩阵与所述第二乘积矩阵的和值矩阵,根据所述和值矩阵中每个元素点的元素值,对所述目标图像进行更新。
进一步地,所述融合模块,具体用于根据第一预设数值与所述带状伪影区域权重矩阵中每个元素点的元素值的差值,确定第二差值矩阵;将所述第二差值矩阵与所述非平坦区域权重矩阵的乘积矩阵确定为融合权重矩阵。
进一步地,所述更新模块,还设置为对所述带状伪影边缘图像进行滤波操作,确定去除边缘长度小于预设长度阈值的目标边缘后的图像为所述带状伪影边缘图像的更新后的图像。
进一步地,所述检测模块,具体设置为对所述待处理图像进行膨胀操作和腐蚀操作,确定膨胀图像和腐蚀图像;根据所述膨胀图像与所述腐蚀图像中每 个对应位置像素点的像素值的差值,确定所述膨胀图像与所述腐蚀图像的差异图像;对所述差异图像进行二值化滤波操作,确定所述差异图像的二值差异图像,对所述二值差异图像进行面积滤波,确定去除面积小于预设面积阈值的区域后的图像为非平坦区域图像。
进一步地,所述第四确定模块,还设置为针对所述目标图像中每个像素点,将该像素点的横坐标值和预设的误差传递矩阵的宽与第二预设数值的差值进行按位与操作,确定该像素点对应的目标横坐标值,将该像素点的纵坐标值和预设的误差传递矩阵的高与第二预设数值的差值进行按位与操作,确定该像素点对应的目标纵坐标值,根据该像素点对应的目标横坐标值和目标纵坐标值,从所述误差传递矩阵中确定出由所述目标横坐标值和所述目标纵坐标值对应的元素点的元素值,根据每个像素点对应的元素点的元素值和横坐标值及纵坐标值,确定出第三矩阵;
所述更新模块,还设置为确定所述第一差值矩阵与所述第三矩阵的第四乘积矩阵,并确定所述第四乘积矩阵与所述目标图像中每个像素点的像素值组成的第四矩阵的第三和值矩阵,根据所述第三和值矩阵中每个元素点的元素值,确定更新后的所述目标图像的对应位置的像素点的像素值。
实施例9:
图7为本申请实施例提供的一种电子设备结构示意图,在上述各实施例的基础上,本申请实施例中还提供了一种电子设备,包括处理器1201、通信接口1202、存储器1203和通信总线1204,其中,处理器1201,通信接口1202,存储器1203通过通信总线1204完成相互间的通信;
所述存储器1203中存储有计算机程序,当所述程序被所述处理器1201执行时,使得所述处理器1201执行如下步骤:
对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;
根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
进一步地,所述处理器1201具体设置为所述大小关系包括所述元素值不大于所述第一预设阈值、所述元素值大于所述第一预设阈值且小于所述第二预设阈值和所述元素值不小于所述第二预设阈值,所述根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值包括:
针对每个元素点的元素值对应的所述目标大小关系,若所述目标大小关系分别为该元素点的元素值不大于所述第一预设阈值或不小于所述第二预设阈值,分别确定所述滤波图像或所述待处理图像中该元素点对应位置的像素点的像素值,将所述像素值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值;若所述目标大小关系还为该元素点的元素值大于所述第一预设阈值且小于所述第二预设阈值,确定所述第二预设阈值和该元素点的元素值的第一差值,所述第二预设阈值和所述第一预设阈值的第二差值,并确定所述第一差值与所述第二差值对应的比例值,确定所述差异矩阵中该元素点对应位置的元素值与所述比例值的乘积值,并确定所述乘积值与所述待处理图像中该元素点对应位置的像素点的像素值的和值,将所述和值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值。
进一步地,所述处理器1201还设置为检测确定所述待处理图像对应的带状伪影边缘图像和非平坦区域图像;
对所述带状伪影边缘图像和所述非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵;
确定所述融合权重矩阵与所述待处理图像中每个像素点的像素值组成的第一矩阵的第一乘积矩阵;
根据第一预设数值与所述融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和所述目标图像中每个像素点的像素值组成的第二矩阵,确定所述第一差值矩阵与所述第二矩阵的第二乘积矩阵;
确定所述第一乘积矩阵与所述第二乘积矩阵的和值矩阵,根据所述和值矩阵中每个元素点的元素值,对所述目标图像进行更新。
进一步地,所述处理器1201具体设置为根据第一预设数值与所述带状伪影区域权重矩阵中每个元素点的元素值的差值,确定第二差值矩阵;
将所述第二差值矩阵与所述非平坦区域权重矩阵的乘积矩阵确定为融合权重矩阵。
进一步地,所述处理器1201还设置为所述检测确定所述待处理图像对应的带状伪影边缘图像之后,对所述带状伪影边缘图像进行滤波操作,确定去除边缘长度小于预设长度阈值的目标边缘后的图像为所述带状伪影边缘图像的更新后的图像。
进一步地,所述处理器1201具体设置为对所述待处理图像进行膨胀操作和腐蚀操作,确定膨胀图像和腐蚀图像;
根据所述膨胀图像与所述腐蚀图像中每个对应位置像素点的像素值的差值,确定所述膨胀图像与所述腐蚀图像的差异图像;
对所述差异图像进行二值化滤波操作,确定所述差异图像的二值差异图像,对所述二值差异图像进行面积滤波,确定去除面积小于预设面积阈值的区域后的图像为非平坦区域图像。
进一步地,所述处理器1201还设置为针对所述目标图像中每个像素点, 将该像素点的横坐标值和预设的误差传递矩阵的宽与第二预设数值的差值进行按位与操作,确定该像素点对应的目标横坐标值,将该像素点的纵坐标值和预设的误差传递矩阵的高与第二预设数值的差值进行按位与操作,确定该像素点对应的目标纵坐标值,根据该像素点对应的目标横坐标值和目标纵坐标值,从所述误差传递矩阵中确定出由所述目标横坐标值和所述目标纵坐标值对应的元素点的元素值,根据每个像素点对应的元素点的元素值和横坐标值及纵坐标值,确定出第三矩阵;
确定所述第一差值矩阵与所述第三矩阵的第四乘积矩阵,并确定所述第四乘积矩阵与所述目标图像中每个像素点的像素值组成的第四矩阵的第三和值矩阵,根据所述第三和值矩阵中每个元素点的元素值,确定更新后的所述目标图像的对应位置的像素点的像素值。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口1202设置为上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选地,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述处理器可以是通用处理器，包括中央处理器、网络处理器（Network Processor，NP）等；还可以是数字信号处理器（Digital Signal Processing，DSP）、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。
实施例10:
在上述各实施例的基础上,本申请实施例还提供了一种计算机可读存储介 质,其存储有计算机程序,所述计算机程序被处理器执行如下步骤:
对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;
根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
进一步地,所述大小关系包括所述元素值不大于所述第一预设阈值、所述元素值大于所述第一预设阈值且小于所述第二预设阈值和所述元素值不小于所述第二预设阈值,所述根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值包括:
针对每个元素点的元素值对应的所述目标大小关系,若所述目标大小关系分别为该元素点的元素值不大于所述第一预设阈值或不小于所述第二预设阈值,分别确定所述滤波图像或所述待处理图像中该元素点对应位置的像素点的像素值,将所述像素值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值;若所述目标大小关系还为该元素点的元素值大于所述第一预设阈值且小于所述第二预设阈值,确定所述第二预设阈值和该元素点的元素值的第一差值,所述第二预设阈值和所述第一预设阈值的第二差值,并确定所述第一差值与所述第二差值对应的比例值,确定所述差异矩阵中该元素点对应位置的元素值与所述比例值的乘积值,并确定所述乘积值与所述待处理图像中该元素点对应位置的像素点的像素值的和值,将所述和值确定为所述目标图像中该 元素点对应位置的目标像素点的目标像素值。
进一步地,所述方法还包括:
检测确定所述待处理图像对应的带状伪影边缘图像和非平坦区域图像;
对所述带状伪影边缘图像和所述非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵;
确定所述融合权重矩阵与所述待处理图像中每个像素点的像素值组成的第一矩阵的第一乘积矩阵;
根据第一预设数值与所述融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和所述目标图像中每个像素点的像素值组成的第二矩阵,确定所述第一差值矩阵与所述第二矩阵的第二乘积矩阵;
确定所述第一乘积矩阵与所述第二乘积矩阵的和值矩阵,根据所述和值矩阵中每个元素点的元素值,对所述目标图像进行更新。
进一步地,所述根据预先保存的融合公式,确定出融合权重矩阵包括:
根据第一预设数值与所述带状伪影区域权重矩阵中每个元素点的元素值的差值,确定第二差值矩阵;
将所述第二差值矩阵与所述非平坦区域权重矩阵的乘积矩阵确定为融合权重矩阵。
进一步地,所述检测确定所述待处理图像对应的带状伪影边缘图像之后,所述方法还包括:
对所述带状伪影边缘图像进行滤波操作,确定去除边缘长度小于预设长度阈值的目标边缘后的图像为所述带状伪影边缘图像的更新后的图像。
进一步地,所述确定所述待处理图像对应的非平坦区域图像包括:
对所述待处理图像进行膨胀操作和腐蚀操作,确定膨胀图像和腐蚀图像;
根据所述膨胀图像与所述腐蚀图像中每个对应位置像素点的像素值的差值,确定所述膨胀图像与所述腐蚀图像的差异图像;
对所述差异图像进行二值化滤波操作,确定所述差异图像的二值差异图像,对所述二值差异图像进行面积滤波,确定去除面积小于预设面积阈值的区域后的图像为非平坦区域图像。
进一步地,所述方法还包括:
针对所述目标图像中每个像素点,将该像素点的横坐标值和预设的误差传递矩阵的宽与第二预设数值的差值进行按位与操作,确定该像素点对应的目标横坐标值,将该像素点的纵坐标值和预设的误差传递矩阵的高与第二预设数值的差值进行按位与操作,确定该像素点对应的目标纵坐标值,根据该像素点对应的目标横坐标值和目标纵坐标值,从所述误差传递矩阵中确定出由所述目标横坐标值和所述目标纵坐标值对应的元素点的元素值,根据每个像素点对应的元素点的元素值和横坐标值及纵坐标值,确定出第三矩阵;
确定所述第一差值矩阵与所述第三矩阵的第四乘积矩阵,并确定所述第四乘积矩阵与所述目标图像中每个像素点的像素值组成的第四矩阵的第三和值矩阵,根据所述第三和值矩阵中每个元素点的元素值,确定更新后的所述目标图像的对应位置的像素点的像素值。

Claims (10)

  1. 一种图像带状伪影去除方法,其中,所述方法包括如下步骤:
    对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
    针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;及
    根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
  2. 根据权利要求1所述的方法,其中,所述大小关系包括所述元素值不大于所述第一预设阈值、所述元素值大于所述第一预设阈值且小于所述第二预设阈值和所述元素值不小于所述第二预设阈值,所述根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值包括:
    针对每个元素点的元素值对应的所述目标大小关系,若所述目标大小关系分别为该元素点的元素值不大于所述第一预设阈值或不小于所述第二预设阈值,分别确定所述滤波图像或所述待处理图像中该元素点对应位置的像素点的像素值,将所述像素值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值;若所述目标大小关系还为该元素点的元素值大于所述第一预设阈值且小于所述第二预设阈值,确定所述第二预设阈值和该元素点的元素值的第一差值,所述第二预设阈值和所述第一预设阈值的第二差值,并确定所述第一差值与所述第二差值对应的比例值,确定所述差异矩阵中该元素点对应位置 的元素值与所述比例值的乘积值,并确定所述乘积值与所述待处理图像中该元素点对应位置的像素点的像素值的和值,将所述和值确定为所述目标图像中该元素点对应位置的目标像素点的目标像素值。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    检测确定所述待处理图像对应的带状伪影边缘图像和非平坦区域图像;
    对所述带状伪影边缘图像和所述非平坦区域图像进行保边滤波操作,确定带状伪影区域权重矩阵和非平坦区域权重矩阵,并根据预先保存的融合公式,确定融合权重矩阵;
    确定所述融合权重矩阵与所述待处理图像中每个像素点的像素值组成的第一矩阵的第一乘积矩阵;
    根据第一预设数值与所述融合权重矩阵中每个元素点的元素值的差值确定的第一差值矩阵和所述目标图像中每个像素点的像素值组成的第二矩阵,确定所述第一差值矩阵与所述第二矩阵的第二乘积矩阵;
    确定所述第一乘积矩阵与所述第二乘积矩阵的和值矩阵,根据所述和值矩阵中每个元素点的元素值,对所述目标图像进行更新。
  4. 根据权利要求3所述的方法,其中,所述根据预先保存的融合公式,确定出融合权重矩阵包括:
    根据第一预设数值与所述带状伪影区域权重矩阵中每个元素点的元素值的差值,确定第二差值矩阵;
    将所述第二差值矩阵与所述非平坦区域权重矩阵的乘积矩阵确定为融合权重矩阵。
  5. 根据权利要求3所述的方法,其中,所述检测确定所述待处理图像对应的带状伪影边缘图像之后,所述方法还包括:
    对所述带状伪影边缘图像进行滤波操作,确定去除边缘长度小于预设长度阈值的目标边缘后的图像为所述带状伪影边缘图像的更新后的图像。
  6. 根据权利要求3所述的方法,其中,所述确定所述待处理图像对应的 非平坦区域图像包括:
    对所述待处理图像进行膨胀操作和腐蚀操作,确定膨胀图像和腐蚀图像;
    根据所述膨胀图像与所述腐蚀图像中每个对应位置像素点的像素值的差值,确定所述膨胀图像与所述腐蚀图像的差异图像;
    对所述差异图像进行二值化滤波操作,确定所述差异图像的二值差异图像,对所述二值差异图像进行面积滤波,确定去除面积小于预设面积阈值的区域后的图像为非平坦区域图像。
  7. 根据权利要求3所述的方法,其中,所述方法还包括:
    针对所述目标图像中每个像素点,将该像素点的横坐标值和预设的误差传递矩阵的宽与第二预设数值的差值进行按位与操作,确定该像素点对应的目标横坐标值,将该像素点的纵坐标值和预设的误差传递矩阵的高与第二预设数值的差值进行按位与操作,确定该像素点对应的目标纵坐标值,根据该像素点对应的目标横坐标值和目标纵坐标值,从所述误差传递矩阵中确定出由所述目标横坐标值和所述目标纵坐标值对应的元素点的元素值,根据每个像素点对应的元素点的元素值和横坐标值及纵坐标值,确定出第三矩阵;
    确定所述第一差值矩阵与所述第三矩阵的第四乘积矩阵,并确定所述第四乘积矩阵与所述目标图像中每个像素点的像素值组成的第四矩阵的第三和值矩阵,根据所述第三和值矩阵中每个元素点的元素值,确定更新后的所述目标图像的对应位置的像素点的像素值。
  8. 一种图像带状伪影去除装置,其中,所述装置包括:
    第一确定模块,设置为对待处理图像进行滤波操作,确定所述待处理图像去除带状伪影后的滤波图像,根据所述滤波图像与所述待处理图像中每个对应位置像素点的像素值的差值,确定所述滤波图像与所述待处理图像的差异矩阵和差异绝对值矩阵;
    第二确定模块,设置为针对所述差异绝对值矩阵中的每个元素点,根据该元素点的元素值、第一预设阈值和第二预设阈值,确定所述元素值与所述第一 预设阈值和所述第二预设阈值的目标大小关系,其中所述第二预设阈值大于所述第一预设阈值;及
    第三确定模块,设置为根据每个元素点的元素值对应的所述目标大小关系、以及预先保存的每种大小关系对应的像素值的取值函数,确定边缘纹理保留后的目标图像中每个元素点对应位置的目标像素点的目标像素值,得到边缘纹理保留后的所述目标图像。
  9. 一种电子设备,其中,包括:处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;
    所述存储器中存储有计算机程序,当所述程序被所述处理器执行时,使得所述处理器执行权利要求1-7任一项所述方法。
  10. 一种计算机可读存储介质,其中,其存储有可由处理器执行的计算机程序,当所述程序在所述处理器上运行时,使得所述处理器执行权利要求1-7任一项所述方法。
PCT/CN2022/094771 2021-06-08 2022-05-24 一种图像带状伪影去除方法、装置、设备和介质 WO2022257759A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110647537.7 2021-06-08
CN202110647537.7A CN113362246A (zh) 2021-06-08 2021-06-08 一种图像带状伪影去除方法、装置、设备和介质

Publications (1)

Publication Number Publication Date
WO2022257759A1 true WO2022257759A1 (zh) 2022-12-15

Family

ID=77533785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094771 WO2022257759A1 (zh) 2021-06-08 2022-05-24 一种图像带状伪影去除方法、装置、设备和介质

Country Status (2)

Country Link
CN (1) CN113362246A (zh)
WO (1) WO2022257759A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362246A (zh) * 2021-06-08 2021-09-07 百果园技术(新加坡)有限公司 一种图像带状伪影去除方法、装置、设备和介质
CN116452465B (zh) * 2023-06-13 2023-08-11 江苏游隼微电子有限公司 一种消除jpeg图像块伪影的方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0159559B1 (ko) * 1994-10-31 1999-01-15 배순훈 디지탈 화상의 적응적인 후처리 방법
US9092856B2 (en) * 2013-10-31 2015-07-28 Stmicroelectronics Asia Pacific Pte. Ltd. Recursive de-banding filter for digital images

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020150306A1 (en) * 2001-04-11 2002-10-17 Baron John M. Method and apparatus for the removal of flash artifacts
CN101573980A (zh) * 2006-12-28 2009-11-04 汤姆逊许可证公司 检测编码图像和视频中的块伪影
CN101578868A (zh) * 2006-12-28 2009-11-11 汤姆逊许可证公司 数字视频内容中的条带状伪影检测
US20160125579A1 (en) * 2014-11-05 2016-05-05 Dolby Laboratories Licensing Corporation Systems and Methods for Rectifying Image Artifacts
US20170223383A1 (en) * 2016-01-28 2017-08-03 Interra Systems, Inc. Methods and systems for detection of artifacts in a video after error concealment
CN106780649A (zh) * 2016-12-16 2017-05-31 上海联影医疗科技有限公司 图像的伪影去除方法和装置
CN111402172A (zh) * 2020-03-24 2020-07-10 湖南国科微电子股份有限公司 一种图像降噪方法、***、设备及计算机可读存储介质
CN113362246A (zh) * 2021-06-08 2021-09-07 百果园技术(新加坡)有限公司 一种图像带状伪影去除方法、装置、设备和介质

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838001A (zh) * 2021-08-24 2021-12-24 内蒙古电力科学研究院 基于核密度估计的超声波全聚焦图像缺陷处理方法及装置
CN113838001B (zh) * 2021-08-24 2024-02-13 内蒙古电力科学研究院 基于核密度估计的超声波全聚焦图像缺陷处理方法及装置
CN115797343A (zh) * 2023-02-06 2023-03-14 山东大佳机械有限公司 基于图像数据的畜禽养殖环境视频监控方法
CN115797343B (zh) * 2023-02-06 2023-04-21 山东大佳机械有限公司 基于图像数据的畜禽养殖环境视频监控方法
CN116485819A (zh) * 2023-06-21 2023-07-25 青岛大学附属医院 一种耳鼻喉检查图像分割方法及***
CN116485819B (zh) * 2023-06-21 2023-09-01 青岛大学附属医院 一种耳鼻喉检查图像分割方法及***
CN116523924A (zh) * 2023-07-05 2023-08-01 吉林大学第一医院 一种医学实验用数据处理方法及***
CN116523924B (zh) * 2023-07-05 2023-08-29 吉林大学第一医院 一种医学实验用数据处理方法及***
CN117437279A (zh) * 2023-12-12 2024-01-23 山东艺达环保科技有限公司 一种包装盒表面平整度检测方法及***
CN117437279B (zh) * 2023-12-12 2024-03-22 山东艺达环保科技有限公司 一种包装盒表面平整度检测方法及***
CN117911716A (zh) * 2024-03-19 2024-04-19 天津医科大学总医院 基于机器视觉的关节炎ct影像特征提取方法
CN117911716B (zh) * 2024-03-19 2024-06-21 天津医科大学总医院 基于机器视觉的关节炎ct影像特征提取方法

Also Published As

Publication number Publication date
CN113362246A (zh) 2021-09-07

Similar Documents

Publication Publication Date Title
WO2022257759A1 (zh) 一种图像带状伪影去除方法、装置、设备和介质
CN108961186B (zh) 一种基于深度学习的老旧影片修复重制方法
CN108694705B (zh) 一种多帧图像配准与融合去噪的方法
US9501818B2 (en) Local multiscale tone-mapping operator
CN108495135B (zh) 一种屏幕内容视频编码的快速编码方法
US20190294931A1 (en) Systems and Methods for Generative Ensemble Networks
WO2017084258A1 (zh) 编码过程中的实时视频降噪方法、终端和非易失性计算机可读存储介质
CA2674149A1 (en) Banding artifact detection in digital video content
US20120257679A1 (en) System and method for encoding and decoding video data
CN115606179A (zh) 用于使用学习的下采样特征进行图像和视频编码的基于学习的下采样的cnn滤波器
JP2018107797A (ja) 画像データのエンコード及びデコード
WO2023226584A1 (zh) 图像降噪、滤波数据处理方法、装置和计算机设备
CN110717864B (zh) 一种图像增强方法、装置、终端设备及计算机可读介质
KR102315471B1 (ko) 영상 처리 방법과 장치
WO2019037471A1 (zh) 视频处理方法、视频处理装置以及终端
Wang et al. Semantic-aware video compression for automotive cameras
CN110913230A (zh) 一种视频帧预测方法、装置及终端设备
CN110415175B (zh) 一种快速去除平坦区域编码马赛克的方法
CN111160340B (zh) 一种运动目标检测方法、装置、存储介质及终端设备
CN113658050A (zh) 一种图像的去噪方法、去噪装置、移动终端及存储介质
Guan et al. NODE: Extreme low light raw image denoising using a noise decomposition network
CN112581365A (zh) 一种跨尺度自适应信息映射成像方法及装置、介质
WO2015128302A1 (en) Method and apparatus for filtering and analyzing a noise in an image
CN113395475A (zh) 数据处理方法、装置、电子设备及存储设备
CN111080550B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22819357

Country of ref document: EP

Kind code of ref document: A1