CN118195902A - Super-resolution image processing method and processing system based on interpolation algorithm - Google Patents


Info

Publication number
CN118195902A
Authority
CN
China
Prior art keywords
resolution image
super
image
gradient
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410612941.4A
Other languages
Chinese (zh)
Other versions
CN118195902B (en)
Inventor
姚平
谢超平
陆俊
郭竹修
徐巍威
赵丹萍
范星伟
赵晶
程峰
冯获
陈明昌
郑有凌
张宇
姜春桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Original Assignee
Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd filed Critical Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Priority to CN202410612941.4A
Publication of CN118195902A
Application granted
Publication of CN118195902B
Active legal status (current)
Anticipated expiration


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a super-resolution image processing method and processing system based on an interpolation algorithm, belonging to the technical field of image processing. The method comprises: obtaining a low-resolution image to be processed and its corresponding original high-resolution image, and obtaining a super-resolution image of the low-resolution image to be processed through an interpolation algorithm; dividing image regions based on gradient information of the super-resolution image; correspondingly adjusting the weights of the linear impact filter according to the image region division result, and filtering the corresponding image regions by utilizing the adjusted linear impact filter; evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on evaluation indexes to obtain a quality evaluation result; and judging the quality evaluation result based on the quality standard condition, and outputting the super-resolution image that reaches the quality standard condition.

Description

Super-resolution image processing method and processing system based on interpolation algorithm
Technical Field
The invention relates to the technical field of image processing, in particular to a super-resolution image processing method and a super-resolution image processing system based on an interpolation algorithm.
Background
With the rapid development of digital technology, image processing technology is increasingly widely applied in the fields of medical diagnosis, satellite remote sensing, security monitoring, entertainment media and the like. In these applications, the resolution of the image is critical to the acquisition and identification of the information. However, due to factors such as device limitations, transmission bandwidth, storage cost, etc., the resolution of the actually acquired image is often low, and it is difficult to meet the application requirements. Therefore, research on efficient super-resolution image processing technology has important practical application value.
Among existing super-resolution image processing methods, interpolation is a simple and common approach: it estimates the values of unknown pixels from known pixel values, thereby increasing the resolution of the image. However, while an interpolation algorithm raises the resolution, it often introduces problems such as blurred edges and loss of detail, which degrade the image quality.
Therefore, it is necessary to provide a super-resolution image processing method based on an interpolation algorithm and a processing system thereof to solve the above technical problems.
Disclosure of Invention
In order to solve the technical problems, the invention provides a super-resolution image processing method and a processing system thereof based on an interpolation algorithm.
The invention provides a super-resolution image processing method based on an interpolation algorithm, which comprises the following steps:
s1: acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image;
s2: dividing an image region based on gradient information of the super-resolution image, and dividing the image region into an edge region and a smooth region, wherein the gradient information comprises a gradient amplitude value and a gradient direction;
S3: correspondingly adjusting the weight of the linear impact filter according to the image region division result, and carrying out filtering processing on the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image;
S4: evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result;
S5: and judging the quality evaluation result based on a preset quality standard condition, and outputting a super-resolution image reaching the quality standard condition.
Preferably, the step S1 specifically includes the following steps:
S101: acquiring a low-resolution image to be processed and an original high-resolution image corresponding to the low-resolution image to be processed, and preprocessing the low-resolution image to be processed, wherein the preprocessing comprises contrast enhancement and color correction;
S102: and selectively carrying out interpolation calculation on the preprocessed low-resolution image by using an interpolation algorithm, and combining pixel values obtained by the interpolation calculation into a super-resolution image, wherein the interpolation algorithm comprises nearest neighbor interpolation, bilinear interpolation and bicubic interpolation.
Preferably, in step S102, an interpolation algorithm is selectively utilized, specifically, one or more interpolation algorithms are selected from nearest neighbor interpolation, bilinear interpolation, bicubic interpolation according to the image features of the low-resolution image to be processed, so as to perform interpolation calculation.
Preferably, the step S2 specifically includes the following steps:
s201: performing gradient calculation on the super-resolution image by using an edge detection operator to obtain gradient amplitude and direction of each pixel point;
S202: comparing the gradient amplitude of each pixel point with a preset threshold value, and primarily dividing an image area into an edge area and a smooth area;
S203: and refining the preliminarily divided edge region and the smooth region to obtain a final edge region and a smooth region, wherein the refining comprises removing misclassified pixel points.
Preferably, the step S3 specifically includes the following steps:
S301: based on the divided edge area and the smooth area, refining the local gradient amplitude and the local gradient direction in each area;
s302: calculating the gradient adjustment weight of the linear impact filter according to the local gradient amplitude values in each region, wherein the calculation formula of the gradient adjustment weight is as follows:
Wherein, α is a scale factor, k is an exponential factor, G is the local gradient amplitude, and W_g is the gradient adjustment weight;
S303: calculating the direction adjustment weight of the linear impact filter according to the local gradient direction in each region, wherein the calculation formula of the direction adjustment weight is as follows:
Wherein, θ_f is the direction of the linear impulse filter, θ_g is the local gradient direction, and W_d is the direction adjustment weight;
S304: combining the gradient adjustment weight W_g and the direction adjustment weight W_d to calculate the adjustment weight W;
S305: adjusting the linear impact filter based on the adjustment weight W, and performing differentiated filtering processing on the edge area and the smooth area by using the adjusted linear impact filter.
Preferably, the step S4 specifically includes the following steps:
s401: calculating a local gradient consistency score LG between the super-resolution image and the corresponding original high-resolution image;
s402: calculating global color distribution similarity GC between the super-resolution image and the corresponding original high-resolution image;
s403: obtaining a quality evaluation result according to a quality score calculation formula, wherein the quality score calculation formula is as follows:
Wherein, λ is a weight factor, and Q is the composite quality score obtained as a weighted combination of LG and GC.
Preferably, the step S5 specifically includes the following steps:
s501: setting a quality standard condition, wherein the quality standard condition is a comprehensive quality score threshold;
S502: judging the magnitude of the comprehensive quality score and the comprehensive quality score threshold of the super-resolution image,
If the comprehensive quality score of the super-resolution image is greater than or equal to the comprehensive quality score threshold, directly outputting the super-resolution image,
If the integrated quality score of the super-resolution image is smaller than the integrated quality score threshold, executing the next step S503;
S503: identifying parameters to be adjusted according to the calculated local gradient consistency score LG and the global color distribution similarity GC, searching optimal parameters to be adjusted based on a gradient optimization algorithm, and re-executing the steps S1 to S4 according to the optimal parameters to be adjusted until the quality evaluation result of the super-resolution image reaches the quality standard condition.
The invention further provides a super-resolution image processing system based on an interpolation algorithm, which is applied to the above super-resolution image processing method based on an interpolation algorithm, the processing system comprising:
the super-resolution image acquisition module is used for acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image;
The region dividing module is used for dividing the image region based on gradient information of the super-resolution image and dividing the image region into an edge region and a smooth region, wherein the gradient information comprises gradient amplitude and gradient direction;
the filtering module is used for correspondingly adjusting the weight of the linear impact filter according to the image region division result, and filtering the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image;
The quality evaluation module is used for evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result;
and the output module is used for judging the quality evaluation result based on a preset quality standard condition and outputting a super-resolution image reaching the quality standard condition.
Compared with the related art, the super-resolution image processing method and the processing system based on the interpolation algorithm have the following beneficial effects:
The invention carries out super-resolution processing on the low-resolution image through an interpolation algorithm, then carries out region division by utilizing gradient information of the image, adjusts weights of the linear impact filters for different regions to carry out filtering processing, finally evaluates the filtered super-resolution image through an evaluation index and outputs the image reaching a preset quality standard.
Drawings
FIG. 1 is a flow chart of a super-resolution image processing method based on an interpolation algorithm;
Fig. 2 is a flowchart of step S1 of a super-resolution image processing method based on an interpolation algorithm provided by the present invention;
FIG. 3 is a flowchart of step S2 of a super-resolution image processing method based on an interpolation algorithm provided by the invention;
FIG. 4 is a flowchart of step S3 of a super-resolution image processing method based on an interpolation algorithm provided by the invention;
FIG. 5 is a flowchart of step S4 of a super-resolution image processing method based on an interpolation algorithm provided by the invention;
FIG. 6 is a flowchart of step S5 of a super-resolution image processing method based on an interpolation algorithm provided by the invention;
Fig. 7 is a block diagram of a super-resolution image processing system based on an interpolation algorithm according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings. Furthermore, embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figures; a process may correspond to a method, a function, a procedure, a subroutine, and the like.
Example 1
The invention provides a super-resolution image processing method based on an interpolation algorithm, which is shown by referring to fig. 1, and comprises the following steps:
s1: obtaining a low-resolution image to be processed and an original high-resolution image corresponding to the low-resolution image to be processed, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image.
In this embodiment, the low resolution image is up-sampled by interpolation method to primarily increase its resolution.
In particular operations, interpolation algorithms are used to magnify the low resolution image in an attempt to recover details of the high resolution image. Specific interpolation algorithms include, but are not limited to, nearest neighbor, bilinear, bicubic, etc., predicting new pixel values from surrounding pixel values, thereby increasing the pixel density of the image. Illustratively, bicubic interpolation estimates the color value of the new pixel point by calculating a weighted average of sixteen surrounding pixels, which better preserves the smooth transition of the image.
Exemplary description: assume a 32x32-pixel low-resolution image is to be enlarged to 128x128 pixels by the interpolation algorithm. After bicubic interpolation is applied, the value of each new pixel is obtained by a weighted calculation over the color values of the surrounding original pixels, generating a visually finer super-resolution image.
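Exemplary code sketch (illustrative only; the 4x scale factor and the file names are assumed): in Python with OpenCV, bicubic interpolation weights a 4x4 neighborhood of original pixels for each new pixel value.
    import cv2

    # Load the low-resolution image to be processed (file name is a placeholder).
    lr_image = cv2.imread("low_res_32x32.png")

    # Bicubic interpolation: each new pixel value is a weighted combination of a
    # 4x4 neighborhood of original pixels; here the image is upscaled 4x,
    # e.g. 32x32 -> 128x128.
    sr_image = cv2.resize(lr_image, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

    cv2.imwrite("super_res_128x128.png", sr_image)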
S2: and dividing an image area based on gradient information of the super-resolution image, and dividing the image area into an edge area and a smooth area, wherein the gradient information comprises gradient amplitude and gradient direction.
In this embodiment, different regions of the image (such as edge regions and smooth regions) have different sensitivity to filtering, so they need to be distinguished by means of gradient information and treated separately.
Specifically, the gradient magnitude and direction of the super-resolution image are calculated using an edge detection algorithm; a region with high gradient magnitude is regarded as an edge region, and a region with low gradient magnitude is regarded as a smooth region. By setting a threshold value, which can be chosen from experience or from previous studies, it becomes clear which pixels belong to edges and which belong to the smooth background. This step ensures that the subsequent processing can be optimized in a targeted manner, avoiding detail loss caused by excessive smoothing in the edge regions while effectively reducing noise in the smooth regions.
S3: correspondingly adjusting the weight of the linear impact filter according to the image region division result, and filtering the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image.
In this embodiment, by adjusting the weight of the linear impulse filter, the image quality can be optimized by applying different degrees of smoothing to different areas while keeping the edges clear, where the weight refers to the coefficient in the filter kernel.
Specifically, for edge regions, filter weights are reduced to preserve edge details; and for the smooth area, the weight is properly increased to remove noise, and then the customized filters are applied to process, so that the edge of the super-resolution image is kept sharp, the smooth area is cleaner, and the overall image quality is further improved.
S4: and evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result.
In this embodiment, to ensure that the processed image approaches or even reaches the quality standard of the original high-resolution image, the quality difference between the filtered super-resolution image and the corresponding original high-resolution image may be evaluated by an evaluation index.
Specifically, an evaluation index is determined, and the image quality is quantitatively evaluated by calculating the index values between the filtered super-resolution image and the original high-resolution image. This provides quantitative quality feedback and facilitates subsequent optimization or confirmation that the processing flow has succeeded.
S5: and judging the quality evaluation result based on a preset quality standard condition, and outputting a super-resolution image reaching the quality standard condition.
In the embodiment, only the super-resolution image meeting the quality standard condition is accepted, and the quality of the output image is controllable.
Specifically, the quality evaluation result is judged against the threshold. If the processing is considered successful, the super-resolution image is output directly; otherwise, the parameters are readjusted and the processing is iterated until the quality standard condition is reached. This ultimately ensures the reliability of the output image quality, prevents images that do not meet the requirements from flowing into subsequent application links, and improves the stability of the whole system and user satisfaction.
Specifically, referring to fig. 2, step S1 specifically includes the following steps:
S101: and acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and preprocessing the low-resolution image to be processed, wherein the preprocessing comprises contrast enhancement and color correction.
In this embodiment, the low resolution image is preprocessed before the super resolution processing, where the preprocessing includes contrast enhancement and color correction, where the contrast enhancement helps to improve the visual effect of the image, so that the bright and dark portions of the image are more distinct, and the visibility of the visual details is improved. The color correction is to ensure that the color of the image appears more accurately, approaching the color balance of the real scene or the original high resolution image. The combination of the two steps provides a basis with better quality for the subsequent super-resolution processing, and is beneficial to improving the quality of the final output image.
Specifically, by adjusting the histogram distribution of the image, the pixel value range is expanded, so that darker areas become darker and lighter areas become brighter, thereby enhancing the overall contrast of the image.
Color correction is performed using color space conversion, applying gain adjustments to the channels so that the image approaches a natural or desired color balance.
Preprocessing improves the visual clarity and color realism of the image and provides better input for the interpolation algorithm, which helps recover details and textures during super-resolution processing. It also reduces the influence of noise and uneven illumination, so that the subsequent super-resolution algorithm can concentrate on enhancing structure and detail rather than correcting basic image quality problems.
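Exemplary code sketch of such a preprocessing stage (illustrative assumptions: histogram equalization of the luminance channel for contrast enhancement and a simple gray-world gain adjustment for color correction; the method itself does not mandate these specific operators):
    import cv2
    import numpy as np

    def preprocess(lr_bgr: np.ndarray) -> np.ndarray:
        # Contrast enhancement: equalize the luminance histogram in YCrCb space
        # so dark and bright regions are separated more clearly.
        ycrcb = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
        enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

        # Color correction: gray-world per-channel gains pull the image towards
        # a neutral color balance.
        means = enhanced.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / np.maximum(means, 1e-6)
        corrected = enhanced.astype(np.float32) * gains
        return np.clip(corrected, 0, 255).astype(np.uint8)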
S102: and selectively carrying out interpolation calculation on the preprocessed low-resolution image by using an interpolation algorithm, and combining pixel values obtained by the interpolation calculation into a super-resolution image, wherein the interpolation algorithm comprises nearest neighbor interpolation, bilinear interpolation and bicubic interpolation.
In this embodiment, after preprocessing, it is important to select the most appropriate interpolation algorithm according to the specific characteristics of the image. The recovery capacity of different interpolation methods on image details is different, and the algorithm which is most suitable for the current image characteristic is selected to furthest improve the effect of super-resolution processing.
Specifically, main features in the image, including content complexity, texture complexity and edge density, are identified by analyzing the preprocessed image, and then an interpolation algorithm is selected based on these features.
For the content complexity, if the content complexity of the low-resolution image to be processed is low, nearest neighbor interpolation or bilinear interpolation is used, so that acceptable quality can be provided for a simple image while the processing speed is kept; if the content complexity of the low-resolution image to be processed is high, bicubic interpolation is used, so that the details of the image can be better reserved and restored.
For edge density, if the edge density of the low resolution image to be processed is high, i.e. contains many sharp edges and contours, an interpolation method should be selected that can maintain the edge sharpness, such as bicubic interpolation.
For texture complexity, if the texture complexity of the low-resolution image to be processed is high, an interpolation method capable of smoothly transiting and reducing noise, such as bilinear interpolation or bicubic interpolation, should be selected, so that texture details can be better recovered, and excessive noise or distortion is avoided being introduced in the interpolation process.
In some cases, different interpolation methods may be selected according to different regions of the low resolution image to be processed, for example bicubic interpolation may be used in regions of high edge density to maintain edge sharpness, bilinear interpolation may be used in regions of high texture complexity to reduce noise and distortion, and such a mixed use strategy may balance the requirements between processing speed and quality.
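Exemplary code sketch of such feature-driven selection (the Canny-based edge-density estimate and the threshold value are illustrative assumptions, not values fixed by the method):
    import cv2
    import numpy as np

    def upscale_adaptive(lr_bgr: np.ndarray, scale: int = 4,
                         edge_density_thr: float = 0.10) -> np.ndarray:
        # Rough feature analysis: edge density from a Canny edge map.
        gray = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edge_density = float((edges > 0).mean())

        # High edge density -> bicubic to keep edges sharp; otherwise bilinear
        # is faster and adequate for simple or smooth content.
        interp = cv2.INTER_CUBIC if edge_density > edge_density_thr else cv2.INTER_LINEAR
        return cv2.resize(lr_bgr, None, fx=scale, fy=scale, interpolation=interp)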
Specifically, referring to fig. 3, step S2 specifically includes the following steps:
s201: and carrying out gradient calculation on the super-resolution image by utilizing an edge detection operator to obtain the gradient amplitude and the gradient direction of each pixel point.
In this embodiment, in super-resolution image processing, accurately identifying edges and textures in an image is critical to subsequent region division, where gradient information reflects the intensity and direction of local changes in the image, which is the basis for distinguishing image details from smooth regions. By calculating the gradient magnitude and direction, necessary guidance information can be provided for subsequent processing, ensuring that the filtering and enhancement operations take the most appropriate strategy where they are correct.
Specifically, an edge detection operator is applied to analyze the super-resolution image pixel by pixel. The operator calculates the differences between pixels in the horizontal and vertical directions, yielding the gradient amplitude (the intensity of change) and the gradient direction (the direction of change) of each pixel. Locations with large gradient amplitude usually correspond to edges or textures, while locations with small gradient amplitude correspond to smoother areas. The edge detection operator includes, but is not limited to, Sobel, Laplacian or Canny. The resulting gradient map assigns a structural attribute to each pixel of the image, providing an accurate basis for the subsequent image region division, so that important image details are protected during processing and quality loss caused by over-processing is avoided.
S202: and comparing the gradient amplitude of each pixel point with a preset threshold value, and primarily dividing the image area into an edge area and a smooth area.
In this embodiment, by setting the threshold value, gradient information of a complex image can be simplified into a binary classification that is easy to process—an edge or a smooth region, which is a precondition that the subsequent filtering and enhancement operations can be performed pertinently.
Specifically, according to the gradient amplitude calculation result, the gradient amplitude of each pixel is compared with a preset threshold value; if the gradient amplitude exceeds the threshold, the pixel is marked as part of an edge region, otherwise it is classified into a smooth region. This process quickly segments the image into two broad categories and facilitates subsequent refinement; the threshold may be chosen empirically or from previous studies.
The preliminary division lays a foundation for fine processing, effectively isolates edge details needing important protection and regions which can be moderately smooth, reduces unnecessary processing complexity and improves processing efficiency.
S203: and refining the preliminarily divided edge region and the smooth region to obtain a final edge region and a smooth region, wherein the refining comprises removing misclassified pixel points.
In this embodiment, the preliminary division may be misclassified, especially in a region with rich image details and complex gradient changes. The refinement step aims to correct these errors, ensuring that each pixel is correctly classified, thus improving the quality of the final image processing.
In particular, the refinement method employed includes, but is not limited to, morphological operations (e.g., dilation and erosion) to correct boundaries and to identify and correct misclassified pixels. For example, smooth-region pixels erroneously marked as edges can be reclassified by analyzing the characteristics of the surrounding pixels. The refinement improves the accuracy of the region division, so that the filtering and enhancement operations act more precisely on each part of the image, protecting the sharpness of edges while keeping smooth regions clean, and improving the overall quality and naturalness of the super-resolution image.
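Exemplary code sketch covering steps S201 to S203 (the Sobel operator, the fixed amplitude threshold and the morphological opening/closing used for refinement are illustrative choices; the method leaves the operator and threshold open):
    import cv2
    import numpy as np

    def divide_regions(sr_bgr: np.ndarray, grad_thr: float = 40.0):
        gray = cv2.cvtColor(sr_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

        # S201: Sobel gradients give per-pixel amplitude and direction.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        amplitude = np.hypot(gx, gy)
        direction = np.arctan2(gy, gx)

        # S202: threshold the amplitude for a preliminary edge / smooth split.
        edge_mask = (amplitude > grad_thr).astype(np.uint8)

        # S203: refine by removing isolated misclassified pixels with morphology.
        kernel = np.ones((3, 3), np.uint8)
        edge_mask = cv2.morphologyEx(edge_mask, cv2.MORPH_OPEN, kernel)
        edge_mask = cv2.morphologyEx(edge_mask, cv2.MORPH_CLOSE, kernel)

        return edge_mask.astype(bool), amplitude, direction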
Specifically, referring to fig. 4, step S3 specifically includes the following steps:
s301: based on the divided edge regions and the smooth regions, the local gradient amplitude and the local gradient direction in each region are refined.
In the present embodiment, in step S2, rough region division has been performed based on global gradient information. However, complex structures and details may exist within the image, and a single gradient threshold may not accurately capture all details, so by refining local gradient information, subtle changes in the image are more accurately identified so that the image can be processed more finely for subsequent filtering, protecting local features.
Specifically, in each divided region, a finer local window sliding analysis method is adopted to calculate the local gradient amplitude and direction around each pixel, so that smaller-scale edge and texture changes in the region can be captured. For example, for edge regions, this step may identify small variations inside the edge, while in smooth regions it helps identify potential fine textures or noise distributions.
The refinement process enables the subsequent filtering operation to be more accurately matched with the characteristics of each part of the image, avoids over-processing or under-processing, and improves the precision and fidelity of image processing.
S302: calculating the gradient adjustment weight of the linear impact filter according to the local gradient amplitude values in each region, wherein the calculation formula of the gradient adjustment weight is as follows:
Wherein, α is a scale factor, k is an exponential factor, G is the local gradient amplitude, and W_g is the gradient adjustment weight.
In this embodiment, the sensitivity of the regions with different local gradient magnitudes to filtering is different, and the filtering strength can be dynamically adjusted according to the actual image features by calculating the gradient adjustment weight, so as to adapt to the protection requirement of local details.
Specifically, for a high gradient region (namely an edge region), the damage of filtering to edge details is reduced by reducing the filtering weight; on the contrary, the weight is properly increased in a low gradient region (smooth region), the denoising effect is enhanced, so that the self-adaptive adjustment of the filtering strength is realized, the effects of edge protection and noise suppression are enhanced, and the maintenance of image details and the overall quality are improved.
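Exemplary code sketch (illustrative only; the exponential decay form and the default values of α and k are assumptions chosen to reproduce the described behavior of small weights at edges and larger weights in smooth regions):
    import numpy as np

    def gradient_adjust_weight(amplitude: np.ndarray,
                               alpha: float = 1.0, k: float = 0.02) -> np.ndarray:
        # Large local gradient (edge) -> small weight, so filtering barely
        # touches edge details; small gradient (smooth area) -> weight close
        # to alpha, allowing stronger smoothing.
        return alpha * np.exp(-k * amplitude)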
S303: calculating the direction adjustment weight of the linear impact filter according to the local gradient direction in each region, wherein the calculation formula of the direction adjustment weight is as follows:
Wherein, θ_f is the direction of the linear impulse filter, θ_g is the local gradient direction, and W_d is the direction adjustment weight.
In this embodiment, since the image filtering is affected not only by the gradient magnitude, but also by the gradient direction, especially when processing edges and textures, the filtering along the edge direction should be more careful to avoid edge blurring.
Specifically, the direction adjustment weight is calculated from the included angle between the filter direction and the local gradient direction, ensuring that the filtering operation is consistent with the natural texture direction of the image. This reduces unnecessary texture damage, makes the filtering process follow the natural structure of the image, avoids image or texture distortion caused by an improper filtering direction, and improves the realism and naturalness of the image.
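Exemplary code sketch (illustrative only; the cosine-of-included-angle form and the scalar filter direction are assumed stand-ins for the direction adjustment weight):
    import numpy as np

    def direction_adjust_weight(theta_f: float, theta_g: np.ndarray) -> np.ndarray:
        # The weight depends only on the included angle between the filter
        # direction theta_f and the local gradient direction theta_g; here it
        # is largest when the two directions are aligned.
        return np.abs(np.cos(theta_f - theta_g))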
S304: combining the gradient adjustment weight W_g and the direction adjustment weight W_d to calculate the adjustment weight W.
In this embodiment, considering the gradient or the direction alone may not fully reflect the complexity of the local image characteristics, so combining the gradient and direction weights makes the filtering decision more comprehensive and reasonable.
Specifically, the importance of the local gradient and the consistency of the filtering direction are integrated through comprehensively adjusting the weights, personalized strength guidance is provided for the filtering operation of each pixel point, the comprehensive optimization of the filtering parameters is realized, the edges and the details of the image can be protected, the noise can be effectively restrained, and the comprehensive effect of the image processing is improved.
S305: and adjusting the linear impact filter based on the adjustment weight W, and performing differential filtering processing on the edge area and the smooth area by using the adjusted linear impact filter.
In this embodiment, the calculated adjustment weight is applied to the kernel of the linear impact filter, and the filter intensity and directivity are adjusted according to the weight of each pixel, so that the edge regions and the smooth regions are treated differently: in the edge regions the emphasis is on protecting details, while in the smooth regions the emphasis is on denoising.
The filtering result obtained finally effectively reduces noise points of a smooth area while maintaining edge definition, realizes high-quality super-resolution image output, and improves adaptability of the whole processing flow and visual quality of images.
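Exemplary code sketch of steps S304 and S305, reusing the two weight helpers sketched above (the product combination of the weights and the weight-blended Gaussian smoothing standing in for the adjusted filter are illustrative assumptions):
    import cv2
    import numpy as np

    def adaptive_filter(sr_bgr: np.ndarray, amplitude: np.ndarray,
                        direction: np.ndarray, theta_f: float = 0.0) -> np.ndarray:
        # S304: combine gradient and direction weights (product form assumed).
        w = gradient_adjust_weight(amplitude) * direction_adjust_weight(theta_f, direction)

        # S305: blend each pixel between the original image and a smoothed copy.
        # Small w (edges) keeps the sharp original pixel; large w (smooth areas)
        # leans on the smoothed pixel, i.e. stronger denoising.
        smoothed = cv2.GaussianBlur(sr_bgr, (5, 5), 1.0).astype(np.float32)
        w3 = np.clip(w, 0.0, 1.0)[..., None]
        out = (1.0 - w3) * sr_bgr.astype(np.float32) + w3 * smoothed
        return np.clip(out, 0, 255).astype(np.uint8)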
Specifically, referring to fig. 5, step S4 specifically includes the following steps:
s401: a local gradient consistency score LG between the super-resolution image and the corresponding original high-resolution image is calculated.
In this embodiment, the local gradient consistency score is intended to evaluate whether the super-resolution image faithfully restores the information of the original high-resolution image at edges and details. By comparing gradient information of two images at the same position or region, whether the super-resolution processing effectively retains structural information and edge details of the images can be judged.
Specifically, local gradients are first calculated for the super-resolution image and the original high-resolution image respectively, giving the gradient magnitude and direction of each small region. A score reflecting local structural consistency is then obtained using the concepts of local contrast and structural composition from the Structural Similarity (SSIM) framework: for each pixel or region, if the gradient directions and amplitudes are similar, the consistency is considered high and the score is correspondingly high. Through this step, how well the super-resolution image preserves detail can be accurately quantified, ensuring faithful restoration of edges and textures, which is important for the realism and recognizability of the image.
S402: a global color distribution similarity GC between the super-resolution image and the corresponding original high-resolution image is calculated.
In this embodiment, the global color distribution similarity evaluates consistency of the super-resolution image and the original image in color balance and overall hue. Even if the local detail is well restored, if the overall color deviation is large, an unnatural feeling can be given to people. Thus, this assessment is an important consideration for the overall appearance quality of the image.
Specifically, the degree of matching of the histograms of the two images is calculated by using a color histogram cross entropy method. By comparing global color statistics (e.g., distribution of luminance, chrominance, saturation) of the two images, a score is obtained that reflects the level of color consistency. The high score means that the super-resolution image is highly consistent with the original image in color, so that the super-resolution image is ensured to be close to the original image in detail and color perception, and the overall visual harmony and sense of reality of the image are improved.
S403: obtaining a quality evaluation result according to a quality score calculation formula, wherein the quality score calculation formula is as follows:
Wherein, λ is a weight factor, and Q is the composite quality score obtained as a weighted combination of LG and GC.
In this embodiment, the local gradient consistency score (LG) and the global color distribution similarity (GC) are comprehensively considered, and a comprehensive quality score is obtained by means of weighted average, so as to comprehensively evaluate the effect of the super-resolution processing. The method allows the weights of different aspects to be adjusted according to actual demands, so that the best comprehensive evaluation is achieved.
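Exemplary code sketch of steps S401 to S403 (illustrative assumptions: a normalized agreement of Sobel gradient fields stands in for LG, per-channel histogram correlation for GC, and Q = λ*LG + (1-λ)*GC is used as the weighted average):
    import cv2
    import numpy as np

    def _sobel_grads(img_bgr: np.ndarray):
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        return gx, gy

    def evaluate_quality(sr_bgr: np.ndarray, hr_bgr: np.ndarray,
                         lam: float = 0.5) -> float:
        # S401: local gradient consistency LG as normalized agreement of the
        # gradient fields of the two images.
        gx1, gy1 = _sobel_grads(sr_bgr)
        gx2, gy2 = _sobel_grads(hr_bgr)
        num = np.abs(gx1 * gx2 + gy1 * gy2).sum()
        den = np.sqrt((gx1**2 + gy1**2).sum() * (gx2**2 + gy2**2).sum()) + 1e-6
        lg = float(num / den)

        # S402: global color distribution similarity GC from per-channel
        # histogram correlation.
        scores = []
        for ch in range(3):
            h1 = cv2.calcHist([sr_bgr], [ch], None, [64], [0, 256])
            h2 = cv2.calcHist([hr_bgr], [ch], None, [64], [0, 256])
            scores.append(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))
        gc = float(np.mean(scores))

        # S403: composite quality score as a weighted average of LG and GC.
        return lam * lg + (1.0 - lam) * gc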
Specifically, referring to fig. 6, step S5 specifically includes the following steps:
s501: and setting a quality standard condition, wherein the quality standard condition is a comprehensive quality score threshold value.
In this embodiment, the setting of the integrated quality score threshold as the quality standard condition is to quantitatively define the standard of success or failure of the super-resolution image processing, so as to ensure that the output image is not only visually close to the original high-resolution image, but also can reach the predetermined high standard on the mathematical index.
Specifically, a threshold value of the composite quality score is set in combination with the historical data and the previous experimental results.
S502: judging the magnitude of the comprehensive quality score and the comprehensive quality score threshold of the super-resolution image,
If the comprehensive quality score of the super-resolution image is greater than or equal to the comprehensive quality score threshold, directly outputting the super-resolution image,
If the composite quality score of the super-resolution image is less than the composite quality score threshold, the next step S503 is executed.
In this embodiment, the super-resolution images obtained through the processing in steps S1 to S4 are calculated to have a comprehensive quality score by using a previously determined evaluation system, and then compared with a comprehensive quality score threshold. If the score meets or exceeds the threshold value, indicating that the treatment result is satisfactory; otherwise, the method has room for improvement, realizes the rapid judgment of the processing result, improves the efficiency of the processing flow, avoids unnecessary repeated processing, and simultaneously ensures that the output image is always maintained at a high quality level.
S503: identifying parameters to be adjusted according to the calculated local gradient consistency score LG and the global color distribution similarity GC, searching optimal parameters to be adjusted based on a gradient optimization algorithm, and re-executing the steps S1 to S4 according to the optimal parameters to be adjusted until the quality evaluation result of the super-resolution image reaches the quality standard condition.
In this embodiment, when the image quality is not as expected, it is necessary to locate the problem and optimize the processing parameters. The local gradient consistency score LG and the global color distribution similarity GC are two important feedback indicators that reflect the detail retention and the overall color fidelity of the image, respectively, from which key parameters affecting quality can be identified.
Specifically, the calculation results of LG and GC are first analyzed to determine whether lost edge details or color deviation caused the quality to fall below the standard. The processing parameters (such as filter weights and the choice of interpolation method) are then fine-tuned using the gradient optimization algorithm to obtain a parameter combination that maximizes the LG and GC indexes, and the image processing procedure is re-executed with these optimal parameters until the quality reaches the standard.
The local gradient consistency score measures the similarity of the super-resolution image to the original high-resolution image at the edges and detail portions. If the LG score is low, it indicates that the super-resolution image has a large difference from the original image in terms of edge and detail, possibly because of serious edge information loss in the interpolation algorithm or filtering process. At this time, it is important to consider adjusting parameters affecting edge preservation, such as weight distribution of the filter in the edge region, selecting a finer interpolation algorithm (e.g., adjusting from bilinear interpolation to bicubic interpolation), etc.
The global color distribution similarity evaluates the matching degree of the super-resolution image and the original image on the color distribution, and a low score value indicates that the problem of color deviation or over-high/low saturation can exist. If the GC score is low, it may mean that the color balance is destroyed during the filtering process, and parameters affecting the color fidelity, such as the action intensity of the filter in the smooth region, the color space conversion strategy, etc., need to be adjusted.
The method for adjusting parameters by adopting the gradient optimization algorithm specifically comprises the following steps:
First, a composite objective function is constructed based on LG and GC, the minimum or maximum of which corresponds to the optimal combination of image processing parameters. For example, the goal may be to maximize the weighted sum of LG and GC, with the weights adjusted according to the actual situation to balance the importance of detail preservation and color fidelity.
For the constructed objective function, the gradient under the current parameter setting is calculated using a gradient-based method; the gradient points in the direction in which the objective function value increases. By computing the partial derivatives of the objective function with respect to each parameter, it can be determined how the parameters should be changed in order to improve the LG and GC indexes.
The parameter values are then adjusted step by step along the gradient direction, with the magnitude of each adjustment controlled by a learning-rate parameter to avoid overshooting the optimal solution with too large a step. The process iterates until a set of parameters is found for which the objective function reaches its maximum or a stopping condition is met.
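Exemplary code sketch of this search loop (illustrative assumptions: a finite-difference estimate of the gradient, a fixed learning rate and stopping target, and a run_pipeline stub standing in for the re-execution of steps S1 to S4):
    import numpy as np

    def optimize_parameters(run_pipeline, params: np.ndarray,
                            lr: float = 0.05, eps: float = 1e-3,
                            target: float = 0.9, max_iters: int = 50) -> np.ndarray:
        """run_pipeline(params) re-executes S1 to S4 and returns the composite score Q."""
        params = params.astype(np.float64)
        for _ in range(max_iters):
            q = run_pipeline(params)
            if q >= target:                      # quality standard condition reached
                return params
            # Finite-difference estimate of dQ/dparams.
            grad = np.zeros_like(params)
            for i in range(len(params)):
                bumped = params.copy()
                bumped[i] += eps
                grad[i] = (run_pipeline(bumped) - q) / eps
            params = params + lr * grad          # gradient ascent on Q
        return params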
The super-resolution image processing method based on the interpolation algorithm provided by the invention has the following working principle: the invention carries out super-resolution processing on the low-resolution image through an interpolation algorithm, then carries out region division by utilizing gradient information of the image, adjusts weights of the linear impact filters for different regions to carry out filtering processing, finally evaluates the filtered super-resolution image through an evaluation index and outputs the image reaching a preset quality standard.
Example two
The invention provides a super-resolution image processing system based on an interpolation algorithm, which is applied to a super-resolution image processing method based on the interpolation algorithm, and is shown with reference to fig. 7, wherein the processing system comprises:
The super-resolution image acquisition module is used for acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image.
The region division module is used for dividing the image region based on gradient information of the super-resolution image and dividing the image region into an edge region and a smooth region, wherein the gradient information comprises gradient amplitude and gradient direction.
And the filtering module is used for correspondingly adjusting the weight of the linear impact filter according to the image region division result, and filtering the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image.
The quality evaluation module is used for evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result.
And the output module is used for judging the quality evaluation result based on a preset quality standard condition and outputting a super-resolution image reaching the quality standard condition.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (CD-ROM), or other optical disc Memory, magnetic disk Memory, tape Memory, or any other medium capable of being used for computer readable carrying or storing data.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.

Claims (8)

1. The super-resolution image processing method based on the interpolation algorithm is characterized by comprising the following steps of:
s1: acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image;
s2: dividing an image region based on gradient information of the super-resolution image, and dividing the image region into an edge region and a smooth region, wherein the gradient information comprises a gradient amplitude value and a gradient direction;
S3: correspondingly adjusting the weight of the linear impact filter according to the image region division result, and carrying out filtering treatment on the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image;
S4: evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result;
S5: and judging the quality evaluation result based on a preset quality standard condition, and outputting a super-resolution image reaching the quality standard condition.
2. The method for processing a super-resolution image according to claim 1, wherein the step S1 specifically comprises the steps of:
S101: acquiring a low-resolution image to be processed and an original high-resolution image corresponding to the low-resolution image to be processed, and preprocessing the low-resolution image to be processed, wherein the preprocessing comprises contrast enhancement and color correction;
S102: and selectively carrying out interpolation calculation on the preprocessed low-resolution image by using an interpolation algorithm, and combining pixel values obtained by the interpolation calculation into a super-resolution image, wherein the interpolation algorithm comprises nearest neighbor interpolation, bilinear interpolation and bicubic interpolation.
3. The method according to claim 2, wherein in step S102, an interpolation algorithm is selectively used, in particular, one or more interpolation algorithms selected from nearest neighbor interpolation, bilinear interpolation, bicubic interpolation are selected for interpolation calculation according to the image characteristics of the low resolution image to be processed.
4. The method for processing a super-resolution image based on an interpolation algorithm according to claim 3, wherein the step S2 specifically comprises the following steps:
s201: performing gradient calculation on the super-resolution image by using an edge detection operator to obtain gradient amplitude and direction of each pixel point;
S202: comparing the gradient amplitude of each pixel point with a preset threshold value, and primarily dividing an image area into an edge area and a smooth area;
S203: and refining the preliminarily divided edge region and the smooth region to obtain a final edge region and a smooth region, wherein the refining comprises removing misclassified pixel points.
5. The method for processing a super-resolution image based on an interpolation algorithm according to claim 4, wherein the step S3 specifically comprises the steps of:
S301: based on the divided edge area and the smooth area, refining the local gradient amplitude and the local gradient direction in each area;
s302: calculating the gradient adjustment weight of the linear impact filter according to the local gradient amplitude values in each region, wherein the calculation formula of the gradient adjustment weight is as follows:
Wherein, α is a scale factor, k is an exponential factor, G is the local gradient amplitude, and W_g is the gradient adjustment weight;
S303: calculating the direction adjustment weight of the linear impact filter according to the local gradient direction in each region, wherein the calculation formula of the direction adjustment weight is as follows:
Wherein, θ_f is the direction of the linear impulse filter, θ_g is the local gradient direction, and W_d is the direction adjustment weight;
S304: combining the gradient adjustment weight W_g and the direction adjustment weight W_d to calculate the adjustment weight W;
S305: adjusting the linear impact filter based on the adjustment weight W, and performing differentiated filtering processing on the edge area and the smooth area by using the adjusted linear impact filter.
6. The method for processing a super-resolution image based on an interpolation algorithm according to claim 5, wherein the step S4 specifically comprises the steps of:
s401: calculating a local gradient consistency score LG between the super-resolution image and the corresponding original high-resolution image;
s402: calculating global color distribution similarity GC between the super-resolution image and the corresponding original high-resolution image;
s403: obtaining a quality evaluation result according to a quality score calculation formula, wherein the quality score calculation formula is as follows:
Wherein, λ is a weight factor, and Q is the composite quality score obtained as a weighted combination of LG and GC.
7. The method for processing a super-resolution image according to claim 6, wherein the step S5 comprises the steps of:
s501: setting a quality standard condition, wherein the quality standard condition is a comprehensive quality score threshold;
S502: judging the magnitude of the comprehensive quality score and the comprehensive quality score threshold of the super-resolution image,
If the comprehensive quality score of the super-resolution image is greater than or equal to the comprehensive quality score threshold, directly outputting the super-resolution image,
If the integrated quality score of the super-resolution image is smaller than the integrated quality score threshold, executing the next step S503;
S503: identifying parameters to be adjusted according to the calculated local gradient consistency score LG and the global color distribution similarity GC, searching optimal parameters to be adjusted based on a gradient optimization algorithm, and re-executing the steps S1 to S4 according to the optimal parameters to be adjusted until the quality evaluation result of the super-resolution image reaches the quality standard condition.
8. A super-resolution image processing system based on an interpolation algorithm, applied to the super-resolution image processing method based on an interpolation algorithm as claimed in any one of claims 1 to 7, characterized in that the processing system comprises:
the super-resolution image acquisition module is used for acquiring a low-resolution image to be processed and a corresponding original high-resolution image thereof, and performing super-resolution processing on the low-resolution image to be processed through an interpolation algorithm to obtain a super-resolution image;
The region dividing module is used for dividing the image region based on gradient information of the super-resolution image and dividing the image region into an edge region and a smooth region, wherein the gradient information comprises gradient amplitude and gradient direction;
the filtering module is used for correspondingly adjusting the weight of the linear impact filter according to the image region division result, and filtering the corresponding image region by utilizing the adjusted linear impact filter to obtain a filtered super-resolution image;
The quality evaluation module is used for evaluating the quality difference between the filtered super-resolution image and the corresponding original high-resolution image based on the evaluation index to obtain a quality evaluation result;
and the output module is used for judging the quality evaluation result based on a preset quality standard condition and outputting a super-resolution image reaching the quality standard condition.
CN202410612941.4A 2024-05-17 2024-05-17 Super-resolution image processing method and processing system based on interpolation algorithm Active CN118195902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410612941.4A CN118195902B (en) 2024-05-17 2024-05-17 Super-resolution image processing method and processing system based on interpolation algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410612941.4A CN118195902B (en) 2024-05-17 2024-05-17 Super-resolution image processing method and processing system based on interpolation algorithm

Publications (2)

Publication Number Publication Date
CN118195902A true CN118195902A (en) 2024-06-14
CN118195902B CN118195902B (en) 2024-07-16

Family

ID=91400226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410612941.4A Active CN118195902B (en) 2024-05-17 2024-05-17 Super-resolution image processing method and processing system based on interpolation algorithm

Country Status (1)

Country Link
CN (1) CN118195902B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651938A (en) * 2017-01-17 2017-05-10 湖南优象科技有限公司 Depth map enhancement method blending high-resolution color image
CN112132749A (en) * 2020-09-24 2020-12-25 合肥学院 Image processing method and device applying parameterized Thiele continuous fractional interpolation
CN114708165A (en) * 2022-04-11 2022-07-05 重庆理工大学 Edge perception texture filtering method combining super pixels
CN116091322A (en) * 2023-04-12 2023-05-09 山东科技大学 Super-resolution image reconstruction method and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN Hua, WEN Wei, PENG Silong: "Multi-kernel repeated back-projection method for image super-resolution", Journal of Computer-Aided Design & Computer Graphics, no. 07, 20 July 2005 (2005-07-20), pages 130-136 *
HUANG Jiqing; WANG Lihui; QIN Jin; CHENG Xinyu; ZHANG Jian; LI Zhi: "Improved super-resolution reconstruction algorithm based on multiple regularizations", Computer Engineering and Applications, no. 15, 1 August 2018 (2018-08-01), pages 27-33 *

Also Published As

Publication number Publication date
CN118195902B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
EP2176830B1 (en) Face and skin sensitive image enhancement
JP5362878B2 (en) Image processing apparatus and image processing method
JP4460839B2 (en) Digital image sharpening device
JP2003198850A (en) Digital color image processing method
CN107123124B (en) Retina image analysis method and device and computing equipment
CN109214996B (en) Image processing method and device
WO2022016326A1 (en) Image processing method, electronic device, and computer-readable medium
CN109035167B (en) Method, device, equipment and medium for processing multiple faces in image
CN112700363B (en) Self-adaptive visual watermark embedding method and device based on region selection
CN114219732A (en) Image defogging method and system based on sky region segmentation and transmissivity refinement
Zhu et al. Fast single image dehazing through edge-guided interpolated filter
CN114202491B (en) Method and system for enhancing optical image
Kansal et al. Fusion-based image de-fogging using dual tree complex wavelet transform
CN118195902B (en) Super-resolution image processing method and processing system based on interpolation algorithm
CN114913099B (en) Method and system for processing video file
JP3636936B2 (en) Grayscale image binarization method and recording medium recording grayscale image binarization program
Jeong et al. Fast fog detection for de-fogging of road driving images
CN110633705A (en) Low-illumination imaging license plate recognition method and device
US9225876B2 (en) Method and apparatus for using an enlargement operation to reduce visually detected defects in an image
Kumari et al. Real time image and video deweathering: The future prospects and possibilities
Goel The implementation of image enhancement techniques using Matlab
JP4773240B2 (en) Similarity discrimination apparatus, method, and program
CN112085683A (en) Depth map reliability detection method in significance detection
JP6117982B1 (en) Image processing apparatus, image processing method, and image processing program
JP4230960B2 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant