CN112381812A - Simple and efficient image quality evaluation method and system
- Publication number: CN112381812A
- Application number: CN202011314948.6A
- Authority: CN (China)
- Prior art keywords: feature, yuv, image, sub
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002—Image analysis; inspection of images, e.g. flaw detection
- G06F18/22—Pattern recognition; matching criteria, e.g. proximity measures
- G06N3/045—Neural networks; combinations of networks
- G06T7/90—Image analysis; determination of colour characteristics
- G06V10/56—Extraction of image or video features relating to colour
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30168—Image quality inspection
Abstract
A simple and efficient image quality assessment method and system are provided. The method comprises the following steps: step S1: converting the RGB color spaces of the reference image and the image to be evaluated into YUV color spaces, denoted YUV_A(x,y) and YUV_B(x,y) respectively; step S2: extracting a first feature and a second feature in the luminance space Y; step S3: extracting a third feature and a fourth feature in the chrominance space U, and then extracting a fifth feature and a sixth feature in the chrominance space V; step S4: performing difference calculations on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index; step S5: performing a difference calculation on the first feature and the second feature obtained in step S2 and, according to the similarity index obtained in step S4, obtaining the final similarity between the reference image and the image to be evaluated. The invention has low algorithmic complexity, consumes few resources and has high practical value.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a simple and efficient image quality evaluation method and system.
Background
In recent years, with the rapid development of the information age, the volume of image and video information people receive every day has grown explosively. Digital images play many roles in production and everyday life, but acquisition, compression and transmission inevitably introduce distortion, for reasons such as imperfect processing methods, losses in the transmission medium and noise pollution, thereby degrading image quality. Therefore, how to evaluate image quality accurately and quickly has become an important problem in many fields, including image acquisition, transmission, compression, restoration and enhancement.
Quality evaluation methods can be divided into three types according to the amount of reference information available: full-reference, reduced-reference and no-reference quality evaluation. A full-reference image quality evaluation algorithm uses the original image as the reference for the distorted image; a reduced-reference algorithm uses only partial information from the reference image; and a no-reference algorithm uses no information from a reference image as prior data.
The most common approach at present is full-reference image quality assessment. Traditional full-reference objective metrics include the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR); they are widely used because they are simple to compute and have clear physical meaning, but they analyse images only in a statistical sense and do not consider the correlation between pixels. Later scholars proposed the SSIM (structural similarity) algorithm, which evaluates image quality in terms of luminance, contrast and structural similarity. Compared with earlier evaluation methods it was a qualitative breakthrough and a landmark algorithm in image quality evaluation. Many improvements have since been published on its basis, with good results. However, most of these algorithms involve complicated calculation processes, which are time-consuming and hinder practical application.
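For concreteness, the two traditional full-reference metrics mentioned above can be computed in a few lines. This is a generic sketch of MSE and PSNR, not part of the patent's own method:

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference and a distorted image."""
    r = np.asarray(ref, dtype=np.float64)
    d = np.asarray(dist, dtype=np.float64)
    return float(np.mean((r - d) ** 2))

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, dist)
    return float("inf") if err == 0.0 else 10.0 * np.log10(peak * peak / err)
```

Both operate pixel-wise, which is exactly the limitation the text notes: they ignore inter-pixel correlation.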
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The invention aims to provide a simple and efficient image quality evaluation method and system that evaluate image quality accurately, have low algorithmic complexity and low resource consumption, meet real-time processing requirements and offer high practical value.
The invention provides a simple and efficient image quality evaluation method comprising the following steps: step S1: converting the RGB color spaces of the reference image and the image to be evaluated into YUV color spaces, denoted YUV_A(x,y) and YUV_B(x,y) respectively; step S2: extracting a first feature and a second feature of the YUV_A(x,y) and the YUV_B(x,y) in the luminance space Y, respectively; step S3: extracting a third feature and a fourth feature of the YUV_A(x,y) and the YUV_B(x,y) in the chrominance space U, respectively, and then extracting a fifth feature and a sixth feature of the YUV_A(x,y) and the YUV_B(x,y) in the chrominance space V, respectively; step S4: performing difference calculations on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index; step S5: performing a difference calculation on the first feature and the second feature obtained in step S2 and, according to the similarity index obtained in step S4, obtaining the final similarity between the reference image and the image to be evaluated.
Further, the YUV_A(x,y) is denoted as Y_A(x,y) in the luminance space Y; the step S2 comprises: step S21: extracting the LBP feature of the Y_A(x,y), wherein the specific formula is as follows: LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise, wherein LBP_Y_A(x,y) represents the LBP feature of the Y_A(x,y), (x_c, y_c) represents a centre pixel point of the YUV_A(x,y), g_c represents the luminance value of the centre pixel point, and g_p represents the luminance values of the adjacent pixel points around it; step S22: extracting gradient information of the LBP_Y_A(x,y) obtained in step S21 by using a Sobel operator, the gradient information comprising horizontal-direction and vertical-direction gradient information, wherein the specific formula is as follows: G_1 = h_1 ⊗ LBP_Y_A(x,y), G_2 = h_2 ⊗ LBP_Y_A(x,y), G_Y_A(x,y) = sqrt(G_1^2 + G_2^2), wherein ⊗ represents a convolution operation, G_1 and G_2 represent the horizontal-direction and vertical-direction gradient information respectively, and G_Y_A(x,y) represents the gradient information of the LBP_Y_A(x,y); step S23: performing blocking processing on the G_Y_A(x,y) obtained in step S22, specifically dividing it into first sub-block images of a preset size, denoted {G_Y_A_n(x,y) | n = 1, …, N}, wherein N denotes the number of sub-blocks after the G_Y_A(x,y) is partitioned; step S24: calculating the mean and the variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally obtained being: {F_Y_A_n | n = 1, …, N}.
Further, the YUV_B(x,y) is denoted as Y_B(x,y) in the luminance space Y; the second feature is calculated in the same manner as in steps S21, S22, S23 and S24, and the second feature is {F_Y_B_n | n = 1, …, N}.
Further, the YUV_A(x,y) is denoted as U_A(x,y) in the chrominance space U; the step S3 comprises: step S31: partitioning the U_A(x,y) into second sub-block images of a preset size, denoted {U_A_n(x,y) | n = 1, …, N}; step S32: quantising the chrominance value range into sixteen equal parts and counting the colour histogram of each second sub-block image, so that each second sub-block image yields a sixteen-dimensional vector, the third feature finally obtained being: {F_U_A_n | n = 1, …, N}.
Further, the YUV_B(x,y) is denoted as U_B(x,y) in the chrominance space U, the YUV_A(x,y) is denoted as V_A(x,y) in the chrominance space V, and the YUV_B(x,y) is denoted as V_B(x,y) in the chrominance space V; the fourth, fifth and sixth features are calculated in the same manner as in steps S31 and S32, and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N} respectively.
Further, the step S4 specifically comprises: computing the similarity index {β_n | n = 1, …, N} from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes a dot-product operation.
Further, the step S5 comprises: step S51: calculating the difference between the first feature and the second feature as t_n = F_Y_A_n − F_Y_B_n, wherein t_n denotes the difference; step S52: calculating the similarity λ_n between each sub-block image of the reference image and of the image to be evaluated from t_n and the similarity index β_n; step S53: computing from the λ_n the final similarity λ between the reference image and the image to be evaluated.
The invention also provides a simple and efficient image quality evaluation system comprising a transformation module, an extraction module and a calculation module. The transformation module is used for transforming the RGB color spaces of the reference image and the image to be evaluated into YUV color spaces, namely YUV_A(x,y) and YUV_B(x,y). The extraction module is used for extracting a first feature and a second feature of YUV_A(x,y) and YUV_B(x,y) in the luminance space Y, respectively, a third feature and a fourth feature of YUV_A(x,y) and YUV_B(x,y) in the chrominance space U, respectively, and then a fifth feature and a sixth feature of YUV_A(x,y) and YUV_B(x,y) in the chrominance space V, respectively. The calculation module is used for performing difference calculations on the third, fourth, fifth and sixth features to obtain a similarity index, performing a difference calculation on the first feature and the second feature, and obtaining the final similarity between the reference image and the image to be evaluated according to the similarity index.
Further, the YUV_A(x,y) is denoted as Y_A(x,y) in the luminance space Y, and the first feature is obtained as follows: extract the LBP feature of the Y_A(x,y) by LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise, wherein (x_c, y_c) denotes a centre pixel point of the YUV_A(x,y), g_c denotes the luminance value of the centre pixel point, and g_p denotes the luminance values of the adjacent pixel points around it; extract the gradient information of the LBP_Y_A(x,y) with a Sobel operator, comprising horizontal-direction and vertical-direction gradient information, as G_1 = h_1 ⊗ LBP_Y_A(x,y), G_2 = h_2 ⊗ LBP_Y_A(x,y), G_Y_A(x,y) = sqrt(G_1^2 + G_2^2), wherein ⊗ denotes a convolution operation, G_1 and G_2 denote the horizontal-direction and vertical-direction gradient information respectively, and G_Y_A(x,y) denotes the gradient information of the LBP_Y_A(x,y); partition the G_Y_A(x,y) into first sub-block images of a preset size, denoted {G_Y_A_n(x,y) | n = 1, …, N}, wherein N denotes the number of sub-blocks after the G_Y_A(x,y) is partitioned; calculate the mean and the variance of each first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally obtained being: {F_Y_A_n | n = 1, …, N}. The YUV_B(x,y) is denoted as Y_B(x,y) in the luminance space Y, and the second feature is obtained in the same manner; the second feature is {F_Y_B_n | n = 1, …, N}. The YUV_A(x,y) is denoted as U_A(x,y) in the chrominance space U, and the third feature is obtained as follows: partition the U_A(x,y) into second sub-block images of a preset size, denoted {U_A_n(x,y) | n = 1, …, N}; quantise the chrominance value range into sixteen equal parts and count the colour histogram of each second sub-block image, each second sub-block image yielding a sixteen-dimensional vector; the third feature finally obtained is: {F_U_A_n | n = 1, …, N}. The YUV_B(x,y) is denoted as U_B(x,y) in the chrominance space U, the YUV_A(x,y) is denoted as V_A(x,y) in the chrominance space V, and the YUV_B(x,y) is denoted as V_B(x,y) in the chrominance space V; the fourth, fifth and sixth features are obtained in the same manner and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N} respectively. The similarity index {β_n | n = 1, …, N} is computed from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes a dot-product operation. The final similarity between the reference image and the image to be evaluated is obtained as follows: calculate the difference t_n = F_Y_A_n − F_Y_B_n between the first feature and the second feature; calculate the similarity λ_n between each sub-block image of the reference image and of the image to be evaluated from t_n and the similarity index β_n; and compute from the λ_n the final similarity λ between the reference image and the image to be evaluated.
The simple and efficient image quality evaluation method and system provided by the invention accurately evaluate image quality via the final similarity between the reference image and the image to be evaluated, have low algorithmic complexity and low resource consumption, meet real-time processing requirements and offer high practical value.
Drawings
Fig. 1 is a first flowchart of a simple and efficient image quality evaluation method according to an embodiment of the present invention.
Fig. 2 is a second flowchart of the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 3 is a schematic diagram of a specific flow of extracting the first feature in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 4 is a schematic diagram of a specific flow of extracting a third feature in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 5 is a schematic diagram of a detailed flow of the final similarity between the reference image and the image to be evaluated in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 6 is a schematic structural diagram of a simple and efficient image quality evaluation system according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
As shown in fig. 1 to 5, in the present embodiment, a simple and efficient image quality evaluation method is provided, which includes the following steps:
step S1: the RGB (the three primary optical colors: R for red, G for green, B for blue) color spaces of the reference image and the image to be evaluated are transformed into the YUV (luminance and color-difference signal) color space, giving YUV_A(x,y) and YUV_B(x,y) respectively.
In this embodiment, the reference image is RGB_A(x,y) and the image to be evaluated is RGB_B(x,y); both are colour RGB images. Specifically, RGB_A(x,y) and RGB_B(x,y) are converted into YUV_A(x,y) and YUV_B(x,y). Because the luminance space Y and the two chrominance spaces U and V are completely separate in the YUV colour space, this representation is more consistent with the visual characteristics of the human eye.
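A minimal sketch of the step-S1 conversion, assuming the common BT.601 full-range RGB-to-YUV matrix (the patent does not state which conversion variant it uses, so the matrix is an assumption):

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix (an assumption; the patent does not
# specify the conversion coefficients).
_M = np.array([[ 0.299,  0.587,  0.114],
               [-0.147, -0.289,  0.436],
               [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image to stacked Y, U, V planes."""
    return np.asarray(rgb, dtype=np.float64) @ _M.T
```

Channel 0 of the result is the luminance plane Y; channels 1 and 2 are the chrominance planes U and V used in the later steps.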
Step S2: and respectively extracting a first feature and a second feature of YUV _ A (x, Y) and YUV _ B (x, Y) on a luminance space Y.
In this embodiment, YUV_A(x,y) is denoted as Y_A(x,y) in the luminance space Y, and YUV_B(x,y) is denoted as Y_B(x,y) in the luminance space Y.
First, the first feature of YUV_A(x,y) in the luminance space Y is extracted, as shown in Fig. 3. In a specific application example, the detailed flow of step S2 comprises:
step S21: the LBP (local binary pattern) feature of Y_A(x,y) is extracted and denoted LBP_Y_A(x,y). The specific formula is as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p,  s(x) = 1 if x ≥ 0 and 0 otherwise,

wherein LBP_Y_A(x,y) denotes the LBP feature of Y_A(x,y), (x_c, y_c) denotes a centre pixel point of YUV_A(x,y), g_c denotes the luminance value of the centre pixel point, and g_p denotes the luminance values of the P adjacent pixel points around it.
Step S22: gradient information of LBP _ Y _ a (x, Y) obtained in the above step S21 is extracted by a sobel (sobel) operator. Gradient information of LBP _ Y _ a (x, Y) includes horizontal direction gradient information and vertical direction gradient information. The specific formula is as follows:
whereinRepresenting a convolution operation, G1And G2Respectively, horizontal direction gradient information and vertical direction gradient information, and G _ Y _ a (x, Y) represents gradient information of LBP _ Y _ a (x, Y).
Step S23: the G _ Y _ a (x, Y) obtained in the step S22 is subjected to a block division process, and is specifically divided into a first sub-block image with a preset size, and the first sub-block image is marked as { G _ Y _ an(x, Y) | N ═ 1, …, N }, where N denotes the number of all subblocks after G _ Y _ a (x, Y) partitioning.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the first sub-block images may instead be 30 × 30 pixels, 50 × 50 pixels, or another size.
Step S24: calculating the mean value and the variance of each first sub-block image, and storing the mean value and the variance as features, so that each first sub-block image obtains a two-dimensional vector, and finally obtaining first features as follows: { F _ Y _ An|n=1,…,N}。
Next, the second feature of YUV_B(x,y) in the luminance space Y is extracted. It is calculated in the same manner as in steps S21, S22, S23 and S24 and is not detailed again here. The second feature is {F_Y_B_n | n = 1, …, N}.
Step S3: extracting a third feature and a fourth feature of YUV _ a (x, y) and YUV _ B (x, y) in the chrominance space U, respectively, and then extracting a fifth feature and a sixth feature in YUV _ a (x, y) and YUV _ B (x, y) chrominance space V, respectively.
In this embodiment, YUV_A(x,y) is denoted as U_A(x,y) in the chrominance space U, YUV_B(x,y) as U_B(x,y) in the chrominance space U, YUV_A(x,y) as V_A(x,y) in the chrominance space V, and YUV_B(x,y) as V_B(x,y) in the chrominance space V.
First, the third feature of YUV_A(x,y) in the chrominance space U is extracted, as shown in Fig. 4. In a specific application example, the detailed flow of step S3 comprises:
step S31: the U_A(x,y) is partitioned into second sub-block images of a preset size, denoted {U_A_n(x,y) | n = 1, …, N}.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the second sub-block images may instead be 30 × 30 pixels, 50 × 50 pixels, or another size.
Step S32: dividing the U _ A (x, y) into sixteen equal parts, respectively counting the color histogram of each second sub-block image, obtaining a sixteen-dimensional vector from each second sub-block image, and finally obtaining a third characteristic: { F _ U _ An|n=1,…,N}。
Next, the fourth feature of YUV_B(x,y) in the chrominance space U and the fifth and sixth features of YUV_A(x,y) and YUV_B(x,y) in the chrominance space V are extracted. They are calculated in the same manner as in steps S31 and S32 and are not elaborated here. The fourth, fifth and sixth features are, respectively: {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N}.
step S4: difference calculations are performed on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index.
Specifically, a similarity index is calculated for each sub-block image into which U_A(x,y), U_B(x,y), V_A(x,y) and V_B(x,y) are divided, based on the features extracted from the chrominance spaces. The similarity index is {β_n | n = 1, …, N}; it is computed from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes a dot-product operation.
Step S5: and performing difference calculation according to the first feature and the second feature obtained in the step S2, and obtaining the final similarity between the reference image and the image to be evaluated according to the similarity index obtained in the step S4.
As shown in figs. 1 and 2, in this embodiment image quality is evaluated through the final similarity between the reference image and the image to be evaluated; the method is simple, fast and well suited to practical application. As shown in fig. 5, in a specific application example, the detailed flow of step S5 comprises:
step S51: the difference between the first feature and the second feature is calculated as t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference.
Step S52: calculating the similarity of each sub-block image of the reference image and the image to be evaluated, wherein the specific formula is as follows:
wherein λ isnRepresenting the degree of similarity, β, of the reference image and each of the sub-block images of the image to be evaluatednIndicating a similarity index.
Step S53: and (3) counting the final similarity of the reference image and the image to be evaluated, wherein the specific formula is as follows:wherein λ represents the final similarity between the reference image and the image to be evaluated.
By computing the similarity between the reference image and the image to be evaluated, the method accurately assesses image quality, has low algorithmic complexity and low resource consumption, meets real-time processing requirements, and has high practical value.
As shown in fig. 6, the present invention further provides a simple and efficient image quality evaluation system, which includes a transformation module 60, an extraction module 61, and a calculation module 62.
The transformation module 60 is configured to transform the RGB color spaces of the reference image and the image to be evaluated into YUV color spaces, namely YUV_A(x,y) and YUV_B(x,y) respectively.
The extracting module 61 is configured to extract a first feature and a second feature of YUV_A(x,y) and YUV_B(x,y) in the luminance space Y, respectively, a third feature and a fourth feature of YUV_A(x,y) and YUV_B(x,y) in the chrominance space U, respectively, and then a fifth feature and a sixth feature of YUV_A(x,y) and YUV_B(x,y) in the chrominance space V, respectively.
YUV_A(x,y) is denoted as Y_A(x,y) in the luminance space Y, and the first feature is obtained as follows: extract the LBP feature of Y_A(x,y) by LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise, wherein LBP_Y_A(x,y) denotes the LBP feature of Y_A(x,y), (x_c, y_c) denotes a centre pixel point of YUV_A(x,y), g_c denotes the luminance value of the centre pixel point, and g_p denotes the luminance values of the adjacent pixel points around it; extract the gradient information of LBP_Y_A(x,y) with a Sobel operator, comprising horizontal-direction and vertical-direction gradient information, as G_1 = h_1 ⊗ LBP_Y_A(x,y), G_2 = h_2 ⊗ LBP_Y_A(x,y), G_Y_A(x,y) = sqrt(G_1^2 + G_2^2), wherein ⊗ denotes a convolution operation, G_1 and G_2 denote the horizontal-direction and vertical-direction gradient information respectively, and G_Y_A(x,y) denotes the gradient information of LBP_Y_A(x,y); partition G_Y_A(x,y) into first sub-block images of a preset size, denoted {G_Y_A_n(x,y) | n = 1, …, N}, where N denotes the number of sub-blocks after G_Y_A(x,y) is partitioned; calculate the mean and the variance of each first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally obtained being: {F_Y_A_n | n = 1, …, N}.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the preset size may be 30 × 30 or 50 × 50 pixels, and other values may also be used.
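The first-feature pipeline above (LBP, then Sobel gradient magnitude, then per-block mean and variance) can be sketched as follows. The standard 8-neighbour LBP and standard Sobel kernels are assumed, since the patent's formula images are not reproduced here, and the valid-mode border handling is a free choice:

```python
import numpy as np

def lbp8(img):
    """3x3 local binary pattern: each of the 8 neighbours of a pixel
    contributes 2^p when its value is >= the centre value."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neighbour >= centre) * (2 ** p)
    return out

def filt3(img, k):
    """Valid-mode 3x3 filtering (kernel flipping is omitted because
    only the gradient magnitude is used downstream)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:h - 2 + dy, dx:w - 2 + dx]
    return out

def first_feature(y_plane, block=20):
    """LBP -> Sobel gradient magnitude -> per-block (mean, variance).
    The 20x20 block size follows the embodiment."""
    lbp = lbp8(y_plane)
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal Sobel
    g1 = filt3(lbp, sx)    # horizontal-direction gradient G1
    g2 = filt3(lbp, sx.T)  # vertical-direction gradient G2
    g = np.sqrt(g1 ** 2 + g2 ** 2)
    feats = [(g[i:i + block, j:j + block].mean(),
              g[i:i + block, j:j + block].var())
             for i in range(0, g.shape[0] - block + 1, block)
             for j in range(0, g.shape[1] - block + 1, block)]
    return np.array(feats)
```

On a constant luminance plane the LBP map is itself constant, so every gradient is zero and each block contributes the vector (0, 0), which is a simple check of the pipeline.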
YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y, and the second feature is obtained as follows:
consistent with the first feature; the second feature is {F_Y_B_n | n = 1, …, N}.
YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U, and the third feature is obtained as follows:
partitioning U_A(x, y) into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; dividing the chrominance value range of U_A(x, y) into sixteen equal intervals and counting the colour histogram of each second sub-block image over these intervals, so that each second sub-block image yields a sixteen-dimensional vector, the third feature finally being {F_U_A_n | n = 1, …, N}.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the preset size may be 30 × 30 or 50 × 50 pixels, and other values may also be used.
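The third-feature extraction (per-block sixteen-bin chrominance histograms) admits a short sketch. Interpreting "dividing into sixteen equal parts" as sixteen equal-width intervals over an 8-bit value range is an assumption:

```python
import numpy as np

def third_feature(u_plane, block=20, bins=16):
    """Split the chrominance plane into block x block sub-images and
    count a sixteen-bin histogram of the chrominance values in each.

    An 8-bit value range [0, 256) split into sixteen equal intervals
    is assumed; the patent only says 'sixteen equal parts'.
    """
    edges = np.linspace(0, 256, bins + 1)  # sixteen equal intervals
    feats = []
    for i in range(0, u_plane.shape[0] - block + 1, block):
        for j in range(0, u_plane.shape[1] - block + 1, block):
            blk = u_plane[i:i + block, j:j + block]
            hist, _ = np.histogram(blk, bins=edges)
            feats.append(hist)  # one 16-dimensional vector per block
    return np.array(feats)
```

Each 20 × 20 block contributes exactly 400 counts spread over its sixteen bins, so the histogram rows always sum to the block area.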
YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V; the fourth feature, the fifth feature and the sixth feature are obtained in the following manner:
all are consistent with the third feature; the fourth, fifth and sixth features are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N}, respectively.
The calculating module 62 is configured to perform difference calculation according to the third feature, the fourth feature, the fifth feature and the sixth feature to obtain a similarity index, to perform difference calculation according to the first feature and the second feature, and to obtain the final similarity between the reference image and the image to be evaluated according to the similarity index.
In this embodiment, the similarity index is {β_n | n = 1, …, N}, computed from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes the dot-product operation.
The final similarity between the reference image and the image to be evaluated is obtained as follows: the difference between the first feature and the second feature is calculated as t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; the similarity λ_n of the reference image and each sub-block image of the image to be evaluated is then calculated from t_n and the similarity index β_n; finally, the per-block similarities are aggregated into the final similarity λ between the reference image and the image to be evaluated.
In this document, the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", "vertical", "horizontal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for the purpose of clarity and convenience of description of the technical solutions, and thus, should not be construed as limiting the present invention.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (9)
1. A simple and efficient image quality assessment method, characterized by comprising the following steps: step S1: converting the reference image and the image to be evaluated from the RGB color space into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y) respectively; step S2: extracting a first feature and a second feature of the YUV_A(x, y) and the YUV_B(x, y) in the luminance space Y respectively; step S3: extracting a third feature and a fourth feature of the YUV_A(x, y) and the YUV_B(x, y) in the chrominance space U respectively, and then extracting a fifth feature and a sixth feature of the YUV_A(x, y) and the YUV_B(x, y) in the chrominance space V respectively; step S4: performing difference calculation on the third feature, the fourth feature, the fifth feature and the sixth feature obtained in the step S3 to obtain a similarity index; step S5: performing difference calculation according to the first feature and the second feature obtained in the step S2, and obtaining the final similarity between the reference image and the image to be evaluated according to the similarity index obtained in the step S4.
2. The simple and efficient image quality assessment method according to claim 1, wherein the YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y; the step S2 comprises: step S21: extracting the LBP feature of the Y_A(x, y), the specific formula (standard 8-neighbour LBP form; the original formula image is not reproduced) being as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of the Y_A(x, y), (x_c, y_c) denotes the center pixel point of the YUV_A(x, y), g_c denotes the luminance value of the center pixel point, and g_p denotes the luminance values of the neighbouring pixel points around the YUV_A(x, y); step S22: extracting the gradient information of the LBP_Y_A(x, y) obtained in step S21 with the Sobel operator, the gradient information comprising horizontal-direction gradient information and vertical-direction gradient information, the specific formula (standard Sobel kernels assumed) being as follows:

G1 = S_x ∗ LBP_Y_A(x, y), G2 = S_y ∗ LBP_Y_A(x, y), G_Y_A(x, y) = sqrt(G1^2 + G2^2), where S_x = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] and S_y = S_x^T,

where ∗ denotes the convolution operation, G1 and G2 denote the horizontal-direction gradient information and the vertical-direction gradient information respectively, and G_Y_A(x, y) denotes the gradient information of the LBP_Y_A(x, y); step S23: performing blocking processing on the G_Y_A(x, y) obtained in step S22, specifically dividing it into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the number of sub-blocks after the G_Y_A(x, y) is partitioned; step S24: calculating the mean and variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally being {F_Y_A_n | n = 1, …, N}.
3. The simple and efficient image quality assessment method according to claim 2, wherein the YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y; the second feature is calculated in a manner consistent with the methods of step S21, step S22, step S23 and step S24, and the second feature is {F_Y_B_n | n = 1, …, N}.
4. The simple and efficient image quality assessment method according to claim 3, wherein the YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U; the step S3 comprises: step S31: partitioning the U_A(x, y) into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; step S32: dividing the chrominance value range of the U_A(x, y) into sixteen equal intervals and counting the colour histogram of each second sub-block image over these intervals, so that each second sub-block image yields a sixteen-dimensional vector, the third feature finally being {F_U_A_n | n = 1, …, N}.
5. The simple and efficient image quality assessment method according to claim 4, wherein the YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, the YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and the YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V; the fourth, fifth and sixth features are calculated in a manner consistent with the methods of step S31 and step S32, and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N}, respectively.
6. The simple and efficient image quality assessment method according to claim 5, wherein the step S4 specifically comprises: computing the similarity index {β_n | n = 1, …, N} from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes the dot-product operation.
7. The simple and efficient image quality evaluation method according to claim 6, wherein the step S5 comprises: step S51: calculating the difference between the first feature and the second feature as t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; step S52: calculating the similarity λ_n of the reference image and each sub-block image of the image to be evaluated from t_n and the similarity index β_n; step S53: counting the final similarity λ between the reference image and the image to be evaluated by aggregating the per-block similarities λ_n.
8. A simple and efficient image quality evaluation system, characterized by comprising a transformation module, an extraction module and a calculation module, wherein the transformation module is configured to transform the reference image and the image to be evaluated from the RGB color space into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y) respectively; the extraction module is configured to extract a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y) in the luminance space Y respectively, to extract a third feature and a fourth feature of YUV_A(x, y) and YUV_B(x, y) in the chrominance space U respectively, and then to extract a fifth feature and a sixth feature of YUV_A(x, y) and YUV_B(x, y) in the chrominance space V respectively; the calculation module is configured to perform difference calculation according to the third feature, the fourth feature, the fifth feature and the sixth feature to obtain a similarity index, to perform difference calculation according to the first feature and the second feature, and to obtain the final similarity between the reference image and the image to be evaluated according to the similarity index.
9. The simple and efficient image quality assessment system according to claim 8, wherein YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y, and the first feature is obtained as follows:
extracting the LBP feature of the Y_A(x, y), the specific formula (standard 8-neighbour LBP form; the original formula image is not reproduced) being as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, with s(x) = 1 if x ≥ 0 and 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of the Y_A(x, y), (x_c, y_c) denotes the center pixel point of the YUV_A(x, y), g_c denotes the luminance value of the center pixel point, and g_p denotes the luminance values of the neighbouring pixel points around the YUV_A(x, y); extracting the gradient information of the LBP_Y_A(x, y) with the Sobel operator, the gradient information comprising horizontal-direction gradient information and vertical-direction gradient information, the specific formula (standard Sobel kernels assumed) being as follows:

G1 = S_x ∗ LBP_Y_A(x, y), G2 = S_y ∗ LBP_Y_A(x, y), G_Y_A(x, y) = sqrt(G1^2 + G2^2), where S_x = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] and S_y = S_x^T,

where ∗ denotes the convolution operation, G1 and G2 denote the horizontal-direction gradient information and the vertical-direction gradient information respectively, and G_Y_A(x, y) denotes the gradient information of the LBP_Y_A(x, y); performing blocking processing on the G_Y_A(x, y), specifically dividing it into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the number of sub-blocks after the G_Y_A(x, y) is partitioned; calculating the mean and variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally being {F_Y_A_n | n = 1, …, N}. The YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y, and the second feature is obtained in a manner consistent with the first feature; the second feature is {F_Y_B_n | n = 1, …, N}. The YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U, and the third feature is obtained as follows: partitioning the U_A(x, y) into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; dividing the chrominance value range of the U_A(x, y) into sixteen equal intervals and counting the colour histogram of each second sub-block image over these intervals, so that each second sub-block image yields a sixteen-dimensional vector, the third feature finally being {F_U_A_n | n = 1, …, N}. The YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, the YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and the YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V; the fourth, fifth and sixth features are obtained in a manner consistent with the third feature and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N}, respectively. The similarity index is {β_n | n = 1, …, N}, computed from |F_U_A_n − F_U_B_n|, the difference between the third feature and the fourth feature, and |F_V_A_n − F_V_B_n|, the difference between the fifth feature and the sixth feature, where · denotes the dot-product operation. The final similarity between the reference image and the image to be evaluated is obtained as follows: the difference between the first feature and the second feature is calculated as t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; the similarity λ_n of the reference image and each sub-block image of the image to be evaluated is calculated from t_n and the similarity index β_n; and the final similarity λ between the reference image and the image to be evaluated is obtained by aggregating the per-block similarities λ_n.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011314948.6A CN112381812A (en) | 2020-11-20 | 2020-11-20 | Simple and efficient image quality evaluation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112381812A true CN112381812A (en) | 2021-02-19 |
Family
ID=74587440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011314948.6A Pending CN112381812A (en) | 2020-11-20 | 2020-11-20 | Simple and efficient image quality evaluation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112381812A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160307293A1 (en) * | 2014-01-28 | 2016-10-20 | Sharp Kabushiki Kaisha | Image processing device |
CN109191428A (en) * | 2018-07-26 | 2019-01-11 | 西安理工大学 | Full-reference image quality evaluating method based on masking textural characteristics |
CN110619648A (en) * | 2019-09-19 | 2019-12-27 | 四川长虹电器股份有限公司 | Method for dividing image area based on RGB change trend |
CN111382751A (en) * | 2020-03-11 | 2020-07-07 | 西安应用光学研究所 | Target re-identification method based on color features |
CN111489346A (en) * | 2020-04-14 | 2020-08-04 | 广东工业大学 | Full-reference image quality evaluation method and system |
CN111815601A (en) * | 2020-07-03 | 2020-10-23 | 浙江大学 | Texture image surface defect detection method based on depth convolution self-encoder |
Non-Patent Citations (2)
Title |
---|
LI XIA et al.: "Cross-lingual sentence semantic similarity computation model based on local and global semantic fusion", Journal of Chinese Information Processing *
XIAO YONGQI: "No-reference remote sensing image quality assessment based on natural scene statistics", China Master's Theses Full-text Database, Information Science and Technology, pages 42-47 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103400150B (en) | A kind of method and device that road edge identification is carried out based on mobile platform | |
CN115641336B (en) | Air conditioner sheet metal part defect identification method based on computer vision | |
CN111489346B (en) | Full-reference image quality evaluation method and system | |
CN103093444A (en) | Image super-resolution reconstruction method based on self-similarity and structural information constraint | |
CN108711160B (en) | Target segmentation method based on HSI (high speed input/output) enhanced model | |
CN111563866B (en) | Multisource remote sensing image fusion method | |
CN114359323A (en) | Image target area detection method based on visual attention mechanism | |
CN107451954A (en) | Iterated pixel interpolation method based on image low-rank property | |
Yang et al. | EHNQ: Subjective and objective quality evaluation of enhanced night-time images | |
Zhang et al. | Color-to-gray conversion based on boundary points | |
US8971619B2 (en) | Method and a device for extracting color features | |
CN111611940A (en) | Rapid video face recognition method based on big data processing | |
CN1588447A (en) | Remote sensitive image fusing method based on residual error | |
CN106558047A (en) | Color image quality evaluation method based on complementary colours small echo | |
CN114511567B (en) | Tongue body and tongue coating image identification and separation method | |
CN112381812A (en) | Simple and efficient image quality evaluation method and system | |
CN116402802A (en) | Underwater image quality evaluation method based on color space multi-feature fusion | |
CN114067006B (en) | Screen content image quality evaluation method based on discrete cosine transform | |
CN113379785B (en) | Saliency target detection method integrating boundary priori and frequency domain information | |
Yuan et al. | Color image quality assessment with multi deep convolutional networks | |
CN114463379A (en) | Dynamic capturing method and device for video key points | |
CN112184588A (en) | Image enhancement system and method for fault detection | |
Tang et al. | Zoned mapping network from sdr video to hdr video | |
Xiao et al. | Blind Quality Metric via Measurement of Contrast, Texture, and Colour in Night-Time Scenario. | |
CN111127437A (en) | Full-reference image quality evaluation method based on color space decomposition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||