CN112381812A - Simple and efficient image quality evaluation method and system - Google Patents


Info

Publication number
CN112381812A
CN112381812A
Authority
CN
China
Prior art keywords: feature, yuv, image, sub, follows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011314948.6A
Other languages
Chinese (zh)
Inventor
刘晓华
李�昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youxiang Computing Technology Co ltd
Original Assignee
Shenzhen Youxiang Computing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Youxiang Computing Technology Co ltd filed Critical Shenzhen Youxiang Computing Technology Co ltd
Priority to CN202011314948.6A priority Critical patent/CN112381812A/en
Publication of CN112381812A publication Critical patent/CN112381812A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/045 — Neural networks; combinations of networks
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 10/60 — Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30168 — Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A simple and efficient image quality evaluation method and system are provided. The method comprises the following steps: step S1: converting the RGB color spaces of the reference image and the image to be evaluated into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y) respectively; step S2: extracting a first feature and a second feature on the luminance space Y; step S3: extracting a third feature and a fourth feature on the chrominance space U, and then extracting a fifth feature and a sixth feature on the chrominance space V; step S4: performing difference calculation on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index; step S5: performing difference calculation on the first feature and the second feature obtained in step S2, and obtaining the final similarity between the reference image and the image to be evaluated using the similarity index obtained in step S4. The invention has low algorithm complexity, consumes few resources, and has high practical value.

Description

Simple and efficient image quality evaluation method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a simple and efficient image quality evaluation method and system.
Background
In recent years, with the rapid development of the information age, the amount of image and video information people receive every day has grown explosively. Digital images play many roles in production and life, but distortion is inevitably introduced during acquisition, compression and transmission for various reasons, such as imperfect processing methods, transmission-medium loss and noise pollution, degrading image quality. Therefore, how to evaluate image quality accurately and quickly has become an important problem in many fields such as image acquisition, transmission, compression, restoration and enhancement.
Quality evaluation methods can be divided into three types according to the amount of reference information available: full-reference, reduced-reference (semi-reference) and no-reference quality evaluation. A full-reference image quality evaluation algorithm uses the original image as the reference for the distorted image; a reduced-reference algorithm uses only part of the information of the reference image; a no-reference algorithm uses no information from a reference image as prior data.
The most common approach at present is full-reference image quality evaluation. Traditional full-reference objective metrics include mean squared error (MSE) and peak signal-to-noise ratio (PSNR), which are widely used because they are simple to compute and have clear physical meaning; however, they analyze images only in a statistical sense and do not consider the correlation between pixels. Later researchers proposed the SSIM (structural similarity) algorithm, which evaluates image quality from three aspects: luminance, contrast and structural similarity. Compared with earlier evaluation methods it achieved a qualitative breakthrough and is a landmark algorithm in image quality evaluation. Since its publication, researchers have made many improvements on its basis, all achieving good results. However, most of these algorithms involve complicated calculation processes that are time-consuming and unfavorable for practical application.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The invention aims to provide a simple and efficient image quality evaluation method and system that can evaluate image quality accurately, with low algorithm complexity and low resource consumption, that can meet real-time processing requirements, and that has high practical value.
The invention provides a simple and efficient image quality evaluation method, which comprises the following steps: step S1: converting the RGB color spaces of the reference image and the image to be evaluated into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y) respectively; step S2: extracting a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y) on the luminance space Y, respectively; step S3: extracting a third feature and a fourth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space U, respectively, and then extracting a fifth feature and a sixth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space V, respectively; step S4: performing difference calculation on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index; step S5: performing difference calculation on the first feature and the second feature obtained in step S2, and obtaining the final similarity between the reference image and the image to be evaluated using the similarity index obtained in step S4.
Further, the YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y; the step S2 includes: step S21: extracting the LBP feature of the Y_A(x, y), where the specific formula is as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) represents the LBP feature of the Y_A(x, y), (x_c, y_c) represents a center pixel of the YUV_A(x, y), g_c represents the luminance value of the center pixel, and g_p represents the luminance values of the P neighboring pixels around it; step S22: extracting gradient information of the LBP_Y_A(x, y) obtained in step S21 with the Sobel operator, where the gradient information of the LBP_Y_A(x, y) includes horizontal gradient information and vertical gradient information, and the specific formula is as follows:
G1 = S_x ⊛ LBP_Y_A(x, y),  G2 = S_y ⊛ LBP_Y_A(x, y),  G_Y_A(x, y) = √(G1² + G2²),

where ⊛ represents a convolution operation, S_x and S_y are the horizontal and vertical Sobel kernels, G1 and G2 represent the horizontal and vertical gradient information respectively, and G_Y_A(x, y) represents the gradient information of the LBP_Y_A(x, y); step S23: performing block division on the G_Y_A(x, y) obtained in step S22, specifically dividing it into first sub-block images of a preset size, the first sub-block images being denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the total number of sub-blocks after the G_Y_A(x, y) is partitioned; step S24: calculating the mean and the variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector; the first feature finally obtained is: {F_Y_A_n | n = 1, …, N}.
Further, the YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y; the second feature is calculated in the same manner as in steps S21, S22, S23 and S24, and the second feature is {F_Y_B_n | n = 1, …, N}.
Further, the YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U; the step S3 includes: step S31: partitioning the U_A(x, y), specifically dividing it into second sub-block images of a preset size, the second sub-block images being denoted {U_A_n(x, y) | n = 1, …, N}; step S32: dividing the value range of the U_A(x, y) into sixteen equal parts and computing the color histogram of each second sub-block image, so that each second sub-block image yields a sixteen-dimensional vector; the third feature finally obtained is: {F_U_A_n | n = 1, …, N}.
Further, the YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, the YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and the YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V; the fourth, fifth and sixth features are calculated in the same manner as in steps S31 and S32, and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N} respectively.
Further, the step S4 specifically includes: the similarity index is {β_n | n = 1, …, N}, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where |F_U_A_n − F_U_B_n| represents the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| represents the difference between the fifth feature and the sixth feature, and · represents a dot product operation.
Further, the step S5 includes: step S51: calculating the difference between the first feature and the second feature, with the specific formula: t_n = F_Y_A_n − F_Y_B_n, where t_n represents the difference; step S52: calculating the similarity between the reference image and the image to be evaluated for each sub-block image, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where λ_n represents the similarity of each sub-block image pair and β_n represents the similarity index; step S53: aggregating the final similarity between the reference image and the image to be evaluated, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where λ represents the final similarity between the reference image and the image to be evaluated.
The invention also provides a simple and efficient image quality evaluation system, which comprises a transformation module, an extraction module and a calculation module. The transformation module is used for transforming the RGB color spaces of the reference image and the image to be evaluated into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y); the extraction module is used for extracting a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y) on the luminance space Y respectively, extracting a third feature and a fourth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space U respectively, and then extracting a fifth feature and a sixth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space V respectively; the calculation module is used for performing difference calculation on the third, fourth, fifth and sixth features to obtain a similarity index, performing difference calculation on the first feature and the second feature, and obtaining the final similarity between the reference image and the image to be evaluated using the similarity index.
Further, the YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y, and the first feature is obtained as follows: extracting the LBP feature of the Y_A(x, y), where the specific formula is as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) represents the LBP feature of the Y_A(x, y), (x_c, y_c) represents a center pixel of the YUV_A(x, y), g_c represents the luminance value of the center pixel, and g_p represents the luminance values of the neighboring pixels around it; extracting gradient information of the LBP_Y_A(x, y) with the Sobel operator, where the gradient information includes horizontal gradient information and vertical gradient information, and the specific formula is as follows:

G1 = S_x ⊛ LBP_Y_A(x, y),  G2 = S_y ⊛ LBP_Y_A(x, y),  G_Y_A(x, y) = √(G1² + G2²),

where ⊛ represents a convolution operation, S_x and S_y are the horizontal and vertical Sobel kernels, G1 and G2 represent the horizontal and vertical gradient information respectively, and G_Y_A(x, y) represents the gradient information of the LBP_Y_A(x, y); performing block division on the G_Y_A(x, y), specifically dividing it into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the total number of sub-blocks after the G_Y_A(x, y) is partitioned; calculating the mean and the variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector; the first feature finally obtained is: {F_Y_A_n | n = 1, …, N}.

The YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y, and the second feature is obtained in the same manner as the first feature; the second feature is {F_Y_B_n | n = 1, …, N}. The YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U, and the third feature is obtained as follows: partitioning the U_A(x, y), specifically dividing it into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; dividing the value range of the U_A(x, y) into sixteen equal parts and computing the color histogram of each second sub-block image, so that each second sub-block image yields a sixteen-dimensional vector; the third feature finally obtained is: {F_U_A_n | n = 1, …, N}. The YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, the YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and the YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V; the fourth, fifth and sixth features are obtained in the same manner as the third feature and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N} and {F_V_B_n | n = 1, …, N} respectively.

The similarity index is {β_n | n = 1, …, N}, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where |F_U_A_n − F_U_B_n| represents the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| represents the difference between the fifth feature and the sixth feature, and · represents a dot product operation. The final similarity between the reference image and the image to be evaluated is obtained as follows: calculating the difference between the first feature and the second feature, with the specific formula: t_n = F_Y_A_n − F_Y_B_n, where t_n represents the difference; calculating the similarity between the reference image and the image to be evaluated for each sub-block image, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where λ_n represents the similarity of each sub-block image pair and β_n represents the similarity index; aggregating the final similarity between the reference image and the image to be evaluated, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where λ represents the final similarity between the reference image and the image to be evaluated.
The simple and efficient image quality evaluation method and system provided by the invention can accurately evaluate the quality of the image by obtaining the final similarity of the reference image and the image to be evaluated, has low algorithm complexity and less resource consumption, can meet the requirement of real-time processing, and has very high practical value.
Drawings
Fig. 1 is a first flowchart of a simple and efficient image quality evaluation method according to an embodiment of the present invention.
Fig. 2 is a second flowchart of the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 3 is a schematic diagram of a specific flow of extracting the first feature in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 4 is a schematic diagram of a specific flow of extracting a third feature in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 5 is a schematic diagram of a detailed flow of the final similarity between the reference image and the image to be evaluated in the simple and efficient image quality evaluation method shown in fig. 1.
Fig. 6 is a schematic structural diagram of a simple and efficient image quality evaluation system according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
As shown in fig. 1 to 5, in the present embodiment, a simple and efficient image quality evaluation method is provided, which includes the following steps:
Step S1: the RGB (red, green, blue) color spaces of the reference image and the image to be evaluated are transformed into the YUV (luminance and chrominance) color space, denoted YUV_A(x, y) and YUV_B(x, y), respectively.
In this embodiment, the reference image is RGB_A(x, y) and the image to be evaluated is RGB_B(x, y); both are color RGB images. Specifically, RGB_A(x, y) and RGB_B(x, y) are converted into YUV_A(x, y) and YUV_B(x, y). This color representation is more consistent with the visual characteristics of the human eye because, in the YUV color space, the luminance space Y and the two chrominance spaces U and V are completely separate.
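The patent does not state which RGB-to-YUV conversion matrix is used; the following is a minimal sketch of step S1 using the common BT.601 analog coefficients, which is an assumption.

```python
import numpy as np

# BT.601 analog Y'UV coefficients -- an assumption; the patent does not
# specify the RGB-to-YUV matrix it uses.
RGB2YUV = np.array([
    [0.299,     0.587,     0.114],      # Y  (luminance)
    [-0.14713, -0.28886,   0.436],      # U  (blue-difference chrominance)
    [0.615,    -0.51499,  -0.10001],    # V  (red-difference chrominance)
])

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) float RGB image into its Y, U and V planes."""
    yuv = rgb @ RGB2YUV.T
    return yuv[..., 0], yuv[..., 1], yuv[..., 2]
```

Applied to both images, this yields the YUV_A(x, y) and YUV_B(x, y) planes the later steps operate on; a gray pixel maps to U ≈ V ≈ 0, as expected of a chrominance-free input.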
Step S2: a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y) are extracted on the luminance space Y, respectively.
In this embodiment, YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y, and YUV_B(x, y) is denoted Y_B(x, y) in the luminance space Y.
First, the first feature of YUV_A(x, y) in the luminance space Y is extracted, as shown in the flowchart of fig. 3. In a specific application example, the detailed flow of step S2 includes:
Step S21: the LBP (local binary pattern) feature of Y_A(x, y) is extracted and denoted LBP_Y_A(x, y). The specific formula is as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of Y_A(x, y), (x_c, y_c) is a center pixel of YUV_A(x, y), g_c is the luminance value of the center pixel, and g_p is the luminance value of the p-th neighboring pixel around it.
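Step S21 can be sketched in NumPy as below. The neighbourhood size P = 8, the neighbour ordering, and the border handling (borders are dropped) are all assumptions, since the patent's own formula survives only as an image.

```python
import numpy as np

def lbp_8neighbor(y):
    """8-neighbour LBP of a 2-D luminance plane (a sketch of step S21).

    Each interior pixel g_c is compared with its 8 neighbours g_p;
    a neighbour with g_p >= g_c contributes the bit 2^p to the code.
    """
    c = y[1:-1, 1:-1]                               # centre pixels g_c
    # neighbour offsets in a fixed clockwise order; index p gives weight 2^p
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for p, (dy, dx) in enumerate(shifts):
        gp = y[1 + dy: y.shape[0] - 1 + dy, 1 + dx: y.shape[1] - 1 + dx]
        code |= (gp >= c).astype(np.uint8) << p     # set bit p where g_p >= g_c
    return code
```

On a constant plane every comparison succeeds, so every code is 255; an isolated bright centre pixel yields code 0.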
Step S22: gradient information of LBP _ Y _ a (x, Y) obtained in the above step S21 is extracted by a sobel (sobel) operator. Gradient information of LBP _ Y _ a (x, Y) includes horizontal direction gradient information and vertical direction gradient information. The specific formula is as follows:
Figure RE-GDA0002899065770000083
wherein
Figure RE-GDA0002899065770000084
Representing a convolution operation, G1And G2Respectively, horizontal direction gradient information and vertical direction gradient information, and G _ Y _ a (x, Y) represents gradient information of LBP _ Y _ a (x, Y).
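A sketch of step S22 follows. The standard 3×3 Sobel kernels and the magnitude combination √(G1² + G2²) are assumptions (the patent's formula is an image), and the filtering is implemented as plain correlation with zero padding, which does not affect the magnitude.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T   # vertical kernel is the transpose of the horizontal one

def filter2_same(img, k):
    """3x3 'same'-size filtering with zero padding (no SciPy dependency).
    This is correlation rather than true convolution; for the gradient
    magnitude the sign flip is irrelevant."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gradient_magnitude(lbp_map):
    """G_Y_A = sqrt(G1^2 + G2^2) from the horizontal/vertical Sobel responses."""
    g1 = filter2_same(lbp_map.astype(float), SOBEL_X)
    g2 = filter2_same(lbp_map.astype(float), SOBEL_Y)
    return np.sqrt(g1 ** 2 + g2 ** 2)
```

A flat region produces zero interior response, while a vertical step edge produces a strong horizontal response, which is the behaviour the step relies on.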
Step S23: the G _ Y _ a (x, Y) obtained in the step S22 is subjected to a block division process, and is specifically divided into a first sub-block image with a preset size, and the first sub-block image is marked as { G _ Y _ an(x, Y) | N ═ 1, …, N }, where N denotes the number of all subblocks after G _ Y _ a (x, Y) partitioning.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the first sub-block images may be of a preset size of 30 × 30 pixels, 50 × 50 pixels, or other values.
Step S24: calculating the mean value and the variance of each first sub-block image, and storing the mean value and the variance as features, so that each first sub-block image obtains a two-dimensional vector, and finally obtaining first features as follows: { F _ Y _ An|n=1,…,N}。
Next, the second feature of YUV_B(x, y) in the luminance space Y is extracted. It is calculated in the same manner as in steps S21, S22, S23 and S24, which is not repeated here. The second feature is {F_Y_B_n | n = 1, …, N}.
Step S3: extracting a third feature and a fourth feature of YUV _ a (x, y) and YUV _ B (x, y) in the chrominance space U, respectively, and then extracting a fifth feature and a sixth feature in YUV _ a (x, y) and YUV _ B (x, y) chrominance space V, respectively.
In this embodiment, YUV_A(x, y) is denoted U_A(x, y) in the chrominance space U, YUV_B(x, y) is denoted U_B(x, y) in the chrominance space U, YUV_A(x, y) is denoted V_A(x, y) in the chrominance space V, and YUV_B(x, y) is denoted V_B(x, y) in the chrominance space V.
First, a third feature of YUV _ a (x, y) in the chrominance space U is extracted, as shown in a specific flowchart of a simple and efficient image quality evaluation method shown in fig. 4. In a specific application example, the detailed flow of step S3 of the present invention includes:
Step S31: the U_A(x, y) is partitioned, specifically divided into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the second sub-block images may be of a preset size of 30 × 30 pixels, 50 × 50 pixels, or other values.
Step S32: dividing the U _ A (x, y) into sixteen equal parts, respectively counting the color histogram of each second sub-block image, obtaining a sixteen-dimensional vector from each second sub-block image, and finally obtaining a third characteristic: { F _ U _ An|n=1,…,N}。
Next, the fourth feature of YUV_B(x, y) on the chrominance space U, and the fifth and sixth features of YUV_A(x, y) and YUV_B(x, y) on the chrominance space V, are extracted. The fourth, fifth and sixth features are calculated in the same manner as in steps S31 and S32, which is not repeated here. They are, respectively:
{F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N}, {F_V_B_n | n = 1, …, N}.
Step S4: difference calculation is performed on the third, fourth, fifth and sixth features obtained in step S3 to obtain a similarity index.
Specifically, a similarity index is calculated for each pair of sub-block images into which U_A(x, y), U_B(x, y), V_A(x, y) and V_B(x, y) are divided, based on the features of the chrominance spaces. The similarity index is {β_n | n = 1, …, N}, with the specific formula as follows:

[Equation image in the original; formula not reproduced.]

where |F_U_A_n − F_U_B_n| represents the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| represents the difference between the fifth feature and the sixth feature, and · represents a dot product operation.
Step S5: and performing difference calculation according to the first feature and the second feature obtained in the step S2, and obtaining the final similarity between the reference image and the image to be evaluated according to the similarity index obtained in the step S4.
As shown in figs. 1 and 2, in this embodiment the image quality is evaluated through the final similarity between the reference image and the image to be evaluated; the method is simple, fast, and well suited to practical application. As shown in fig. 5, in a specific application example, the detailed flow of step S5 includes:
Step S51: the difference between the first feature and the second feature is calculated, with the specific formula: t_n = F_Y_A_n − F_Y_B_n, where t_n represents the difference.
Step S52: calculating the similarity of each sub-block image of the reference image and the image to be evaluated, wherein the specific formula is as follows:
Figure RE-GDA0002899065770000102
wherein λ isnRepresenting the degree of similarity, β, of the reference image and each of the sub-block images of the image to be evaluatednIndicating a similarity index.
Step S53: and (3) counting the final similarity of the reference image and the image to be evaluated, wherein the specific formula is as follows:
Figure RE-GDA0002899065770000103
wherein λ represents the final similarity between the reference image and the image to be evaluated.
According to the image quality evaluation method and system, by calculating the similarity between the reference image and the image to be evaluated, image quality can be evaluated accurately; the algorithm complexity is low, resource consumption is small, real-time processing requirements can be met, and the practical value is high.
As shown in fig. 6, the invention further provides a simple and efficient image quality evaluation system, which includes a transformation module 60, an extraction module 61 and a calculation module 62.
The transformation module 60 is configured to transform the RGB color spaces of the reference image and the image to be evaluated into the YUV color space, denoted YUV_A(x, y) and YUV_B(x, y), respectively.
The extraction module 61 is configured to extract a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y) on the luminance space Y respectively, to extract a third feature and a fourth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space U respectively, and then to extract a fifth feature and a sixth feature of YUV_A(x, y) and YUV_B(x, y) on the chrominance space V respectively.
YUV_A(x, y) is denoted Y_A(x, y) in the luminance space Y, and the first feature is obtained as follows:
The LBP feature of Y_A(x, y) is extracted, with the specific formula as follows:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of Y_A(x, y), (x_c, y_c) is a center pixel of YUV_A(x, y), g_c is the luminance value of the center pixel, and g_p is the luminance value of the neighboring pixels around it. The gradient information of LBP_Y_A(x, y) is extracted with the Sobel operator; it includes horizontal gradient information and vertical gradient information, with the specific formula as follows:

G1 = S_x ⊛ LBP_Y_A(x, y),  G2 = S_y ⊛ LBP_Y_A(x, y),  G_Y_A(x, y) = √(G1² + G2²),

where ⊛ represents a convolution operation, S_x and S_y are the horizontal and vertical Sobel kernels, G1 and G2 represent the horizontal and vertical gradient information respectively, and G_Y_A(x, y) represents the gradient information of LBP_Y_A(x, y). The G_Y_A(x, y) is divided into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the total number of sub-blocks after G_Y_A(x, y) is partitioned. The mean and the variance of each first sub-block image are calculated and stored as features, so that each first sub-block image yields a two-dimensional vector; the first feature finally obtained is: {F_Y_A_n | n = 1, …, N}.
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the preset size may be 30 × 30 or 50 × 50 pixels, and other values may also be used.
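The Sobel step can likewise be sketched. The 3×3 kernels and interior-pixel evaluation below are the common textbook choice, assumed here because the patent's own formula survives only as an equation image:

```python
import math

# standard Sobel kernels (assumed; the patent does not reproduce them in text)
SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal-gradient kernel
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # vertical-gradient kernel

def conv3(img, k, x, y):
    """3x3 convolution (correlation form) evaluated at interior pixel (x, y)."""
    return sum(img[y + j - 1][x + i - 1] * k[j][i]
               for j in range(3) for i in range(3))

def sobel_magnitude(img, x, y):
    """Combine horizontal (G1) and vertical (G2) responses as sqrt(G1^2 + G2^2)."""
    g1 = conv3(img, SX, x, y)
    g2 = conv3(img, SY, x, y)
    return math.sqrt(g1 * g1 + g2 * g2)

# vertical edge: constant columns 0 | 0 | 8 -> purely horizontal gradient
img = [[0, 0, 8]] * 3
g = sobel_magnitude(img, 1, 1)
```

For the pure vertical edge, G2 is zero and the magnitude equals the horizontal response |G1| = 32; in the patent's pipeline the input would be the LBP map rather than raw luminance.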
YUV_B(x, y) is denoted as Y_B(x, y) in the luminance space Y; the second feature is obtained in the same way as the first feature, and the second feature is {F_Y_B_n | n = 1, …, N}.
YUV_A(x, y) is denoted as U_A(x, y) in the chrominance space U, and the third feature is obtained as follows: U_A(x, y) is partitioned into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; the value range of U_A(x, y) is divided into sixteen equal intervals, and the color histogram of each second sub-block image is counted over these bins, so that each second sub-block image yields a sixteen-dimensional vector; the third feature is finally {F_U_A_n | n = 1, …, N}.
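A minimal sketch of this third-feature step, assuming 8-bit chroma values (0–255) split into sixteen equal bins; the exact bin boundaries are an assumption, since the text only states "sixteen equal parts":

```python
def chroma_histogram(block, bins=16, max_val=256):
    """16-bin color histogram of one chroma sub-block (assumed 8-bit values)."""
    hist = [0] * bins
    width = max_val // bins            # 16 chroma values per bin
    for row in block:
        for v in row:
            hist[min(v // width, bins - 1)] += 1
    return hist

# toy 2x3 chroma sub-block
block = [[0, 15, 16],
         [128, 255, 255]]
h = chroma_histogram(block)            # one 16-dimensional vector per sub-block
```

Values 0 and 15 fall in the first bin, 16 starts the second, 128 lands in bin 8, and both 255s in the last bin, so the vector sums to the six pixels of the block.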
In this embodiment, the preset size is 20 × 20 pixels. In other embodiments, the preset size may be 30 × 30 or 50 × 50 pixels, and other values may also be used.
YUV_B(x, y) is denoted as U_B(x, y) in the chrominance space U, YUV_A(x, y) is denoted as V_A(x, y) in the chrominance space V, and YUV_B(x, y) is denoted as V_B(x, y) in the chrominance space V. The fourth, fifth, and sixth features are all obtained in the same way as the third feature, and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N}, and {F_V_B_n | n = 1, …, N}, respectively.
The calculating module 62 is configured to perform difference calculation on the third, fourth, fifth, and sixth features to obtain a similarity index, to perform difference calculation on the first and second features, and to obtain the final similarity between the reference image and the image to be evaluated from the similarity index.
In this embodiment, the similarity index is {β_n | n = 1, …, N}, with the specific formula:

[equation image]

where |F_U_A_n − F_U_B_n| denotes the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| denotes the difference between the fifth feature and the sixth feature, and · denotes a point-multiplication operation.
The final similarity between the reference image and the image to be evaluated is obtained as follows: the difference between the first feature and the second feature is calculated, with the specific formula t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; the similarity of each pair of sub-block images of the reference image and the image to be evaluated is then calculated, with the specific formula:

[equation image]

where λ_n denotes the similarity of each pair of sub-block images of the reference image and the image to be evaluated, and β_n denotes the similarity index. The final similarity of the reference image and the image to be evaluated is then obtained over all sub-blocks, with the specific formula:

[equation image]

where λ denotes the final similarity between the reference image and the image to be evaluated.
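Once the per-sub-block similarities λ_n have been computed from t_n and β_n, the final score λ aggregates them over all N sub-blocks. The published text gives this aggregation only as an equation image; a simple mean is assumed in the sketch below:

```python
def final_similarity(per_block_sims):
    """Aggregate per-sub-block similarities lambda_n into one score lambda.
    A plain average is an assumption here; the patent's exact aggregation
    formula is available only as an equation image."""
    return sum(per_block_sims) / len(per_block_sims)

# hypothetical lambda_n values for three sub-block pairs
lam = final_similarity([1.0, 0.8, 0.6])
```

Under this assumption, identical images (all λ_n equal to their maximum) yield the maximal λ, and localized degradations pull the score down in proportion to the fraction of affected sub-blocks.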
In this document, the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", "vertical", "horizontal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for the purpose of clarity and convenience of description of the technical solutions, and thus, should not be construed as limiting the present invention.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. A simple and efficient image quality assessment method is characterized by comprising the following steps: step S1: converting the RGB color space of the reference image and the image to be evaluated into YUV color space which is YUV _ A (x, y) and YUV _ B (x, y) respectively; step S2: extracting a first feature and a second feature of the YUV _ A (x, Y) and the YUV _ B (x, Y) on a luminance space Y respectively; step S3: extracting a third feature and a fourth feature of the YUV _ A (x, y) and the YUV _ B (x, y) on a chrominance space U, respectively, and then extracting a fifth feature and a sixth feature of the YUV _ A (x, y) and the YUV _ B (x, y) on a chrominance space V, respectively; step S4: performing difference calculation on the third feature, the fourth feature, the fifth feature and the sixth feature obtained in the step S3 to obtain a similarity index; step S5: and performing difference calculation according to the first feature and the second feature obtained in the step S2, and obtaining the final similarity between the reference image and the image to be evaluated according to the similarity index obtained in the step S4.
2. The simple and efficient image quality assessment method according to claim 1, wherein said YUV_A(x, y) is denoted as Y_A(x, y) in the luminance space Y; the step S2 includes: step S21: extracting the LBP feature of said Y_A(x, y), with the specific formula:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of said Y_A(x, y), (x_c, y_c) denotes the center pixel point of said YUV_A(x, y), g_c denotes the luminance value of the center pixel point, and g_p denotes the luminance values of the neighboring pixel points around said YUV_A(x, y); step S22: extracting the gradient information of the LBP_Y_A(x, y) obtained in step S21 with the Sobel operator, the gradient information of LBP_Y_A(x, y) comprising horizontal-direction gradient information and vertical-direction gradient information, with the specific formula:

G1 = S_x ⊛ LBP_Y_A(x, y), G2 = S_y ⊛ LBP_Y_A(x, y), G_Y_A(x, y) = √(G1² + G2²),

where ⊛ denotes a convolution operation, G1 and G2 denote the horizontal-direction and vertical-direction gradient information, respectively, and G_Y_A(x, y) denotes the gradient information of said LBP_Y_A(x, y); step S23: partitioning the G_Y_A(x, y) obtained in step S22 into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the number of sub-blocks after said G_Y_A(x, y) is partitioned; step S24: calculating the mean and variance of each first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally being: {F_Y_A_n | n = 1, …, N}.
3. The simple and efficient image quality assessment method according to claim 2, wherein said YUV_B(x, y) is denoted as Y_B(x, y) in the luminance space Y; the second feature is calculated in a manner consistent with steps S21, S22, S23, and S24, and the second feature is {F_Y_B_n | n = 1, …, N}.
4. The simple and efficient image quality assessment method according to claim 3, wherein said YUV_A(x, y) is denoted as U_A(x, y) in the chrominance space U; the step S3 includes: step S31: partitioning the U_A(x, y) into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; step S32: dividing the value range of the U_A(x, y) into sixteen equal intervals, counting the color histogram of each second sub-block image over these bins, obtaining a sixteen-dimensional vector from each second sub-block image, and finally obtaining the third feature: {F_U_A_n | n = 1, …, N}.
5. The simple and efficient image quality assessment method according to claim 4, wherein said YUV_B(x, y) is denoted as U_B(x, y) in the chrominance space U, said YUV_A(x, y) is denoted as V_A(x, y) in the chrominance space V, and said YUV_B(x, y) is denoted as V_B(x, y) in the chrominance space V; the fourth, fifth, and sixth features are calculated in the same manner as steps S31 and S32, and are {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N}, and {F_V_B_n | n = 1, …, N}, respectively.
6. The simple and efficient image quality assessment method according to claim 5, wherein said step S4 specifically comprises: the similarity index is {β_n | n = 1, …, N}, with the specific formula:

[equation image]

where |F_U_A_n − F_U_B_n| denotes the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| denotes the difference between the fifth feature and the sixth feature, and · denotes a point-multiplication operation.
7. The simple and efficient image quality evaluation method according to claim 6, wherein said step S5 includes: step S51: calculating the difference between the first feature and the second feature, with the specific formula t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; step S52: calculating the similarity of each pair of sub-block images of the reference image and the image to be evaluated, with the specific formula:

[equation image]

where λ_n denotes the similarity of the reference image and each sub-block image of the image to be evaluated, and β_n denotes the similarity index; step S53: obtaining the final similarity of the reference image and the image to be evaluated, with the specific formula:

[equation image]

where λ denotes the final similarity between the reference image and the image to be evaluated.
8. A simple and efficient image quality evaluation system, characterized by comprising a transformation module, an extraction module, and a calculation module, wherein the transformation module is configured to transform the RGB color spaces of a reference image and an image to be evaluated into YUV color spaces, YUV_A(x, y) and YUV_B(x, y), respectively; the extraction module is configured to extract a first feature and a second feature of YUV_A(x, y) and YUV_B(x, y), respectively, in the luminance space Y, to extract a third feature and a fourth feature of YUV_A(x, y) and YUV_B(x, y), respectively, in the chrominance space U, and then to extract a fifth feature and a sixth feature of YUV_A(x, y) and YUV_B(x, y), respectively, in the chrominance space V; the calculation module is configured to perform difference calculation on the third, fourth, fifth, and sixth features to obtain a similarity index, to perform difference calculation on the first and second features, and to obtain the final similarity between the reference image and the image to be evaluated from the similarity index.
9. The simple and efficient image quality assessment system according to claim 8, wherein YUV_A(x, y) is denoted as Y_A(x, y) in the luminance space Y, and the first feature is obtained as follows: the LBP feature of said Y_A(x, y) is extracted, with the specific formula:

LBP_Y_A(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where LBP_Y_A(x, y) denotes the LBP feature of said Y_A(x, y), (x_c, y_c) denotes the center pixel point of said YUV_A(x, y), g_c denotes the luminance value of the center pixel point, and g_p denotes the luminance values of the neighboring pixel points around said YUV_A(x, y); the gradient information of said LBP_Y_A(x, y), comprising horizontal-direction gradient information and vertical-direction gradient information, is extracted with the Sobel operator, with the specific formula:

G1 = S_x ⊛ LBP_Y_A(x, y), G2 = S_y ⊛ LBP_Y_A(x, y), G_Y_A(x, y) = √(G1² + G2²),

where ⊛ denotes a convolution operation, G1 and G2 denote the horizontal-direction and vertical-direction gradient information, respectively, and G_Y_A(x, y) denotes the gradient information of said LBP_Y_A(x, y); said G_Y_A(x, y) is partitioned into first sub-block images of a preset size, denoted {G_Y_A_n(x, y) | n = 1, …, N}, where N denotes the number of sub-blocks after said G_Y_A(x, y) is partitioned; the mean and variance of each first sub-block image are calculated and stored as features, so that each first sub-block image yields a two-dimensional vector, the first feature finally being: {F_Y_A_n | n = 1, …, N}; said YUV_B(x, y) is denoted as Y_B(x, y) in the luminance space Y, and the second feature is obtained in the same way as the first feature, the second feature being {F_Y_B_n | n = 1, …, N}; said YUV_A(x, y) is denoted as U_A(x, y) in the chrominance space U, and the third feature is obtained as follows: said U_A(x, y) is partitioned into second sub-block images of a preset size, denoted {U_A_n(x, y) | n = 1, …, N}; the value range of said U_A(x, y) is divided into sixteen equal intervals, the color histogram of each second sub-block image is counted over these bins, and each second sub-block image yields a sixteen-dimensional vector, the third feature finally being: {F_U_A_n | n = 1, …, N}; said YUV_B(x, y) is denoted as U_B(x, y) in the chrominance space U, said YUV_A(x, y) is denoted as V_A(x, y) in the chrominance space V, and said YUV_B(x, y) is denoted as V_B(x, y) in the chrominance space V; the fourth, fifth, and sixth features are obtained in the same way as the third feature, and are respectively: {F_U_B_n | n = 1, …, N}, {F_V_A_n | n = 1, …, N}, and {F_V_B_n | n = 1, …, N}; the similarity index is {β_n | n = 1, …, N}, with the specific formula:

[equation image]

where |F_U_A_n − F_U_B_n| denotes the difference between the third feature and the fourth feature, |F_V_A_n − F_V_B_n| denotes the difference between the fifth feature and the sixth feature, and · denotes a point-multiplication operation; the final similarity between the reference image and the image to be evaluated is obtained as follows: the difference between the first feature and the second feature is calculated, with the specific formula t_n = F_Y_A_n − F_Y_B_n, where t_n denotes the difference; the similarity of each pair of sub-block images of the reference image and the image to be evaluated is calculated, with the specific formula:

[equation image]

where λ_n denotes the similarity of the reference image and each sub-block image of the image to be evaluated, and β_n denotes the similarity index; and the final similarity of the reference image and the image to be evaluated is obtained, with the specific formula:

[equation image]

where λ denotes the final similarity between the reference image and the image to be evaluated.
CN202011314948.6A 2020-11-20 2020-11-20 Simple and efficient image quality evaluation method and system Pending CN112381812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011314948.6A CN112381812A (en) 2020-11-20 2020-11-20 Simple and efficient image quality evaluation method and system


Publications (1)

Publication Number Publication Date
CN112381812A true CN112381812A (en) 2021-02-19

Family

ID=74587440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011314948.6A Pending CN112381812A (en) 2020-11-20 2020-11-20 Simple and efficient image quality evaluation method and system

Country Status (1)

Country Link
CN (1) CN112381812A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160307293A1 (en) * 2014-01-28 2016-10-20 Sharp Kabushiki Kaisha Image processing device
CN109191428A (en) * 2018-07-26 2019-01-11 西安理工大学 Full-reference image quality evaluating method based on masking textural characteristics
CN110619648A (en) * 2019-09-19 2019-12-27 四川长虹电器股份有限公司 Method for dividing image area based on RGB change trend
CN111382751A (en) * 2020-03-11 2020-07-07 西安应用光学研究所 Target re-identification method based on color features
CN111489346A (en) * 2020-04-14 2020-08-04 广东工业大学 Full-reference image quality evaluation method and system
CN111815601A (en) * 2020-07-03 2020-10-23 浙江大学 Texture image surface defect detection method based on depth convolution self-encoder


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Xia et al.: "A cross-language sentence semantic similarity computation model based on local and global semantic fusion", Journal of Chinese Information Processing *
XIAO Yongqi: "No-reference remote sensing image quality assessment based on natural scene statistics", China Master's Theses Full-text Database, Information Science and Technology Series, pages 42-47 *

Similar Documents

Publication Publication Date Title
CN103400150B (en) A kind of method and device that road edge identification is carried out based on mobile platform
CN115641336B (en) Air conditioner sheet metal part defect identification method based on computer vision
CN111489346B (en) Full-reference image quality evaluation method and system
CN103093444A (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN111563866B (en) Multisource remote sensing image fusion method
CN114359323A (en) Image target area detection method based on visual attention mechanism
CN107451954A (en) Iterated pixel interpolation method based on image low-rank property
Yang et al. EHNQ: Subjective and objective quality evaluation of enhanced night-time images
Zhang et al. Color-to-gray conversion based on boundary points
US8971619B2 (en) Method and a device for extracting color features
CN111611940A (en) Rapid video face recognition method based on big data processing
CN1588447A (en) Remote sensitive image fusing method based on residual error
CN106558047A (en) Color image quality evaluation method based on complementary colours small echo
CN114511567B (en) Tongue body and tongue coating image identification and separation method
CN112381812A (en) Simple and efficient image quality evaluation method and system
CN116402802A (en) Underwater image quality evaluation method based on color space multi-feature fusion
CN114067006B (en) Screen content image quality evaluation method based on discrete cosine transform
CN113379785B (en) Saliency target detection method integrating boundary priori and frequency domain information
Yuan et al. Color image quality assessment with multi deep convolutional networks
CN114463379A (en) Dynamic capturing method and device for video key points
CN112184588A (en) Image enhancement system and method for fault detection
Tang et al. Zoned mapping network from sdr video to hdr video
Xiao et al. Blind Quality Metric via Measurement of Contrast, Texture, and Colour in Night-Time Scenario.
CN111127437A (en) Full-reference image quality evaluation method based on color space decomposition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination