CN111598826B - Picture objective quality evaluation method and system based on combined multi-scale picture characteristics - Google Patents


Info

Publication number
CN111598826B
CN111598826B
Authority
CN
China
Prior art keywords
picture
similarity
edge
representing
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910122634.7A
Other languages
Chinese (zh)
Other versions
CN111598826A (en)
Inventor
柳宁
杨琦
徐异凌
孙军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201910122634.7A priority Critical patent/CN111598826B/en
Publication of CN111598826A publication Critical patent/CN111598826A/en
Application granted granted Critical
Publication of CN111598826B publication Critical patent/CN111598826B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a system for objectively evaluating picture quality based on joint multi-scale picture features. The method comprises the following steps. Picture processing: the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively, and edge structure features are extracted from y_1^(n). Edge saliency feature extraction: using a luminance mask and a contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). Verification on existing databases shows that the invention evaluates desktop picture quality with higher accuracy, that its overall performance surpasses the prior art, and that it performs especially well on the desktop-picture distortion types of Gaussian blur, motion blur and JPEG2000 compression.

Description

Picture objective quality evaluation method and system based on combined multi-scale picture characteristics
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for evaluating objective quality of pictures based on joint multi-scale picture features.
Background
With the wide adoption of intelligent terminals such as smartphones, tablets and notebook computers, desktop content pictures, rather than natural pictures, have become the most common and most heavily consumed pictures in daily life. Desktop content pictures are computer-generated pictures combining graphics, text and natural pictures, and are widely used in applications such as desktop games, desktop collaboration and remote education. For these applications, picture quality is particularly important. However, because desktop content pictures and natural pictures have different characteristics, conventional quality evaluation methods designed for natural pictures cannot reflect the distortion of desktop content pictures well: whereas natural pictures have rich colors and smooth edges, desktop content pictures are often limited in color, sharp-edged, and full of repeated graphics; and whereas distortion in natural pictures is generally caused by limited physical sensor capability, distortion in desktop content pictures is generally caused by the computer itself. An accurate and efficient objective quality evaluation method for desktop content pictures is therefore in strong demand.
Patent document CN108335289A (application number: 201810049789.8) discloses a full-reference fused objective image quality evaluation method, comprising: selecting a picture database as the input for model training and grouping pictures by distortion type, where the pictures with different degrees of distortion under each distortion type yield the file names and labels of each group; feature extraction, in which several full-reference metric algorithms are selected, the pictures of each distortion type are scored, and each group of pictures is evaluated with one full-reference metric algorithm to obtain a feature vector, the vectors forming a feature matrix; data preprocessing, in which the distorted-picture labels and the feature-vector scores of each distortion type are normalized to (1, 100) and (0, 1) respectively and transposed to meet the training requirements of the SVM; and feature training, which yields a quality evaluation model.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method and a system for objectively evaluating the quality of a picture based on combined multi-scale picture characteristics.
The invention provides a picture objective quality evaluation method based on joint multi-scale picture features, comprising the following steps:
picture processing step: the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively, and edge structure features are extracted from y_1^(n);
edge saliency feature extraction step: using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
feature similarity calculation step: the edge structure similarity and edge saliency similarity are calculated from the obtained edge structure features and edge saliency features;
feature combination step: the final local quality map is calculated from the obtained edge structure similarity and edge saliency similarity;
feature pooling step: the final objective evaluation score is calculated from the obtained final local quality map.
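The picture processing step above can be sketched with standard Gaussian/Laplacian pyramid construction. The kernel, σ and number of levels below are illustrative assumptions, not the patent's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_pyramids(img, levels=4, sigma=1.0):
    """Build Gaussian (y0) and Laplacian (y1) pyramids of a grayscale picture.

    A minimal sketch of the picture-processing step; sigma and the number
    of levels are illustrative, not taken from the patent.
    """
    y0 = [img.astype(np.float64)]           # y0^(1): the original picture
    for _ in range(levels - 1):
        blurred = gaussian_filter(y0[-1], sigma)
        y0.append(blurred[::2, ::2])        # low-pass, then downsample by 2
    y1 = []
    for n in range(levels - 1):
        up = zoom(y0[n + 1], 2, order=1)    # upsample coarser level (the "↑2")
        up = up[:y0[n].shape[0], :y0[n].shape[1]]
        y1.append(y0[n] - up)               # band-pass (Laplacian) layer
    y1.append(y0[-1])                       # coarsest residual
    return y0, y1
```

Each y1 layer is the difference between a Gaussian level and the upsampled next-coarser level, so the finest layer y1[0] carries the edge structure the method extracts.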
Preferably, in the picture processing step:
the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively.
Preferably, in the edge saliency feature extraction step:
using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). The calculation formulas appear only as images in the original publication and are not reproduced here; their symbols are defined as follows:
C_LM^(n) denotes the luminance-mask result, based on the image characteristics of the picture groups processed by the Gaussian and Laplacian pyramids;
f(·) denotes the masking function (its symbol appears only as an image in the original; f is used here as a stand-in);
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features of layer y_1^(1) are the edge structure features;
f(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the masking function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that ensures the stability of the equation;
C_LCM^(n) denotes the contrast-mask result, based on the image characteristics of the picture group processed by C_LM^(n);
C_LCM^(1), the picture features at n = 1, are the edge saliency features;
a_2 denotes a constant that ensures the stability of the equation;
f(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the masking function;
γ_2 denotes the contrast-detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
Preferably, in the feature similarity calculation step:
the edge structure similarity is calculated from the obtained edge structure features and edge saliency features as:
S_1(x, y) = (2·y_1r^(1)(x, y)·y_1d^(2)(x, y) + T_1) / ((y_1r^(1)(x, y))^2 + (y_1d^(2)(x, y))^2 + T_1)
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
the subscripts r and d denote that a feature is taken from the reference picture or from the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that ensures the stability of the equation.
The edge saliency similarity is calculated from the obtained edge structure features and edge saliency features as:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
where the formulas for MS_1(x, y) and MS_2(x, y) appear only as images in the original publication, and:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity calculated by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity calculated by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that ensures the stability of the equation.
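The similarity maps above share one algebraic shape: two feature maps compared point-wise with a stabilizing non-zero constant. A minimal sketch of that shape, assuming the standard (2ab + T)/(a² + b² + T) form where the original formula survives only as an image:

```python
import numpy as np

def feature_similarity(f_ref, f_dist, T):
    """Point-wise similarity of a reference feature map f_ref and a
    distorted feature map f_dist; T is a small non-zero constant that
    keeps the ratio stable where both responses vanish (assumed form)."""
    return (2.0 * f_ref * f_dist + T) / (f_ref**2 + f_dist**2 + T)
```

Identical maps give a similarity of exactly 1 at every point; the constant T only matters where both responses are near zero.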
Preferably, in the feature combination step:
the local quality map is calculated from the obtained edge structure similarity and edge saliency similarity as:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ·α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y).
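The identity μ = ψ·α just restates that raising S_2 = MS_1^α·MS_2 to the power ψ distributes over the product. A short numeric check, with illustrative exponent values (the patent's tuning values are not given in this text):

```python
import numpy as np

def local_quality(S1, MS1, MS2, xi=0.5, psi=1.0, alpha=0.7):
    """Local quality map S_QM = S1**xi * S2**psi, where
    S2 = MS1**alpha * MS2; exponents are illustrative placeholders."""
    S2 = MS1**alpha * MS2
    return S1**xi * S2**psi
```

The expanded form S1**xi * MS1**(psi*alpha) * MS2**psi gives the identical map, which is exactly the rewriting in the text above.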
Preferably, in the feature pooling step:
the final objective evaluation score is calculated from the obtained local quality map as the weighted average:
s = Σ_(x,y) S_QM(x, y)·w_2(x, y) / Σ_(x,y) w_2(x, y)
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
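A minimal sketch of this pooling as a weighted average of the local quality map, with w_2 as the point-wise maximum of the two scale-2 edge responses. Taking magnitudes of the band-pass responses is an added assumption, made here so the weights stay non-negative:

```python
import numpy as np

def pooled_score(S_QM, y1r_2, y1d_2):
    """Final objective score: weighted average of the local quality map
    S_QM, weighted by w2 = max(|y1r^(2)|, |y1d^(2)|). The absolute
    values are an assumption; Laplacian responses can be negative."""
    w2 = np.maximum(np.abs(y1r_2), np.abs(y1d_2))
    return float((S_QM * w2).sum() / w2.sum())
```

A constant quality map pools to that constant regardless of the weights, which is the sanity check one expects of any weighted mean.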
The invention also provides a desktop content picture objective quality evaluation system based on joint multi-scale picture features, comprising:
a picture processing module: the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively, and edge structure features are extracted from y_1^(n);
an edge saliency feature extraction module: using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
a feature similarity calculation module: the edge structure similarity and edge saliency similarity are calculated from the obtained edge structure features and edge saliency features;
a feature combination module: the final local quality map is calculated from the obtained edge structure similarity and edge saliency similarity;
a feature pooling module: the final objective evaluation score is calculated from the obtained final local quality map.
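The five modules compose as a straight pipeline. The skeleton below is illustrative (the class and argument names are not the patent's); each stage is a caller-supplied callable, so the skeleton itself carries no assumptions about the patent's formulas:

```python
class QualityEvaluator:
    """Skeleton wiring the five modules in the order listed above."""

    def __init__(self, process, extract, similarity, combine, pool):
        self.process = process        # picture processing module
        self.extract = extract        # edge saliency feature extraction module
        self.similarity = similarity  # feature similarity calculation module
        self.combine = combine        # feature combination module
        self.pool = pool              # feature pooling module

    def score(self, reference, distorted):
        feats_r = self.extract(self.process(reference))
        feats_d = self.extract(self.process(distorted))
        s1, s2 = self.similarity(feats_r, feats_d)  # structure + saliency maps
        quality_map = self.combine(s1, s2)          # final local quality map
        return self.pool(quality_map)               # final objective score
```

Wiring the stages as callables keeps each module independently replaceable, mirroring the module decomposition of the claims.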
Preferably, in the picture processing module:
the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively.
In the edge saliency feature extraction module:
using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). The calculation formulas appear only as images in the original publication and are not reproduced here; their symbols are defined as follows:
C_LM^(n) denotes the luminance-mask result, based on the image characteristics of the picture groups processed by the Gaussian and Laplacian pyramids;
f(·) denotes the masking function (its symbol appears only as an image in the original; f is used here as a stand-in);
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features of layer y_1^(1) are the edge structure features;
f(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the masking function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that ensures the stability of the equation;
C_LCM^(n) denotes the contrast-mask result, based on the image characteristics of the picture group processed by C_LM^(n);
C_LCM^(1), the picture features at n = 1, are the edge saliency features;
a_2 denotes a constant that ensures the stability of the equation;
f(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the masking function;
γ_2 denotes the contrast-detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
In the feature similarity calculation module:
the edge structure similarity is calculated from the obtained edge structure features and edge saliency features as:
S_1(x, y) = (2·y_1r^(1)(x, y)·y_1d^(2)(x, y) + T_1) / ((y_1r^(1)(x, y))^2 + (y_1d^(2)(x, y))^2 + T_1)
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
the subscripts r and d denote that a feature is taken from the reference picture or from the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that ensures the stability of the equation.
The edge saliency similarity is calculated from the obtained edge structure features and edge saliency features as:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
where the formulas for MS_1(x, y) and MS_2(x, y) appear only as images in the original publication, and:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity calculated by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity calculated by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that ensures the stability of the equation.
Preferably, in the feature combination module:
the local quality map is calculated from the obtained edge structure similarity and edge saliency similarity as:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ·α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y).
In the feature pooling module:
the final objective evaluation score is calculated from the obtained local quality map as the weighted average:
s = Σ_(x,y) S_QM(x, y)·w_2(x, y) / Σ_(x,y) w_2(x, y)
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
According to the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for objectively evaluating a picture based on joint multi-scale picture features as described in any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
verification on existing databases shows that the invention evaluates desktop picture quality with higher accuracy, that its overall performance surpasses the prior art, and that it performs especially well on the desktop-picture distortion types of Gaussian blur, motion blur and JPEG2000 compression.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments given with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of a process flow provided in the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be noted that a person skilled in the art can make variations and modifications without departing from the inventive concept; all of these fall within the protection scope of the present invention.
The invention provides a picture objective quality evaluation method based on joint multi-scale picture characteristics, which comprises the following steps:
picture processing step: the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively, and edge structure features are extracted from y_1^(n);
edge saliency feature extraction step: using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
feature similarity calculation step: the edge structure similarity and edge saliency similarity are calculated from the obtained edge structure features and edge saliency features;
feature combination step: the final local quality map is calculated from the obtained edge structure similarity and edge saliency similarity;
feature pooling step: the final objective evaluation score is calculated from the obtained final local quality map.
Specifically, in the picture processing step:
the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively.
Specifically, in the edge saliency feature extraction step:
using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). The calculation formulas appear only as images in the original publication and are not reproduced here; their symbols are defined as follows:
C_LM^(n) denotes the luminance-mask result, based on the image characteristics of the picture groups processed by the Gaussian and Laplacian pyramids;
f(·) denotes the masking function (its symbol appears only as an image in the original; f is used here as a stand-in);
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features of layer y_1^(1) are the edge structure features;
f(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the masking function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that ensures the stability of the equation;
C_LCM^(n) denotes the contrast-mask result, based on the image characteristics of the picture group processed by C_LM^(n);
C_LCM^(1), the picture features at n = 1, are the edge saliency features;
a_2 denotes a constant that ensures the stability of the equation;
f(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the masking function;
γ_2 denotes the contrast-detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
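The last three symbols in the list (g(x, y; σ), * and ↑2) correspond to the usual step of bringing a coarser pyramid layer back to the finer grid. A sketch with illustrative kernel size and σ (the patent's values are not given in this text):

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete 2-D Gaussian kernel g(x, y; sigma), normalized to sum to 1."""
    ax = np.arange(size, dtype=float) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def upsample_and_smooth(layer, sigma=1.0):
    """'↑2' then '*': upsample a coarser layer by 2, then convolve with
    g(x, y; sigma) to interpolate. Size 5 and sigma are illustrative."""
    up = zoom(layer, 2, order=0)  # nearest-neighbour upsampling by 2
    return convolve(up, gaussian_kernel(5, sigma), mode='nearest')
```

Normalizing the kernel to unit sum keeps mean brightness unchanged, so the smoothed upsample is directly comparable with the finer layer.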
Specifically, in the feature similarity calculation step:
the edge structure similarity is calculated from the obtained edge structure features and edge saliency features as:
S_1(x, y) = (2·y_1r^(1)(x, y)·y_1d^(2)(x, y) + T_1) / ((y_1r^(1)(x, y))^2 + (y_1d^(2)(x, y))^2 + T_1)
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
the subscripts r and d denote that a feature is taken from the reference picture or from the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that ensures the stability of the equation.
The edge saliency similarity is calculated from the obtained edge structure features and edge saliency features as:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
where the formulas for MS_1(x, y) and MS_2(x, y) appear only as images in the original publication, and:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity calculated by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity calculated by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that ensures the stability of the equation.
Specifically, in the feature combination step:
the local quality map is calculated from the obtained edge structure similarity and edge saliency similarity as:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ·α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y).
Specifically, in the feature pooling step:
the final objective evaluation score is calculated from the obtained local quality map as the weighted average:
s = Σ_(x,y) S_QM(x, y)·w_2(x, y) / Σ_(x,y) w_2(x, y)
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
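Putting the five steps together, the flow of Fig. 1 can be sketched end to end. Everything below is a simplified stand-in (one band-pass layer instead of full pyramids, a generic similarity form, unit exponents), since the patent's exact masks and parameters are not reproduced in this text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evaluate(ref, dist, T=0.01):
    """End-to-end sketch: band-pass features, point-wise similarity,
    combination with unit exponents, weighted-average pooling."""
    def band_pass(img):
        # stand-in for one Laplacian-pyramid layer (edge structure)
        return img - gaussian_filter(img, 1.0)

    def sim(a, b):
        return (2.0 * a * b + T) / (a**2 + b**2 + T)

    e_r, e_d = band_pass(ref), band_pass(dist)   # edge structure features
    s_r, s_d = np.abs(e_r), np.abs(e_d)          # edge saliency stand-in
    S1, S2 = sim(e_r, e_d), sim(s_r, s_d)        # similarity maps
    S_QM = S1 * S2                               # combination, exponents = 1
    w2 = np.maximum(s_r, s_d)                    # pooling weights
    return float((S_QM * w2).sum() / max(w2.sum(), 1e-12))
```

For an identical, non-constant pair of pictures the score is 1; any discrepancy between the band-pass responses pulls it below 1.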
The desktop content picture objective quality evaluation system based on joint multi-scale picture features provided by the invention can be realized through the step flow of the picture objective quality evaluation method based on joint multi-scale picture features. A person skilled in the art may regard the picture objective quality evaluation method as a preferred example of the desktop content picture objective quality evaluation system based on joint multi-scale picture features.
The invention provides a desktop content picture objective quality evaluation system based on joint multi-scale picture features, comprising:
a picture processing module: the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively, and edge structure features are extracted from y_1^(n);
an edge saliency feature extraction module: using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
a feature similarity calculation module: the edge structure similarity and edge saliency similarity are calculated from the obtained edge structure features and edge saliency features;
a feature combination module: the final local quality map is calculated from the obtained edge structure similarity and edge saliency similarity;
a feature pooling module: the final objective evaluation score is calculated from the obtained final local quality map.
Specifically, in the picture processing module:
the original picture is processed with a Gaussian pyramid and a Laplacian pyramid into picture groups of different scales, denoted y_0^(n) and y_1^(n) respectively.
In the edge saliency feature extraction module:
using the luminance mask and the contrast mask, edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). The calculation formulas appear only as images in the original publication and are not reproduced here; their symbols are defined as follows:
C_LM^(n) denotes the luminance-mask result, based on the image characteristics of the picture groups processed by the Gaussian and Laplacian pyramids;
f(·) denotes the masking function (its symbol appears only as an image in the original; f is used here as a stand-in);
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features of layer y_1^(1) are the edge structure features;
f(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the masking function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that ensures the stability of the equation;
C_LCM^(n) denotes the contrast-mask result, based on the image characteristics of the picture group processed by C_LM^(n);
C_LCM^(1), the picture features at n = 1, are the edge saliency features;
a_2 denotes a constant that ensures the stability of the equation;
f(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the masking function;
γ_2 denotes the contrast-detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
The feature similarity calculation module is used for:
calculating the edge structure similarity from the obtained edge structure features and edge saliency features, with the formula:
[equation image: edge structure similarity S_1(x, y)]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d denote that the feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that keeps the equation stable;
and calculating the edge saliency similarity from the obtained edge structure features and edge saliency features, with the formulas:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[equation image: MS_1(x, y)]
[equation image: MS_2(x, y)]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity produced by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity produced by the similarity calculation function;
w_1(x, y) denotes a weight factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that keeps the equation stable;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that keeps the equation stable.
Specifically, the feature combination module:
calculates the local quality similarity from the obtained edge structure similarity and edge saliency similarity, with the formula:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y).
The feature pooling module:
calculates the final objective evaluation score from the obtained local quality map, with the formula:
[equation image: final objective evaluation score s]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
According to the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for objectively evaluating a picture based on joint multi-scale picture features as described in any one of the above.
The present invention will be described more specifically by way of preferred examples.
Preferred example 1:
The invention provides an objective quality evaluation method for desktop content pictures based on joint multi-scale features, which extracts the edge structure features and edge saliency features of a picture to evaluate its degree of distortion. Specifically, a picture feature extraction scheme is designed in combination with the characteristics of the human visual system; the extracted picture features comprise two parts:
1. edge structure features,
2. edge saliency features.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
1. Process the original picture into picture groups of different scales using a Gaussian pyramid and a Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively; with n = 1, extract the edge structure features from y_1^(n).
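The decomposition in step 1 is the standard Gaussian/Laplacian pyramid construction. A minimal Python sketch follows; note that the patent's exact smoothing kernel and its "y function" upsampling operator are in equation images not reproduced here, so the σ = 1.0 Gaussian and nearest-neighbour ↑2 upsampling below are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramids(img, levels=4, sigma=1.0):
    """Build a Gaussian pyramid y0 and a Laplacian pyramid y1 from one picture."""
    y0 = [img.astype(np.float64)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(y0[-1], sigma)
        y0.append(blurred[::2, ::2])           # blur, then downsample by 2
    y1 = []
    for n in range(levels - 1):
        # ↑2 upsample the next-coarser level, smooth, and crop to match
        up = np.repeat(np.repeat(y0[n + 1], 2, axis=0), 2, axis=1)
        up = gaussian_filter(up, sigma)[:y0[n].shape[0], :y0[n].shape[1]]
        y1.append(y0[n] - up)                  # band-pass (Laplacian) layer
    y1.append(y0[-1])                          # coarsest residual layer
    return y0, y1

img = np.random.rand(64, 64)
y0, y1 = build_pyramids(img)
```

The finest Laplacian layer y1[0] is where the method reads its edge structure features (the n = 1 layer in the patent's notation).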
2. Extract the edge saliency features from the two pyramids using a luminance mask (Luminance Masking, LM) and a contrast mask (Luminance Contrast Masking, LCM), calculated as follows:
[equation image: luminance mask C_LM^(n)]
[equation image: contrast mask C_LCM^(n)]
wherein
[equation image: the y function]
G(x, y; σ) denotes a Gaussian kernel function, * denotes convolution, and ↑2 denotes upsampling.
The luminance-mask result C_LM^(n) detects luminance changes recognizable by the human eye; γ_1 is set to 1 according to the Buchsbaum curve. The contrast-mask result C_LCM^(n) detects contrast changes recognizable by the human eye; γ_2 is set to 0.62 according to the contrast-detectable threshold.
With n = 1, the edge saliency features are extracted from C_LCM^(n).
3. Feature similarity calculation
Edge structure similarity:
[equation image: S_1(x, y)]
Edge saliency similarity:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y),
[equation image: MS_1(x, y)]
[equation image: MS_2(x, y)]
wherein subscripts r and d denote that the feature is taken from the reference picture or the distorted picture respectively, w_1(x, y) = y_1r^(2)(x, y), and T_1, T_2, T_3 are set to 0.07, 1×10^(-50) and 0.01, respectively.
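The MS_1 and MS_2 expressions above are lost equation images, but similarity-index metrics of this family (SSIM, FSIM, ESIM) compare a reference feature map a and a distorted feature map b with the standard form (2·a·b + T)/(a² + b² + T). A hedged sketch of that standard form, not necessarily the patent's exact expression:

```python
import numpy as np

def similarity_map(feat_r, feat_d, T):
    """SSIM/FSIM-style pointwise similarity between two feature maps.

    This is the standard form used throughout this metric family; the
    patent's exact MS1/MS2 expressions are in equation images not
    reproduced here, so treat this as an assumption.
    """
    return (2.0 * feat_r * feat_d + T) / (feat_r ** 2 + feat_d ** 2 + T)

fr = np.array([[1.0, 2.0], [3.0, 4.0]])
fd = fr.copy()
s = similarity_map(fr, fd, T=0.07)
# identical feature maps give similarity 1 at every point
```

The constant T (T_1, T_2 or T_3 above) only prevents division by zero when both features vanish; it barely affects the value elsewhere.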
4. Feature combination
The final local quality map is:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ,
where μ = ψ·α, and the remaining parameters [symbol image] are set to 1.8, 0.02 and 0.9, respectively.
5. Feature pooling
The final objective evaluation score is:
[equation image: final objective evaluation score s]
where w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y)).
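The pooling equation itself is a lost equation image; the patent states only w_2(x, y) = max(y_1r^(2), y_1d^(2)). A weighted average of the local quality map, the usual pooling step in this metric family, would look like the following (the weighted-average form is an assumption):

```python
import numpy as np

def pooled_score(s_qm, y1r_2, y1d_2):
    """Pool the local quality map S_QM into a single score.

    w2(x, y) = max(y1r^(2), y1d^(2)) as stated in the patent; the
    weighted-average pooling itself is an assumed form, since the exact
    equation is an image not reproduced here.
    """
    w2 = np.maximum(y1r_2, y1d_2)
    return float((s_qm * w2).sum() / w2.sum())

np.random.seed(0)
s_qm = np.full((4, 4), 0.9)          # a constant local quality map
w_a = np.random.rand(4, 4)           # stand-ins for the scale-2 edge maps
w_b = np.random.rand(4, 4)
score = pooled_score(s_qm, w_a, w_b)
# a constant quality map pools to that constant regardless of the weights
```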
To illustrate the effectiveness of the above model, tests were performed on the authoritative desktop content picture database SIQAD. The SIQAD database includes 20 reference pictures, each corresponding to 7 distortion types at 7 levels, for a total of 980 distorted pictures. The 7 distortion types are Gaussian noise (GN), Gaussian blur (GB), motion blur (MB), contrast change (CC), JPEG compression (JPEG), JPEG2000 compression (J2K) and layer segmentation based coding (LSC).
Three indices proposed by the VQEG expert group, designed specifically to measure the consistency between subjective scores and objective evaluation scores, are used to judge the merit of the model: the Pearson linear correlation coefficient (PLCC), the root mean square error (RMSE) and the Spearman rank-order correlation coefficient (SROCC), calculated as follows:
PLCC = Σ_i (m_i − m̄)(q_i − q̄) / √( Σ_i (m_i − m̄)² · Σ_i (q_i − q̄)² )
RMSE = √( (1/N) · Σ_i (m_i − q_i)² )
SROCC = 1 − 6·Σ_i d_i² / ( N·(N² − 1) )
wherein m and q denote the subjective and objective scores respectively, m̄ and q̄ denote their mean values, N denotes the number of pictures, and d_i denotes the difference between the rank of the i-th picture's subjective score and the rank of its objective score. PLCC and SROCC values lie between 0 and 1, with values closer to 1 indicating better agreement between subjective and objective scores; the smaller the RMSE, the smaller the difference between subjective and objective scores and the better the model performance.
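The three consistency indices can be computed directly with SciPy rather than from the closed-form expressions above:

```python
import numpy as np
from scipy import stats

def consistency_indices(subjective, objective):
    """PLCC, RMSE and SROCC between subjective scores and model predictions."""
    m = np.asarray(subjective, dtype=float)
    q = np.asarray(objective, dtype=float)
    plcc = stats.pearsonr(m, q)[0]     # linear correlation
    srocc = stats.spearmanr(m, q)[0]   # rank-order correlation
    rmse = np.sqrt(np.mean((m - q) ** 2))
    return plcc, rmse, srocc

mos = [10, 20, 30, 40, 50]             # toy subjective scores
pred = [12, 18, 33, 39, 52]            # toy objective scores
plcc, rmse, srocc = consistency_indices(mos, pred)
# a prediction with the same rank ordering gives SROCC = 1.0
```

In VQEG practice a nonlinear (e.g. logistic) regression is usually fitted between objective and subjective scores before computing PLCC and RMSE; that step is omitted in this sketch.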
Table 1 shows the test results on the SIQAD database, where PSNR, SSIM, MSSIM, IWSSIM, VIF, IFC, FSIM and SCQ are quality evaluation methods designed for natural pictures, and SIQM, SQI, ESIM, MDOGS and GFM are objective quality evaluation methods designed for desktop content pictures in recent years. Comparing the data of each method shows:
for overall performance, the proposed method ranks first on the PLCC and RMSE evaluation indices and second on SROCC;
for the individual distortion types, the proposed method obtains 9 first places and 1 third place in total, clearly outperforming the other methods, with a particularly marked advantage on the distortion types GB, MB and J2K.
Table 1. SIQAD database test results:
[table image: per-method PLCC/SROCC/RMSE results on SIQAD; data not reproduced]
those skilled in the art will appreciate that the systems, apparatus, and their respective modules provided herein may be implemented entirely by logic programming of method steps such that the systems, apparatus, and their respective modules are implemented as logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc., in addition to the systems, apparatus, and their respective modules being implemented as pure computer readable program code. Therefore, the system, the apparatus, and the respective modules thereof provided by the present invention may be regarded as one hardware component, and the modules included therein for implementing various programs may also be regarded as structures within the hardware component; modules for implementing various functions may also be regarded as being either software programs for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the invention. The embodiments of the present application and features in the embodiments may be combined with each other arbitrarily without conflict.

Claims (7)

1. A picture objective quality evaluation method based on joint multi-scale picture features, characterized by comprising:
a picture processing step: processing the original picture into picture groups of different scales using a Gaussian pyramid and a Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and extracting the edge structure features from y_1^(n);
an edge saliency feature extraction step: using the luminance mask and the contrast mask, extracting edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
a feature similarity calculation step: calculating the edge structure similarity and the edge saliency similarity from the obtained edge structure features and edge saliency features;
a feature combination step: calculating the final local quality map from the obtained edge structure similarity and edge saliency similarity;
a feature pooling step: calculating the final objective evaluation score from the obtained final local quality map;
the edge saliency feature extraction step comprising:
using the luminance mask and the contrast mask, extracting the edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n), calculated as:
[equation image: luminance mask C_LM^(n)]
[equation image: contrast mask C_LCM^(n)]
[equation image: the y function, defined via the Gaussian kernel G(x, y; σ), convolution and ↑2 upsampling]
wherein:
C_LM^(n) denotes the luminance-mask result, based on the picture features of the picture groups processed by the Gaussian and Laplacian pyramids;
[symbol image] denotes the y function;
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features shown in layer y_1^(1) are the edge structure features;
[symbol image] denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that keeps the equation stable;
C_LCM^(n) denotes the contrast-mask result, based on the picture features of the picture groups processed with C_LM^(n);
the picture features represented by C_LCM^(1), i.e. when n = 1, are the edge saliency features;
a_2 denotes a constant that keeps the equation stable;
[symbol image] denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast-detectable threshold;
G(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling.
2. The picture objective quality evaluation method based on joint multi-scale picture features of claim 1, characterized in that in the feature similarity calculation step:
the edge structure similarity is calculated from the obtained edge structure features and edge saliency features, with the formula:
[equation image: edge structure similarity S_1(x, y)]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d denote that the feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that keeps the equation stable;
and the edge saliency similarity is calculated from the obtained edge structure features and edge saliency features, with the formulas:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[equation image: MS_1(x, y)]
[equation image: MS_2(x, y)]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity produced by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity produced by the similarity calculation function;
w_1(x, y) denotes a weight factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that keeps the equation stable;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that keeps the equation stable.
3. The picture objective quality evaluation method based on joint multi-scale picture features of claim 2, characterized in that in the feature combination step:
the local quality similarity is calculated from the obtained edge structure similarity and edge saliency similarity, with the formula:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y).
4. The picture objective quality evaluation method based on joint multi-scale picture features of claim 3, characterized in that in the feature pooling step:
the final objective evaluation score is calculated from the obtained local quality map, with the formula:
[equation image: final objective evaluation score s]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
5. A desktop content picture objective quality evaluation system based on joint multi-scale picture features, characterized by comprising:
a picture processing module: processing the original picture into picture groups of different scales using a Gaussian pyramid and a Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and extracting the edge structure features from y_1^(n);
an edge saliency feature extraction module: using the luminance mask and the contrast mask, extracting edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
a feature similarity calculation module: calculating the edge structure similarity and the edge saliency similarity from the obtained edge structure features and edge saliency features;
a feature combination module: calculating the final local quality map from the obtained edge structure similarity and edge saliency similarity;
a feature pooling module: calculating the final objective evaluation score from the obtained final local quality map;
the picture processing module:
processing the original picture into picture groups of different scales using a Gaussian pyramid and a Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively;
the edge saliency feature extraction module:
using the luminance mask and the contrast mask, extracting the edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n), calculated as:
[equation image: luminance mask C_LM^(n)]
[equation image: contrast mask C_LCM^(n)]
[equation image: the y function, defined via the Gaussian kernel G(x, y; σ), convolution and ↑2 upsampling]
wherein:
C_LM^(n) denotes the luminance-mask result, based on the picture features of the picture groups processed by the Gaussian and Laplacian pyramids;
[symbol image] denotes the y function;
y_1^(n) denotes the picture group processed by the Laplacian pyramid;
y_0^(n) denotes the picture group processed by the Gaussian pyramid;
n denotes the layer index;
the picture features shown in layer y_1^(1) are the edge structure features;
[symbol image] denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance-contrast threshold;
|·| denotes taking the absolute value;
a_1 denotes a constant that keeps the equation stable;
C_LCM^(n) denotes the contrast-mask result, based on the picture features of the picture groups processed with C_LM^(n);
the picture features represented by C_LCM^(1), i.e. when n = 1, are the edge saliency features;
a_2 denotes a constant that keeps the equation stable;
[symbol image] denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast-detectable threshold;
G(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling;
the feature similarity calculation module:
calculating the edge structure similarity from the obtained edge structure features and edge saliency features, with the formula:
[equation image: edge structure similarity S_1(x, y)]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d denote that the feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) when n = 2;
T_1 denotes a non-zero constant that keeps the equation stable;
and calculating the edge saliency similarity from the obtained edge structure features and edge saliency features, with the formulas:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[equation image: MS_1(x, y)]
[equation image: MS_2(x, y)]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
MS_1(x, y) denotes the edge structure similarity produced by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity produced by the similarity calculation function;
w_1(x, y) denotes a weight factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) when n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) when n = 1;
T_2 denotes a non-zero constant that keeps the equation stable;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) when n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) when n = 2;
T_3 denotes a non-zero constant that keeps the equation stable.
6. The desktop content picture objective quality evaluation system based on joint multi-scale picture features of claim 5, characterized in that the feature combination module:
calculates the local quality similarity from the obtained edge structure similarity and edge saliency similarity, with the formula:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) in S_2(x, y);
and the feature pooling module:
calculates the final objective evaluation score from the obtained local quality map, with the formula:
[equation image: final objective evaluation score s]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weight parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) when n = 2.
7. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the method for objective quality assessment of pictures based on joint multi-scale picture features of any one of claims 1 to 4.
CN201910122634.7A 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics Active CN111598826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910122634.7A CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910122634.7A CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Publications (2)

Publication Number Publication Date
CN111598826A CN111598826A (en) 2020-08-28
CN111598826B true CN111598826B (en) 2023-05-02

Family

ID=72183150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910122634.7A Active CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Country Status (1)

Country Link
CN (1) CN111598826B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288699B (en) * 2020-10-23 2024-02-09 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN117952968A (en) * 2024-03-26 2024-04-30 沐曦集成电路(上海)有限公司 Image quality evaluation method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646269A (en) * 2012-02-29 2012-08-22 中山大学 Image processing method and device based on Laplace pyramid
CN104143188A (en) * 2014-07-04 2014-11-12 上海交通大学 Image quality evaluation method based on multi-scale edge expression
CN107578404A (en) * 2017-08-22 2018-01-12 浙江大学 The complete of view-based access control model notable feature extraction refers to objective evaluation method for quality of stereo images
CN109255358A (en) * 2018-08-06 2019-01-22 浙江大学 A kind of 3D rendering quality evaluating method of view-based access control model conspicuousness and depth map

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10977509B2 (en) * 2017-03-27 2021-04-13 Samsung Electronics Co., Ltd. Image processing method and apparatus for object detection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646269A (en) * 2012-02-29 2012-08-22 中山大学 Image processing method and device based on Laplace pyramid
CN104143188A (en) * 2014-07-04 2014-11-12 上海交通大学 Image quality evaluation method based on multi-scale edge expression
CN107578404A (en) * 2017-08-22 2018-01-12 浙江大学 The complete of view-based access control model notable feature extraction refers to objective evaluation method for quality of stereo images
CN109255358A (en) * 2018-08-06 2019-01-22 浙江大学 A kind of 3D rendering quality evaluating method of view-based access control model conspicuousness and depth map

Also Published As

Publication number Publication date
CN111598826A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
Xue et al. Learning without human scores for blind image quality assessment
US11176408B2 (en) Tire image recognition method and tire image recognition device
TWI665639B (en) Method and device for detecting tampering of images
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
WO2022022154A1 (en) Facial image processing method and apparatus, and device and storage medium
Huang et al. Non-uniform patch based face recognition via 2D-DWT
Zhang et al. A no-reference evaluation metric for low-light image enhancement
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
US20060147107A1 (en) Method and system for learning-based quality assessment of images
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN111598826B (en) Picture objective quality evaluation method and system based on combined multi-scale picture characteristics
CN110969089B (en) Lightweight face recognition system and recognition method in noise environment
CN105528620B (en) method and system for combined robust principal component feature learning and visual classification
CN108875623B (en) Face recognition method based on image feature fusion contrast technology
CN113255557B (en) Deep learning-based video crowd emotion analysis method and system
Zhang et al. Dual-channel multi-task CNN for no-reference screen content image quality assessment
CN108171689A (en) A kind of identification method, device and the storage medium of the reproduction of indicator screen image
CN114220143A (en) Face recognition method for wearing mask
WO2016145571A1 (en) Method for blind image quality assessment based on conditional histogram codebook
CN108010023B (en) High dynamic range image quality evaluation method based on tensor domain curvature analysis
CN111047618B (en) Multi-scale-based non-reference screen content image quality evaluation method
CN111104941B (en) Image direction correction method and device and electronic equipment
CN112990213B (en) Digital multimeter character recognition system and method based on deep learning
CN107729885B (en) Face enhancement method based on multiple residual error learning
Zheng et al. Heteroscedastic sparse representation based classification for face recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant