CN113850785A - Method and system for generating and evaluating image quality model, and equipment and medium thereof - Google Patents


Info

Publication number: CN113850785A
Application number: CN202111136728.3A
Authority: CN (China)
Prior art keywords: image, neural network, trained, image quality, convolutional neural
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 翟英明 (Zhai Yingming), 王城 (Wang Cheng)
Current and original assignee: Spreadtrum Communications Shanghai Co Ltd (listed assignees may be inaccurate)
Application filed by Spreadtrum Communications Shanghai Co Ltd
Priority claimed from application CN202111136728.3A
Publication of CN113850785A

Classifications

    • G06T 7/0002: Inspection of images, e.g. flaw detection (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T 7/00: Image analysis)
    • G06N 3/045: Combinations of networks (G06N: Computing arrangements based on specific computational models; G06N 3/00: Computing arrangements based on biological models; G06N 3/02: Neural networks; G06N 3/04: Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (G06N 3/02: Neural networks)
    • G06T 2207/10004: Still image; photographic image (G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/10: Image acquisition modality)
    • G06T 2207/30168: Image quality inspection (G06T 2207/30: Subject of image; context of image processing)


Abstract

The invention provides a method and a system for generating and evaluating an image quality model, together with corresponding equipment and a storage medium. The method for generating the image quality model comprises: constructing a training image sample set with N types of image features; inputting the training image samples in the set into a convolutional neural network model to be trained, where the model comprises at least two deep neural network base models; extracting features from the training image samples with the model to obtain training image feature information; calculating, with a loss function, a loss value between the training image feature information and standard image feature information; adjusting parameters in the model according to the loss value; and generating an image quality model comprising the adjusted parameters. The image quality model of the invention assists a test engineer in evaluating the imaging quality of a terminal camera, improving both test efficiency and evaluation accuracy.

Description

Method and system for generating and evaluating image quality model, and equipment and medium thereof
Technical Field
The invention belongs to the technical field of image testing, and particularly relates to a method and a system for generating and evaluating an image quality model, and equipment and a medium thereof.
Background
As competition in the smartphone market grows increasingly fierce, the camera mode of a smartphone determines, to a great extent, how manufacturers compete.
People take pictures in daily life and while traveling, and the number of pictures stored in personal photo albums has grown explosively. This means an image test engineer needs a measure of visual perception when tuning a smartphone's camera mode; yet even professional photographers find it hard to explain which features most influence the aesthetic quality of an image. Although aesthetics are difficult to describe, image test engineers still evaluate images on basic factors such as automatic exposure, white balance, and automatic focusing.
The existing image quality evaluation method mainly relies on an image test engineer judging camera imaging quality from personal subjective preference; it is inefficient, highly subjective, and its results are insufficiently reliable.
Disclosure of Invention
The invention aims to provide a method and a system for generating and evaluating an image quality model, equipment and a medium thereof, which improve the efficiency of image quality evaluation and the accuracy of an evaluation result.
To achieve the above object, in a first aspect, the present invention provides a method for generating an image quality model, the method comprising: constructing a training image sample set with N types of image features, where N is a positive integer; inputting the training image samples in the set into a convolutional neural network model to be trained, the model comprising at least two deep neural network base models; performing feature extraction on the training image samples with the model to obtain training image feature information; calculating, with a loss function, a loss value between the image feature information of the training image samples and the standard image feature information of verification image samples; adjusting parameters in the model according to the loss value; and generating an image quality model comprising the parameters.
The method for generating the image quality model has the following beneficial effects. A training image sample set with N types of image features is fed into a convolutional neural network model to be trained, which extracts features from the training image samples. Because the model comprises at least two deep neural network base models, the resulting training image feature information is comprehensive. A loss function then gives the loss value between the training image feature information and the standard image feature information, the model's parameters are adjusted according to that loss value, and an image quality model comprising the adjusted parameters is generated.
Optionally, the convolutional neural network model to be trained includes at least two deep neural network base models, as follows: the convolutional neural network model to be trained comprises GoogleNet, ResNet, MobileNet, and ResMobile, where ResMobile comprises a residual network module and a depth separable convolution module. The beneficial effects are that: ResMobile applies the residual idea from ResNet to the MobileNet network. Specifically, borrowing the ResNet approach, a parallel 1 × 1 convolution branch and an identity mapping branch are added to each convolution block during training to form a ResMobile Block, and the ResMobile Block is then converted into a single convolution. This makes the architecture more flexible, the width of each layer easy to change, and the parallelism of the resulting single-path architecture high, and it saves memory.
Optionally, the training image sample set includes training image samples with different exposure compensations, training image samples with different focusing modes, and training image samples with different white balances. The beneficial effects are that: training image samples with different exposure compensation, training image samples with different focusing modes and training image samples with different white balance are input into the convolutional neural network model to be trained for training, and various picture characteristics are adopted for training, so that the reliability of the convolutional neural network model to be trained is improved.
Optionally, generating an image quality model comprising the parameters includes: generating the image quality model comprising the parameters when the number of iterations of the convolutional neural network model to be trained reaches a set value, or when the loss value of the loss function reaches a target value. The beneficial effects are that: with a preset iteration count, training stops once that count is reached, avoiding an unbounded training loop during long iterative runs; alternatively, parameters are adjusted according to the loss value until the loss computed by the loss function reaches a preset target, yielding the image quality model comprising the parameters. Either way, the image quality model is generated quickly and reliably.
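The two stopping conditions just described can be sketched in a few lines of Python. This is an illustrative sketch only; the function names, default values, and the toy loss sequence are assumptions, not part of the patent:

```python
def train(model_step, max_iters=100, target_loss=0.05):
    """Run training until the iteration cap or the loss target is reached.

    model_step() performs one training iteration (forward pass, loss
    computation, parameter adjustment) and returns that iteration's loss.
    """
    loss = float("inf")
    for i in range(1, max_iters + 1):
        loss = model_step()
        if loss <= target_loss:   # loss reached the preset target value
            return i, loss
    return max_iters, loss        # preset iteration count reached

# Toy stand-in for a real training step: the loss halves every call,
# so the target value is reached before the iteration cap.
losses = iter([0.8, 0.4, 0.2, 0.1, 0.04, 0.02])
iters, final_loss = train(lambda: next(losses), max_iters=6, target_loss=0.05)
```

Either condition alone ends training, matching the text: the iteration cap bounds the run time, while the loss target lets training finish early once the model is good enough.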
In a second aspect, an embodiment of the present invention provides an evaluation method for an image quality model, the method being applied to the image quality model described above, and the method including:
inputting a test image into the image quality model and obtaining an output result of the image quality model, the output result comprising reference scores corresponding respectively to the N types of image features of the test image; and performing a weighted summation of the reference scores corresponding to the N types of image features to obtain the image quality score of the test image.
The evaluation method of the image quality model provided by the invention has the following beneficial effects. The test image is input into the image quality model to obtain an output result comprising reference scores corresponding respectively to the N types of image features of the test image; these reference scores are then summed with weights to obtain the image quality score of the test image. This improves the efficiency of image quality evaluation and the accuracy of the result, and lets a test engineer quickly judge the quality of the corresponding image from the score.
In a third aspect, the present invention provides a system for generating an image quality model, comprising:
the system comprising: a construction unit for constructing a training image sample set with N types of image features, N being a positive integer; a processing unit, electrically connected to the construction unit, for inputting the training image samples in the set into a convolutional neural network model to be trained (the model comprising at least two deep neural network base models) and for performing feature extraction on the training image samples with the model to obtain training image feature information; an acquisition unit, electrically connected to the processing unit, for acquiring the standard image feature information of the verification image samples; and a calculation unit, electrically connected to the acquisition unit, for calculating, with a loss function, the loss value between the training image feature information and the standard image feature information. The processing unit is further configured to adjust parameters in the convolutional neural network model to be trained according to the loss value and to generate an image quality model comprising the parameters.
The image quality model generation system provided by the invention has the following beneficial effects. The processing unit feeds a training image sample set with N types of image features into the convolutional neural network model to be trained and uses the model to extract features from the training image samples; because the model comprises at least two deep neural network base models, the training image feature information obtained is comprehensive. The calculation unit then computes, with a loss function, the loss value between the image feature information of the training image samples and the standard image feature information of the verification image samples, and finally the processing unit adjusts parameters in the model according to that loss value, generating an image quality model comprising the adjusted parameters.
Optionally, the convolutional neural network model to be trained includes at least two deep neural network base models, as follows: the convolutional neural network model to be trained comprises GoogleNet, ResNet, MobileNet, and ResMobile, where ResMobile comprises a residual network module and a depth separable convolution module. The beneficial effects are that: the ResMobile in the processing unit applies the residual idea from ResNet to the MobileNet network. Specifically, borrowing the ResNet approach, a parallel 1 × 1 convolution branch and an identity mapping branch are added to each convolution block of the MobileNet network during training to form a ResMobile Block, and the ResMobile Block is then converted into a single convolution, making the architecture more flexible, the width of each layer easy to change, the parallelism high, and the memory consumption low.
Optionally, the training image sample set constructed by the construction unit includes training image samples with different exposure compensations, different focusing modes, and different white balances. The beneficial effects are that: training image samples with different exposure compensations, focusing modes, and white balances are input into the convolutional neural network model to be trained, so training covers a variety of picture features and the reliability of the trained model is improved.
Optionally, the processing unit is configured to generate an image quality model comprising the parameters when the number of iterations of the convolutional neural network model to be trained reaches a set value, or when the loss value of the loss function reaches a target value. The beneficial effects are that: with a preset iteration count, training stops once that count is reached, avoiding an unbounded training loop during long iterative runs; alternatively, parameters are adjusted according to the loss value until the loss computed by the loss function reaches a preset target, yielding the image quality model comprising the parameters. Either way, the image quality model is generated quickly and reliably.
In a fourth aspect, an embodiment of the present invention provides an evaluation system for an image quality model, the system being applied to the above-mentioned generation system for an image quality model, the evaluation system including:
an input unit for inputting a test image into the image quality model; an acquisition unit, electrically connected to the input unit, for acquiring the output result of the image quality model, the output result comprising reference scores corresponding respectively to the N types of image features of the test image; and a calculation unit, electrically connected to the acquisition unit, for performing a weighted summation of the reference scores corresponding to the N types of image features to obtain the image quality score of the test image.
The evaluation system of the image quality model provided by the invention has the beneficial effects that: the test image is input to the image quality model through the input unit to test the test image, the acquisition unit acquires the output result of the image quality model, the output result comprises reference scores corresponding to the N types of image features of the test image respectively, and finally the calculation unit performs weighted summation on the reference scores corresponding to the N types of image features respectively to obtain the image quality score of the test image.
In a fifth aspect, the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method steps of the first aspect when executing the computer program.
The beneficial effect of the electronic device is that execution of the computer program by the processor implements the operations of the above method.
In a sixth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps of the first aspect described above.
The beneficial effect of the computer-readable storage medium of the present invention is that execution of the stored computer program implements the above method.
Drawings
FIG. 1 is a flowchart of a method for generating an image quality model provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a MobileNet three-layer network architecture provided by the present invention;
FIG. 3 is a schematic diagram of applying the residual idea to one layer of MobileNet, provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a data set of training image samples with different exposure compensation corresponding to verification image samples with different exposure compensation according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for evaluating an image quality model according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an evaluation method of an image quality model according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an operation interface and an execution display interface of an evaluation system of an image quality model according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a system for generating an image quality model according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an evaluation system of an image quality model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and similar words are intended to mean that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
As competition in the smartphone market grows increasingly fierce, the camera mode of a smartphone determines, to a great extent, how manufacturers compete, so the quality of the pictures a camera takes determines market share. However, the existing image quality evaluation method mainly relies on image test engineers judging camera imaging quality from personal subjective preference; it is inefficient, highly subjective, and its results are insufficiently reliable.
In view of the existing problems, an embodiment of the present invention provides a method for generating an image quality model, which is shown in fig. 1 and includes:
s101: and constructing a training image sample set with N types of image characteristics, wherein N is a positive integer.
In this step, the constructed training set of the N-type image features mainly includes training image samples with different exposure compensations, training image samples with different focusing modes, and training image samples with different white balances.
S102: and inputting the training image samples in the training image sample set to a convolutional neural network model to be trained, wherein the convolutional neural network model to be trained comprises at least two deep neural network basic models.
In this step, training image samples with different exposure compensations, different focusing modes, and different white balances are input into the convolutional neural network model to be trained, so training covers a variety of picture features and the reliability of the trained model is improved.
In addition, the convolutional neural network model to be trained comprises GoogleNet, ResNet, MobileNet, and ResMobile. In this embodiment, the model is composed of GoogleNet, ResNet, MobileNet, and ResMobile, where ResMobile includes a residual network module and a depth separable convolution module. Referring to fig. 2 and 3: fig. 2 is a schematic diagram of the first three layers of the MobileNet network structure, and fig. 3 is a schematic diagram of applying the residual idea to one layer of MobileNet. That is, ResMobile applies the residual idea from ResNet to the MobileNet network. Specifically, borrowing the ResNet approach, a parallel 1 × 1 convolution branch and an identity mapping branch are added to each convolution block during training to form a ResMobile Block, and the ResMobile Block is then converted into a single convolution, making the architecture more flexible, the width of each layer easy to change, the parallelism high, and the memory consumption low.
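The branch merging just described, i.e. training with a parallel 1 × 1 convolution branch and an identity mapping branch and then folding both into a single convolution, can be checked numerically on a single-channel toy example. This is an illustrative plain-Python sketch, not the patent's implementation; multi-channel depthwise convolutions fold the same way, kernel by kernel:

```python
import random

def conv2d_same(x, k):
    """'Same' 2-D cross-correlation with zero padding (pure-Python lists)."""
    n, kh = len(x), len(k)
    p = kh // 2
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for a in range(kh):
                for b in range(kh):
                    ii, jj = i + a - p, j + b - p
                    if 0 <= ii < n and 0 <= jj < n:
                        s += x[ii][jj] * k[a][b]
            out[i][j] = s
    return out

random.seed(0)
x  = [[random.random() for _ in range(5)] for _ in range(5)]
k3 = [[random.random() for _ in range(3)] for _ in range(3)]  # main 3x3 branch
k1 = random.random()                                          # parallel 1x1 branch weight

# Training-time block output: 3x3 conv + 1x1 conv + identity mapping, summed.
y3 = conv2d_same(x, k3)
y_train = [[y3[i][j] + k1 * x[i][j] + x[i][j] for j in range(5)] for i in range(5)]

# Inference-time block: fold the 1x1 branch and the identity branch into
# the centre of a single merged 3x3 kernel (+1.0 is the identity branch).
k_merged = [row[:] for row in k3]
k_merged[1][1] += k1 + 1.0
y_infer = conv2d_same(x, k_merged)

max_err = max(abs(y_train[i][j] - y_infer[i][j]) for i in range(5) for j in range(5))
```

The merged kernel reproduces the three-branch output to floating-point precision, which is why the converted block keeps the trained behaviour while running as one plain convolution.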
S103: and performing feature extraction on the training image sample by using the convolutional neural network model to be trained to obtain training image feature information.
In this step, feature extraction is performed on the training image samples by the convolutional neural network model to be trained, yielding training image feature information. Because the model in this embodiment is composed of GoogleNet, ResNet, MobileNet, and ResMobile, the extracted image feature information is more comprehensive and detailed.
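How the four base models combine their outputs is not spelled out here; one plausible reading is that each backbone extracts a feature vector from the same image and the vectors are concatenated into a more comprehensive feature. The sketch below uses toy stand-in extractors; every function and name is an illustrative assumption, not the patent's networks:

```python
def extract_features(image, extractors):
    """Concatenate the feature vectors produced by each backbone extractor."""
    feats = []
    for extractor in extractors:
        feats.extend(extractor(image))   # append this backbone's features
    return feats

# Toy per-backbone extractors standing in for GoogleNet, ResNet,
# MobileNet, and ResMobile (real networks would return long vectors).
backbones = [
    lambda img: [sum(img)],              # stand-in for "GoogleNet"
    lambda img: [max(img)],              # stand-in for "ResNet"
    lambda img: [min(img)],              # stand-in for "MobileNet"
    lambda img: [sum(img) / len(img)],   # stand-in for "ResMobile"
]
features = extract_features([1.0, 2.0, 3.0], backbones)
```

Whether the patent concatenates, averages, or otherwise aggregates the per-backbone features is not stated in this section; concatenation is shown only because it preserves all information from every backbone.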
S104: and acquiring standard image characteristic information of the verification image sample.
Standard image feature information of the verification image samples is acquired, providing the reference against which parameters in the convolutional neural network model to be trained are subsequently adjusted.
S105: and calculating a loss value between the training image characteristic information and the standard image characteristic information by using a loss function.
In this step, the loss value between the training image feature information and the standard image feature information is calculated, and the obtained actual loss value is used for subsequent parameter adjustment, so that the reliability of parameter adjustment can be improved.
Further, referring to fig. 4, fig. 4 is a schematic diagram of a data set in which training image samples with different exposure compensations correspond to verification image samples with different exposure compensations. The training and verification data sets each comprise nine categories of pictures: overall too dark, overall darker, some areas too dark, some areas darker, overall too bright, overall brighter, some areas too bright, some areas brighter, and normal. For example, the loss value between the "overall too dark" image feature information of the training samples and that of the verification samples is computed with the loss function, and the remaining categories in the two data sets are processed in turn in the same way.
It should be noted that the loss values between training and verification image samples with different focusing modes, and between training and verification image samples with different white balances, are calculated in the same manner and are not repeated here.
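The per-category loss computation described above can be sketched as follows. The nine exposure categories come from the text; the mean-squared-error loss, the dictionary layout, and the toy feature vectors are illustrative assumptions, since the patent does not fix a particular loss function:

```python
CATEGORIES = [
    "overall too dark", "overall darker", "some areas too dark",
    "some areas darker", "overall too bright", "overall brighter",
    "some areas too bright", "some areas brighter", "normal",
]

def mse(pred, standard):
    """Mean squared error between two equal-length feature vectors."""
    return sum((p - s) ** 2 for p, s in zip(pred, standard)) / len(pred)

def category_losses(train_feats, standard_feats):
    """One loss value per category; both arguments map category -> feature vector."""
    return {c: mse(train_feats[c], standard_feats[c]) for c in CATEGORIES}

# Toy feature vectors: every category uses the same pair here, purely
# to keep the example short.
train    = {c: [0.2, 0.4] for c in CATEGORIES}
standard = {c: [0.0, 0.0] for c in CATEGORIES}
losses   = category_losses(train, standard)
```

The same loop would run unchanged over the focusing-mode and white-balance data sets mentioned in the note above.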
S106: and adjusting parameters in the convolutional neural network model to be trained according to the loss value.
This step optimizes the convolutional neural network model to be trained by continuously adjusting its parameters.
S107: an image quality model including the parameters is generated.
In this embodiment, a training image sample set with N types of image features is fed into a convolutional neural network model to be trained, and the model extracts features from the training image samples. Because the model comprises at least two deep neural network base models, the training image feature information obtained is comprehensive. A loss function gives the loss value between the training image feature information and the standard image feature information, parameters in the model are adjusted according to that loss value, and an image quality model with the adjusted parameters is generated.
Another embodiment of the invention provides an evaluation method for an image quality model, shown in fig. 5 and applied to the image quality model generated above, the method comprising:
s501: a test image is input to the image quality model.
In this step, referring to fig. 6 (a schematic diagram of the evaluation method of the image quality model), test pictures are input into the image quality model. The image quality model comprises the trained GoogleNet, ResNet, MobileNet, and ResMobile, and each test picture is input into all four networks.
S502: and acquiring an output result of the image quality model, wherein the output result comprises reference scores respectively corresponding to the N types of image characteristics of the test image.
The output result of the image quality model is acquired; the features are aggregated with a statistical aggregation structure, and each reference score is expressed as a probability obtained via softmax.
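A minimal sketch of expressing a reference score as a softmax probability over per-category outputs; the logit values below are made up for illustration:

```python
import math

def softmax(logits):
    """Convert raw per-category outputs into probabilities that sum to 1."""
    m = max(logits)                        # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy raw outputs for one image feature (illustrative values only).
probs = softmax([2.0, 1.0, 0.1])
```

The highest raw output maps to the highest probability, so the probability of the best-matching category can serve directly as the feature's reference score.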
S503: and carrying out weighted summation on the reference scores respectively corresponding to the N types of image characteristics to obtain the image quality score of the test image.
Illustratively, the reference scores corresponding to the N types of image features include a reference score for the exposure compensation feature of the test image, a reference score for the focusing mode, and a reference score for the white balance; a weighted summation of these reference scores gives the image quality score of the test image.
To make the result more direct and easier for a test engineer to read, the reference scores are combined by weighted summation into a single image quality score, from which the test engineer can directly judge the quality of the test image.
In this embodiment, the test image is input into the image quality model, which outputs a reference score for the exposure compensation feature, a reference score for the focusing mode, and a reference score for the white balance; the weighted sum of these reference scores is the image quality score of the test image. This improves both the efficiency and the accuracy of image quality evaluation, and lets a test engineer quickly judge the quality of the corresponding image from the score.
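The weighted summation of S503 can be sketched directly. The three feature names and the weight values below are illustrative assumptions; the patent does not specify particular weights:

```python
def quality_score(scores, weights):
    """Weighted sum of per-feature reference scores (weights assumed to sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in scores)

# Illustrative reference scores and weights for the three feature types.
scores  = {"exposure_compensation": 80.0, "focusing_mode": 90.0, "white_balance": 70.0}
weights = {"exposure_compensation": 0.4,  "focusing_mode": 0.3,  "white_balance": 0.3}
q = quality_score(scores, weights)
```

Adjusting the weights lets a test team emphasize whichever feature (exposure, focus, or white balance) matters most for a given camera tuning task.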
Specifically, referring to fig. 7, fig. 7 shows an operation interface and an execution display interface of the image quality model evaluation system. The operation interface includes a training image sample set saved to path 1, a verification image sample set saved to path 2, an image quality model saved to path 3, selection buttons, a training display area, an iteration number (Epoch) input box, a batch size (Batch-Size) input box, the convolutional neural network model to be trained, and a graphics processing unit (GPU).
When training the image quality model, the training image sample set, the verification image sample set, and the convolutional neural network model to be trained are selected through the selection buttons, the number of iterations and the batch size are entered into the iteration number input box and the batch size input box respectively, and the training task is then launched on the graphics processing unit by clicking the selection button. The training display area continuously shows the progress of the training, which is convenient for the test engineer to observe; after training is finished, the generated image quality model is saved to path 3.
Further, the execution display interface includes a folder display area, a folder-or-file display area, a picture display area, a path 4 corresponding to the generated image quality model, a path 5 corresponding to the test image, an identification button, and a batch identification button. The folder display area helps the test engineer locate the test image files. To perform image quality evaluation, the corresponding test image in path 5 is selected, and the folder-or-file display area shows the folders and images under path 5. The image quality model generated in the operation interface is loaded from path 4, and quality identification of a single image or a batch of images is performed by clicking the identification button or the batch identification button; the test images are then displayed in the picture display area, with the evaluated identification result shown below each corresponding test image. Pictures in the picture display area can be freely enlarged and reduced, which is convenient for the test engineer to inspect specific image details.
In yet another embodiment of the present disclosure, a system for generating an image quality model is provided. Referring to fig. 8, the system comprises a construction unit 801, a processing unit 802, an acquisition unit 803, and a calculation unit 804. The construction unit 801 is configured to construct a training image sample set having N types of image features, where N is a positive integer. The processing unit 802 is electrically connected to the construction unit 801 and is configured to input training image samples of the training image sample set to a convolutional neural network model to be trained, where the convolutional neural network model to be trained comprises at least two deep neural network basic models, and to perform feature extraction on the training image samples by using the convolutional neural network model to be trained to obtain training image feature information. The acquisition unit 803 is electrically connected to the processing unit 802 and is configured to acquire standard image feature information of a verification image sample. The calculation unit 804 is configured to calculate a loss value between the training image feature information and the standard image feature information by using a loss function, and the processing unit 802 is further configured to adjust parameters in the convolutional neural network model to be trained according to the loss value and to generate an image quality model including the parameters.
In this embodiment, a training image sample set with N types of image features is input by the processing unit 802 to the convolutional neural network model to be trained, which performs feature extraction on the training image samples. Since the convolutional neural network model to be trained includes at least two deep neural network basic models, the obtained training image feature information is relatively comprehensive. The calculation unit 804 then computes the loss value between the training image feature information and the standard image feature information through the loss function, and finally the processing unit 802 adjusts the parameters in the convolutional neural network model to be trained according to the loss value, thereby generating an image quality model including the parameters.
Optionally, the training image sample set constructed by the construction unit 801 includes exposure-compensated training image samples, focusing-mode training image samples, and white-balanced training image samples; training with a variety of picture features improves the reliability of training the convolutional neural network model to be trained. In addition, the processing unit 802 is further configured to generate an image quality model including the parameters when the number of iterations of the convolutional neural network model to be trained reaches a set value or the loss value of the loss function reaches a target value. Stopping at a preset number of iterations avoids endless cyclic training during a long iterative run; alternatively, the parameters in the convolutional neural network model to be trained are adjusted according to the loss value until the loss value calculated with the loss function reaches the preset target value, so that an image quality model including the parameters is generated quickly and reliably.
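The two stopping criteria described above, a preset iteration budget and a loss target, can be illustrated with a minimal gradient-descent loop. This toy scalar example only demonstrates the control flow; the actual model, loss function, and optimizer are not specified at this level of detail in the patent, so every name and value here is hypothetical:

```python
def train_until(param, lr, max_iters, target_loss):
    """Gradient descent on a toy squared-error loss, stopping when either
    the iteration budget is exhausted or the loss target is reached."""
    target = 3.0  # stand-in for the "standard image feature information"
    loss = (param - target) ** 2
    for _ in range(max_iters):
        loss = (param - target) ** 2
        if loss <= target_loss:          # loss target reached: stop early
            break
        grad = 2 * (param - target)      # gradient of the squared error
        param -= lr * grad               # adjust the parameter from the loss
    return param, loss

# Converges well before the iteration cap in this toy setting.
p, final_loss = train_until(param=0.0, lr=0.1, max_iters=1000,
                            target_loss=1e-6)
```

The same two exit conditions (epoch cap, loss threshold) are what real training frameworks expose through iteration limits and early-stopping callbacks.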
In a further embodiment of the present disclosure, an evaluation system of an image quality model is provided, shown in fig. 9, which is based on the generation system of the image quality model. The evaluation system includes: an input unit 901 configured to input a test image to the image quality model; an acquisition unit 902 electrically connected to the input unit 901 and configured to acquire an output result of the image quality model, where the output result includes reference scores respectively corresponding to the N types of image features of the test image; and a calculation unit 903 electrically connected to the acquisition unit 902 and configured to perform weighted summation on the reference scores respectively corresponding to the N types of image features to obtain an image quality score of the test image.
In this embodiment, the input unit 901 inputs a test image to the image quality model for testing, the acquisition unit 902 acquires the output result of the image quality model, which includes reference scores respectively corresponding to the N types of image features of the test image, and finally the calculation unit 903 performs weighted summation on these reference scores to obtain the image quality score of the test image. This assists the test engineer in evaluating image quality and improves both the efficiency of the evaluation and the accuracy of the result.
In another embodiment of the present disclosure, on the basis of the above embodiments, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the method for generating an image quality model described in the above embodiments.
In other embodiments of the present application, an electronic device is disclosed, which may include a memory, a processor, and a computer program stored in the memory and executable on the processor; when executed by the processor, the computer program performs the steps of the respective embodiments.
The above description covers only specific implementations of the embodiments of the present application, but the scope of the embodiments is not limited thereto; any changes or substitutions within the technical scope disclosed in the embodiments of the present application shall be covered by that scope. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of generating an image quality model, the method comprising:
constructing a training image sample set with N types of image characteristics, wherein N is a positive integer;
inputting training image samples in the training image sample set to a convolutional neural network model to be trained, wherein the convolutional neural network model to be trained comprises at least two deep neural network basic models;
performing feature extraction on the training image sample by using the convolutional neural network model to be trained to obtain training image feature information;
obtaining standard image characteristic information of a verification image sample;
calculating a loss value between the training image feature information and the standard image feature information by using a loss function;
adjusting parameters in the convolutional neural network model to be trained according to the loss value;
generating an image quality model including the parameters.
2. The method of claim 1, wherein the convolutional neural network model to be trained comprises at least two deep neural network basic models, comprising: the convolutional neural network model to be trained comprises GoogleNet, ResNet, MobileNet and ResMobile, wherein the ResMobile comprises a residual network module and a depth separable convolution module.
3. The method of claim 1 or 2, wherein the set of training image samples comprises exposure-compensated training image samples, in-focus training image samples, and white-balanced training image samples.
4. The method of claim 3, wherein the generating an image quality model including the parameters comprises:
when the number of iterations of the convolutional neural network model to be trained reaches a set value or the loss value of the loss function reaches a target value, generating an image quality model comprising the parameters.
5. An evaluation method of an image quality model, which is applied to the image quality model according to any one of claims 1 to 4, the method comprising:
inputting a test image to the image quality model;
obtaining an output result of the image quality model, wherein the output result comprises reference scores respectively corresponding to the N types of image features of the test image;
and carrying out weighted summation on the reference scores respectively corresponding to the N types of image characteristics to obtain the image quality score of the test image.
6. A system for generating an image quality model, comprising:
a construction unit, configured to construct a training image sample set with N types of image features, wherein N is a positive integer;
a processing unit, electrically connected with the construction unit, configured to input training image samples in the training image sample set to a convolutional neural network model to be trained, the convolutional neural network model to be trained comprising at least two deep neural network basic models, and to perform feature extraction on the training image samples by using the convolutional neural network model to be trained to obtain training image feature information;
an acquisition unit, electrically connected with the processing unit, configured to acquire standard image feature information of a verification image sample;
a calculation unit, electrically connected with the acquisition unit, configured to calculate a loss value between the training image feature information and the standard image feature information by using a loss function;
wherein the processing unit is further configured to adjust parameters in the convolutional neural network model to be trained according to the loss value and to generate an image quality model comprising the parameters.
7. The system of claim 6, wherein the convolutional neural network model to be trained comprises at least two deep neural network basic models, comprising: the convolutional neural network model to be trained comprises GoogleNet, ResNet, MobileNet and ResMobile, wherein ResMobile comprises a residual network module and a depth separable convolution module.
8. The system according to claim 6 or 7, wherein the set of training image samples constructed by the construction unit comprises exposure-compensated training image samples, in-focus training image samples and white-balanced training image samples.
9. The system of claim 8, wherein the processing unit is configured to generate an image quality model including the parameter when the number of iterations of the convolutional neural network model to be trained reaches a set value or a loss value of the loss function reaches a target value.
10. An evaluation system of an image quality model, which is applied to the generation system of the image quality model according to any one of claims 6 to 9, the system comprising:
an input unit for inputting a test image to the image quality model;
an acquisition unit, electrically connected with the input unit, configured to acquire an output result of the image quality model, the output result comprising reference scores respectively corresponding to the N types of image features of the test image; and
a calculation unit, electrically connected with the acquisition unit, configured to perform weighted summation on the reference scores respectively corresponding to the N types of image features to obtain the image quality score of the test image.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 4 or 5 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 4 or 5.
CN202111136728.3A 2021-09-27 2021-09-27 Method and system for generating and evaluating image quality model, and equipment and medium thereof Pending CN113850785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136728.3A CN113850785A (en) 2021-09-27 2021-09-27 Method and system for generating and evaluating image quality model, and equipment and medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136728.3A CN113850785A (en) 2021-09-27 2021-09-27 Method and system for generating and evaluating image quality model, and equipment and medium thereof

Publications (1)

Publication Number Publication Date
CN113850785A true CN113850785A (en) 2021-12-28

Family

ID=78980020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136728.3A Pending CN113850785A (en) 2021-09-27 2021-09-27 Method and system for generating and evaluating image quality model, and equipment and medium thereof

Country Status (1)

Country Link
CN (1) CN113850785A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783041A * 2022-06-23 2022-07-22 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium
CN115171198A * 2022-09-02 2022-10-11 腾讯科技(深圳)有限公司 Model quality evaluation method, device, equipment and storage medium
CN115902227A * 2022-12-22 2023-04-04 巴迪泰(广西)生物科技有限公司 Detection evaluation method and system of immunofluorescence kit
CN115902227B * 2022-12-22 2024-05-14 巴迪泰(广西)生物科技有限公司 Detection and evaluation method and system for immunofluorescence kit

Similar Documents

Publication Publication Date Title
CN113850785A (en) Method and system for generating and evaluating image quality model, and equipment and medium thereof
Li et al. An underwater image enhancement benchmark dataset and beyond
Ying et al. A bio-inspired multi-exposure fusion framework for low-light image enhancement
TWI709091B (en) Image processing method and device
Schwartz et al. Deepisp: Toward learning an end-to-end image processing pipeline
CN110046673B (en) No-reference tone mapping image quality evaluation method based on multi-feature fusion
Prashnani et al. Pieapp: Perceptual image-error assessment through pairwise preference
CN106778928B (en) Image processing method and device
CN107944379B (en) Eye white image super-resolution reconstruction and image enhancement method based on deep learning
CN108615071B (en) Model testing method and device
JP4906034B2 (en) Imaging apparatus, method, and program
US20180075315A1 (en) Information processing apparatus and information processing method
CN112164005B (en) Image color correction method, device, equipment and storage medium
CN113269149B (en) Method and device for detecting living body face image, computer equipment and storage medium
CN114257738B (en) Automatic exposure method, device, equipment and storage medium
Ou et al. A novel rank learning based no-reference image quality assessment method
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN108710893A (en) A kind of digital image cameras source model sorting technique of feature based fusion
CN107424117A (en) Image U.S. face method, apparatus, computer-readable recording medium and computer equipment
CN112651945A (en) Multi-feature-based multi-exposure image perception quality evaluation method
EP4380179A1 (en) Exposure compensation method and apparatus, and electronic device
CN109859216A (en) Distance measuring method, device, equipment and storage medium based on deep learning
CN114140463A (en) Welding defect identification method, device, equipment and storage medium
CN109478316A (en) The enhancing of real-time adaptive shadow and highlight
CN112084825B (en) Cooking evaluation method, cooking recommendation method, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination