CN112529871A - Method and device for evaluating image and computer storage medium - Google Patents

Method and device for evaluating image and computer storage medium Download PDF

Info

Publication number
CN112529871A
CN112529871A CN202011459266.4A CN202011459266A CN112529871A CN 112529871 A CN112529871 A CN 112529871A CN 202011459266 A CN202011459266 A CN 202011459266A CN 112529871 A CN112529871 A CN 112529871A
Authority
CN
China
Prior art keywords
event
target
image
evaluation index
evaluating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011459266.4A
Other languages
Chinese (zh)
Other versions
CN112529871B (en
Inventor
茅陈庆
杨海舟
梁晨华
刘名扬
楼炯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202011459266.4A priority Critical patent/CN112529871B/en
Publication of CN112529871A publication Critical patent/CN112529871A/en
Application granted granted Critical
Publication of CN112529871B publication Critical patent/CN112529871B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and a device for evaluating an image and a computer storage medium, and belongs to the technical field of image processing. In the embodiment of the application, when the target image is evaluated, the matched evaluation index and analysis algorithm can be self-adapted based on the event elements of the associated event, so that the target image is evaluated based on the matched evaluation index and analysis algorithm. Therefore, in the process of evaluating the event-related image, manual participation is not required, so that the human resources required for evaluating the event-related image can be greatly reduced. In addition, the evaluation process does not need manual participation, so that the efficiency of evaluating the event-related images can be improved. In addition, since the evaluation process is based on event matching evaluation indexes, the objectivity of the quality result of the target image can be improved, thereby improving the accuracy of evaluation of the event-related image.

Description

Method and device for evaluating image and computer storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method and a device for evaluating an image and a computer storage medium.
Background
In the process of depth judgment analysis of an event by a worker, the event is generally required to be analyzed based on some images related to the event, and the images can be videos or pictures. In order to improve the accuracy of event analysis, after images associated with an event are acquired, the images need to be evaluated first to filter out low-quality images.
In the related art, the images associated with the events are generally evaluated in a manual manner. The evaluation method is easy to consume human resources, the evaluation result is not objective enough, and the accurate evaluation result is difficult to obtain.
Disclosure of Invention
The embodiment of the application provides a method and a device for evaluating an image and a computer storage medium, which can improve the accuracy of evaluating an event-related image. The technical scheme is as follows:
in one aspect, a method of evaluating an image is provided, the method comprising:
acquiring a target image to be evaluated;
determining a target evaluation index according to the event elements of the target event associated with the target image;
and determining the quality result of the target image according to the analysis result of the target image by the analysis algorithm corresponding to the target evaluation index.
Optionally, in a case where the event element of the target event includes an event occurrence site, the target evaluation index includes an evaluation index for evaluating whether a shooting area in the image is the event occurrence site; and/or
In a case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period; and/or
In the case that the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object has been labeled in the image, and/or for evaluating the definition of the event attention object in the image, and the event attention object refers to any object that needs to be focused on when the event element of the target event is analyzed; and/or
In the case that the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether the event audio corresponds to the image and/or evaluating the definition of the event audio corresponding to the image.
Optionally, the method further comprises:
acquiring a target comprehensive quality evaluation model matched with the target image;
determining a quality result of the target image according to an analysis result of the target image by the analysis algorithm corresponding to the target evaluation index, including:
and determining the quality result of the target image based on the analysis result corresponding to each target evaluation index and the logic relation of each analysis result configured in the target comprehensive quality evaluation model.
Optionally, the method further comprises:
acquiring a target grading model matched with the target image;
and determining the quality gear of the target image based on the corresponding relation between the quality result and the quality gear configured in the target grading model.
In another aspect, there is provided an apparatus for evaluating an image, the apparatus including:
the acquisition module is used for acquiring a target image to be evaluated;
the determining module is used for determining a target evaluation index according to the event element of the target event related to the target image;
the determining module is further configured to determine a quality result of the target image according to an analysis result of the target image by an analysis algorithm corresponding to the target evaluation index.
Optionally, in a case where the event element of the target event includes an event occurrence site, the target evaluation index includes an evaluation index for evaluating whether a shooting area in the image is the event occurrence site; and/or
In a case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period; and/or
In the case that the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object has been labeled in the image, and/or for evaluating the definition of the event attention object in the image, and the event attention object refers to any object that needs to be focused on when the event element of the target event is analyzed; and/or
In the case that the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether the event audio corresponds to the image and/or evaluating the definition of the event audio corresponding to the image.
Optionally, the obtaining module is further configured to:
acquiring a target comprehensive quality evaluation model matched with the target image;
the determination module is to:
and determining the quality result of the target image based on the analysis result corresponding to each target evaluation index and the logic relation of each analysis result configured in the target comprehensive quality evaluation model.
Optionally, the obtaining module is further configured to obtain a target grading model matched with the target image;
the determination module is further to:
and determining the quality gear of the target image based on the corresponding relation between the quality result and the quality gear configured in the target grading model.
In another aspect, there is provided an apparatus for evaluating an image, the apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the above-described method of evaluating an image.
In another aspect, a computer-readable storage medium is provided, wherein the computer-readable storage medium has stored thereon instructions, which when executed by a processor, implement the steps of the above-described method of evaluating an image.
In another aspect, a computer program product comprising instructions which, when run on a computer, cause the computer to perform the above-described method of evaluating an image is provided.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in the embodiment of the application, when the target image is evaluated, the target evaluation index may be determined based on the event element of the target event associated with the target image, and further, the quality result of the target image may be determined according to the analysis result of the target image by the analysis algorithm corresponding to the target evaluation index. That is, in the embodiment of the present application, the evaluation index and the analysis algorithm can be adaptively matched based on the event element of the target event associated with the target image, so that the target image can be evaluated based on the matched evaluation index and analysis algorithm. According to the method provided by the embodiment of the application, manual participation is not needed in the process of evaluating the event related image, so that the human resources required for evaluating the event related image can be greatly reduced. Moreover, the evaluation process does not need manual participation, so that the efficiency of evaluating the event-related image can be improved. In addition, because the evaluation process does not need manual participation, the objectivity of the quality result of the target image can be improved, and the accuracy of evaluating the event-related image is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an image quality evaluation system provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method for evaluating an image according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a matching process provided by an embodiment of the present application;
FIG. 4 is a flow chart of another method for evaluating an image provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for evaluating an image according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a comment server provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
Currently, workers generally perform unified storage management on event-related images, and then perform quality evaluation on the event-related images. The quality evaluation of the event-related images aims to supervise and urge related personnel to improve the quality of the uploaded event-related images through the quality evaluation, and lay a foundation for subsequent deep study and judgment work such as serial-parallel analysis based on the event-related images.
The event-related image may be a video or a plurality of independent images. The event-associated image may include all images associated with the event. Such as may include images or video captured at the scene of the event. The events related to the embodiments of the present application may be any events that the staff needs to analyze, such as traffic events, factory emergencies, item loss events, and the like.
The method for evaluating the image is applied to the scene for evaluating the event-related image. The purpose is to provide a method capable of improving the evaluation efficiency and accuracy of an event-related image.
In order to implement the method provided by the embodiment of the present application, the embodiment of the present application provides an image quality evaluation system. The image quality evaluation system will be explained in detail below.
Fig. 1 is a schematic diagram of an image quality evaluation system 100 according to an embodiment of the present application. As shown in fig. 1, the image quality evaluation system 100 includes an input/output module 101, an evaluation index management module 102, an algorithm management module 103, and an integrated calculation module 104.
The input/output module 101 is used for interaction, and the staff can upload the event-related images through the input/output module. In the embodiment of the application, the event-related image may be an independent image uploaded by a user, or may be a video frame image in a video uploaded by a worker. For ease of understanding, fig. 1 illustrates an example in which an event-related image or an event-related video is input to the input/output module 101.
Optionally, based on the input-output module 101, the system may also output an evaluation analysis report of the event-related image for further review by the staff.
Each evaluation index is arranged in the evaluation index management module 102. These evaluation indicators may include evaluation indicators associated with event elements, image quality indicators, and the like. The event element may include an element associated with the event itself, and the event element may include an event occurrence scene, an event occurrence time period, an event attention object, an event audio, and the like. It should be noted that the foregoing event elements are merely used for illustration, the event elements related to the embodiments of the present application include, but are not limited to, the foregoing event elements, and any event-related element is within the scope of the event elements in the embodiments of the present application, and thus is not illustrated here.
Based on each configured evaluation index, when an event associated with a certain image is known, the evaluation index management module 102 may determine an evaluation index matching the image based on an event element of the event.
The algorithm management module 103 is configured to configure each analysis algorithm, and establish a correspondence between the analysis algorithm and the evaluation index, so that the analysis algorithm can be further matched after the evaluation index is matched for a certain image. Different evaluation indexes can correspond to the same analysis algorithm or different analysis algorithms. The analysis algorithm configured in the algorithm management module 103 may include a video analysis algorithm, an image analysis algorithm, a text analysis algorithm, an audio analysis algorithm, and the like. The video analysis algorithm is an algorithm for analyzing a video, and the image analysis algorithm is an algorithm for analyzing an independently acquired image. The character analysis algorithm is used for analyzing texts in images or videos, and the audio analysis algorithm is used for analyzing audio. It should be noted that the foregoing analysis algorithms are only used for illustration, and the analysis algorithms related to the embodiments of the present application include, but are not limited to, the foregoing analysis algorithms, and any algorithm that can be used in evaluating the event-related image is within the scope of the analysis algorithms in the embodiments of the present application, and is not illustrated here.
In addition, the analysis algorithm configured in the algorithm management module 103 may be various algorithms obtained after training based on methods such as deep learning. The algorithm may be obtained by other methods, and the determination method of the analysis algorithm in the embodiment of the present application is not limited.
For a certain image, after matching an analysis algorithm corresponding to a certain evaluation index, the algorithm management module 103 may be further configured to evaluate the evaluation index of the image based on the analysis algorithm to obtain an analysis result of the target image under the evaluation index.
Different comprehensive quality evaluation models and different grading models are configured in the comprehensive calculation module 104. The comprehensive quality evaluation model is used for determining the quality result of the target image based on the analysis result of the target image under each evaluation index. The grading model is used for determining a quality gear of the target image based on the quality result of the target image. The target image is any image to be evaluated.
In addition, the comprehensive quality evaluation model is used for setting how to determine the quality result based on the correlation logic between the evaluation indexes, and the embodiment of the application does not limit the specific implementation manner of setting how to determine the quality evaluation based on the correlation logic between the evaluation indexes. The grading model is used for setting quality gears corresponding to different scores based on score logic, and the embodiment of the application also does not limit the specific implementation manner for setting quality gears corresponding to different scores based on score logic. The functions of the composite score calculation model and the grading model will be described in detail in the following method embodiments, and will not be described herein.
It should be noted that the image quality evaluation system shown in fig. 1 may be deployed in a centralized manner in one terminal, or may be deployed in a centralized manner in one server, and optionally, each module in the image quality evaluation system may also be deployed in a distributed manner in different devices, which is not limited in this embodiment of the present application.
In addition, each module in the image quality evaluation system in fig. 1 is a software module, and the naming of each module is based on the function naming of the software module. When the embodiment of the application is applied, different naming can be performed based on requirements, for example, an input/output module can be named as a first module, and an evaluation index management module can be named as a second module. The embodiments of the present application do not limit the naming of the above modules.
It should be noted that, the evaluation indexes and the analysis algorithms in the image quality evaluation system shown in fig. 1 may be added, updated or deleted as required. Therefore, based on the image quality evaluation system shown in fig. 1, manageability, configurability and extensibility of evaluation indexes and analysis algorithms can be realized, quality score evaluation and quality grading of event-related images can be automatically realized, and a comprehensive analysis report with high objectivity is output. The method solves the problems that evaluation and audit of the event associated images in the current stage are time-consuming, labor-consuming and low in efficiency, and also solves the problems that objective basis for quality audit is lacked and the overall quality condition cannot be mastered in the current stage.
The method for evaluating an image provided in the embodiment of the present application is explained in detail below based on the image quality evaluation system shown in fig. 1. As is clear from the image quality evaluation system shown in fig. 1, the execution subject of this method is not limited. For convenience of subsequent description, the following embodiments are described by taking an example in which the image quality evaluation system is centrally deployed on the terminal. That is, the image quality evaluation system shown in fig. 1 is deployed on a terminal.
Fig. 2 is a flowchart of a method for evaluating an image according to an embodiment of the present application. As shown in fig. 2, the method includes the following steps.
Step 201: and the terminal acquires a target image to be evaluated. For example, the terminal acquires a target image to be evaluated in response to a selection operation for an image acquisition plug-in on a display interface.
In the embodiment of the application, an image analysis control is displayed on a display interface of the terminal, and the image analysis control is used for acquiring a target image to be evaluated and a target event related to the target image. Therefore, the user can trigger the terminal to start the evaluation process of the target object by selecting the image analysis control.
In a possible implementation manner, the staff may store each case-related image at the cloud server in advance, and each stored image corresponds to a stored related event. In this way, when the terminal detects a selection operation for the image analysis control, a plurality of images and events associated with each image can be downloaded from the cloud server, and then one image is selected from the plurality of images as a target image in response to a selection operation triggered by a preset operation by a user, and a target event associated with the target image is obtained.
In another possible implementation manner, when the terminal detects a selection operation for the image analysis control, an image input interface may be displayed, so that a user uploads an image to be evaluated through the image input interface and uploads an event associated with the image, so that the terminal acquires a target image and a target event associated with the target image.
The image which needs to be evaluated by the user can be uploaded to the terminal in the format of the file to be evaluated. The file to be evaluated may include a plurality of images and/or a plurality of videos. Since the application scenario of the embodiment of the present application is to evaluate the event-related images, the plurality of images and/or the plurality of videos have been associated with the event in advance. For example, the staff member may upload a file to be evaluated, which includes images and/or videos associated with all traffic accident events that have occurred in the last month.
In addition, when the terminal detects the selection operation of the image analysis control, the terminal can also create an evaluation analysis task based on the file to be evaluated uploaded by the staff. The evaluation analysis task indicates which images and/or videos associated with which events need to be evaluated. The terminal then triggers the evaluation analysis task, and for the image and/or video associated with any event in the evaluation analysis task, the terminal can evaluate the image and/or video associated with the event through steps 202 to 203. The following steps 202 to 203 will be further described, and will not be described herein again.
The two implementation manners are only two optional implementation manners for the terminal to acquire the target image and the target event associated with the target image, and the embodiment of the application does not limit how the terminal acquires the target image and the target event associated with the target image.
Step 202: and determining a target evaluation index according to the event elements of the target event related to the target image.
In one possible implementation manner, as shown in fig. 1, the evaluation index management module is configured with a plurality of evaluation indexes associated with each event element in advance, and in this scenario, the implementation manner of step 202 may be: and the terminal determines one or more evaluation indexes matched with the target image from the plurality of evaluation indexes according to the event elements of the target event related to the target image to be evaluated, and takes the matched indexes as the target evaluation indexes.
The event element of the target event may be an event element of the target event or an event element that needs to be focused when analyzing the target event. For example, for a target event, if an event occurrence scene and an event occurrence time period of the target event are predetermined, the event elements of the target event may include the event occurrence scene and the event occurrence time period. For another example, for a target event, if attention needs to be paid to a person appearing in the target event, the event element of the target event may include an event attention object. For another example, for a target event, if the sound occurring in the target event needs to be analyzed, the event element of the target event may include event audio.
The event elements that need to be focused on when analyzing the target event may be determined based on the event type of the target event. The event elements that need to be attended by the staff of different event types are usually different, so that the corresponding event elements that need to be attended can be configured in advance for different event types. In this way, event elements that need to be focused when analyzing the target event can be determined based on the type of the target event. For example, in the case that the event type is a contact resource transaction event type, the event elements needing attention generally include an event occurrence time, an event occurrence scene, an event video or image, an event audio and the like. For another example, for the event type being the item loss event type, the event elements needing attention usually include the event occurrence time, the event occurrence place, the event video or image, but do not need attention to the event audio.
After determining the event element of the target event, specifically, determining the target evaluation index matching the target image based on the event element of the target event may be exemplified as follows.
(1) In the case where the event element of the target event includes an event occurrence scene, the target evaluation index includes an evaluation index for evaluating whether or not the photographing region in the image is the event occurrence scene
For example, for a target event, it is already clear that the event occurrence site of the target event is indoors. In this case, the matched target evaluation index may be used to evaluate whether or not the shooting area in the target image is indoors. For another example, for a target event, it is already known that the event occurrence scene of the target event is on a road. In this case, the matched target evaluation index may be used to evaluate whether or not the shooting area in the target image is on the road.
(2) In the case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period.
For example, for a target event, it has been clarified that the event occurrence period of the target event is evening. In this case, the matched target evaluation index may be used to evaluate whether the acquisition time of the target image is in the evening.
(3) In the case that the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object is marked in the image, and/or for evaluating the definition of the event attention object in the image, and the event attention object refers to any object that needs to be paid attention to in analyzing the event.
For example, for a target event, it has been determined that an object of interest in the target event is a vehicle. In this case, the matched target evaluation index may be used to evaluate whether there is a vehicle in the target image, and/or to evaluate whether the staff member has marked a vehicle in the target image, and/or to evaluate the clarity of the vehicle in the target image.
(4) In the case where the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether or not the event audio corresponds to the image, and/or for evaluating the clarity of the event audio corresponding to the image.
For example, for the target event, the tone of the involved person needing attention in the target event is already defined. In this case, the matched target evaluation index may be used to evaluate the target image corresponding to the event audio and/or to evaluate the clarity of the event audio corresponding to the target image.
It should be noted that the above examples are merely used to illustrate how to match the target evaluation index based on the event element of the target event, and the embodiments of the present application are not limited thereto. When the embodiment of the application is applied, how to match the target evaluation index based on the event elements of the target event can be customized based on requirements, and therefore, the description is not given here.
Step 203: and determining the quality result of the target image according to the analysis result of the target image by the analysis algorithm corresponding to the target evaluation index.
In a possible implementation manner, the target evaluation index in step 202 includes one or more, in this case, the implementation process of step 202 is: respectively determining scores corresponding to the one or more target evaluation indexes according to the analysis algorithms corresponding to the one or more target evaluation indexes, wherein the scores are analysis results of the analysis algorithms on the target image, and determining the total score of the target image according to the scores corresponding to the one or more target evaluation indexes, wherein the quality results comprise the total score.
The determining the scores corresponding to the one or more target evaluation indexes according to the analysis algorithms corresponding to the one or more target evaluation indexes respectively specifically includes: and each target evaluation index has a corresponding analysis algorithm, and for any target evaluation index, the target image is analyzed based on the analysis algorithm corresponding to the target evaluation index, so that the grade of the target image under the target evaluation index can be obtained.
For example, 3 target evaluation indexes are matched in step 202, and are respectively labeled as target evaluation index 1, target evaluation index 2, and target evaluation index 3. In this way, after analysis is performed based on the respective corresponding analysis algorithms, a score corresponding to the target evaluation index 1, a score corresponding to the target evaluation index 2, and a score corresponding to the target evaluation index 3 are obtained.
In addition, the analysis algorithms corresponding to the target evaluation indexes may be configured in advance, that is, a corresponding relationship between the evaluation indexes and the analysis algorithms is established in the image quality evaluation system in advance, so that when the target evaluation indexes are matched in step 202, the analysis algorithms corresponding to the target evaluation indexes can be obtained based on the corresponding relationship. And will not be described in detail herein.
Note that, the importance of the same evaluation index may be different in different events. Therefore, in the embodiment of the present application, the image quality evaluation system shown in fig. 1 may further be configured with a plurality of comprehensive quality evaluation models, and a logical relationship between the analysis results is indicated in any one of the comprehensive quality evaluation models, so as to obtain a final quality result based on the analysis results and the logical relationship.
In a possible implementation manner, the comprehensive quality evaluation model may be a comprehensive score calculation model, and weights for different evaluation indexes are configured in any one of the multiple comprehensive score calculation models. At this time, after matching the target index in step 202, the terminal may further acquire a target comprehensive score calculation model matching the target image based on the plurality of comprehensive score calculation models. In this scenario, the above-mentioned implementation manner of determining the total score of the target image according to the scores corresponding to the one or more target evaluation indexes may be: the total score of the target image can be determined based on the weight of each target evaluation index configured in the target comprehensive score calculation model and the score corresponding to each of one or more target evaluation indexes.
Wherein, based on the weight of each target evaluation index and the score corresponding to each of the one or more target evaluation indexes, determining the total score of the target image may refer to: and carrying out weighted summation or weighted product summation on the scores corresponding to the target evaluation indexes to obtain a total score.
The weights configured in any one of the comprehensive scoring computation models for different evaluation indexes may be configured by a worker based on a requirement, which is not limited in the embodiment of the present application.
In addition, after the target index is matched in step 201, a specific implementation manner of the terminal obtaining the target comprehensive score calculation model matched with the target image based on the multiple comprehensive score calculation models may be as follows: and acquiring a comprehensive score calculation model with the configured evaluation indexes consistent with one or more matched target evaluation indexes from the plurality of comprehensive score calculation models to obtain the target comprehensive score calculation model.
Therefore, in the embodiment of the present application, after the target evaluation indexes are matched based on step 201, the weights between the target evaluation indexes may also be matched based on the weights. That is, in the embodiment of the present application, the weight of any evaluation index is not fixed, but is adaptively matched to different weights based on different events associated with the image. Therefore, the accuracy of subsequent evaluation on the image is improved.
That is, in the embodiment of the present application, after a target image is input into an image quality evaluation system, the system may adaptively match not only a target evaluation index and a corresponding analysis algorithm but also a comprehensive quality evaluation model based on the above implementation manner. I.e., to determine which integrated quality assessment model to use to determine the aforementioned quality result based on the event elements of the target event.
Alternatively, instead of configuring multiple comprehensive quality assessment models in advance, after obtaining the analysis results under each target index, the analysis results corresponding to each target evaluation index are directly processed according to the same rule, and the quality result can be obtained. For example, instead of configuring a plurality of comprehensive score calculation models in advance, after the scores of the target indexes are obtained, the scores corresponding to the target evaluation indexes are directly summed or integrated, so that the total score can be obtained. And will not be described in detail herein.
In addition, different quality notch divisions may be employed in different events. Therefore, in the system shown in fig. 1, a plurality of grading models may be configured, and a corresponding relationship between the quality result and the quality gear may be configured in any one of the plurality of grading models. At this time, after the target comprehensive quality evaluation model is matched, a target grading model matched with the target image may be obtained. In this way, after obtaining the quality result based on the target quality evaluation model, the quality gear of the target image may also be determined based on the correspondence between the quality result and the quality gear configured in the target grading model. Evaluating the target image at this time also includes evaluating a quality gear of the target image.
That is, in the embodiment of the present application, after the quality result is determined, the quality gear of the target image may also be determined according to the quality result of the target image. The quality gear of the target image is used for indicating the grade of the quality of the target image, so that a worker can know the quality of the target image approximately.
For example, the quality results of the target images are expressed in the form of the total score, and in this case, n quality steps of Q1, Q2, …, and Qn are set in advance based on different score sections in the target classification model. The quality represented by Q1, Q2, …, Qn is gradually improved. In this way, the worker can know the approximate grade of the quality of the target image based on the quality grade of the target image.
It should be noted that, the embodiment of the present application does not limit how to set the scoring interval corresponding to each quality gear of the image, and when the embodiment of the present application is applied, the setting may be based on different requirements.
Therefore, after the target evaluation indexes are matched based on step 201, the weights and the quality gear setting modes between the target evaluation indexes can be matched based on the target events. That is, in the embodiment of the present application, the image quality gear division manner for all types of events is not fixed, but differs depending on the type of event. Therefore, the accuracy of subsequent evaluation on the image is improved.
That is, in the embodiment of the present application, after a target image is input into an image quality evaluation system, the system may be matched not only to a target evaluation index and a corresponding analysis algorithm, but also to a corresponding comprehensive quality evaluation model and a corresponding grading model based on the above implementation manner. That is, it is determined which quality gear division is used to determine the quality gear of the target event according to the event elements of the target event. The above process can be represented by the matching process shown in fig. 3, and will not be described in detail here.
Alternatively, a plurality of grading models may not be configured in advance, and after scores under each target index are obtained, a unified grading model may be directly used to determine the quality grade of the target image. And will not be described in detail herein.
Based on the system shown in fig. 1, the above implementation can be implemented based on a comprehensive computation module. The comprehensive calculation module may implement step 202 using any of the implementations described above.
Step 204: and displaying the quality result of the target image.
After the quality result of the target image is obtained through steps 201 to 203, the quality result may be displayed so that the user can intuitively feel the analysis result of the target image.
Further, the target image in the above steps 201 to 203 may be one image. Optionally, when a video associated with a certain event needs to be evaluated, the target image is the video, and at this time, the evaluation process may also be completed through the steps 201 to 203.
Specifically, a video to be evaluated is obtained, wherein the video comprises a plurality of frames of images; and taking each frame of image in the multi-frame images as a target image, executing the steps 201 to 202, and determining an evaluation report of the video according to the quality result of each frame of image in the multi-frame images.
For example, when the analysis result of the image is represented by a score and the quality result of the image is represented by a total score, the evaluation report may include the total number of images in the video, the total score of each image in the video, and the distribution of the images in different score intervals in the video. The image distribution conditions on different scoring intervals in the video can include the number of images on different scoring intervals, the proportion of images on different scoring intervals, and the like.
The following further explains the embodiment of the present application with the flowchart shown in fig. 4, as shown in fig. 4, a worker uploads a video file to be evaluated in the image quality evaluation system shown in fig. 1, the image quality evaluation system creates a new video analysis evaluation task, automatically matches evaluation indexes (the evaluation indexes may include case-related element indexes (I1), case-related video image quality indexes (I2), case-related target feature indexes (I3), case-related video audio indexes (I4), etc.) according to the video analysis evaluation task, matches corresponding analysis algorithms (video analysis algorithm (a1), character analysis algorithm (a2), audio analysis algorithm (A3), etc.), and matches a comprehensive score calculation model and a classification model (Cn) of the video file. After the image quality evaluation system finishes the matching, each image in the video is analyzed, various analysis algorithms are called to start analyzing and calculating scores, various scores (S1-Sn) of case-related element indexes (I1), case-related video image quality indexes (I2), case-related target characteristic indexes (I3), case-related video audio indexes (I4) and the like are obtained, and then a previously loaded comprehensive score calculation model and a grading model are called to calculate and judge the total score (S total) and the quality grade Qn of the video image quality respectively. And finally, outputting an evaluation report of the case-involved video of the event to a worker. The evaluation report content comprises the total number of video images of the input model, the total score of each video image, the number and the proportion of the video images under each score, and the like. The staff can visually see the general condition of the quality of the video images involved in the case by reading the evaluation report so as to provide a basis for the evaluation of the staff.
In summary, when the target image is evaluated, the evaluation index and the analysis algorithm may be adaptively matched based on the event elements of the target event associated with the target image, so that the target image is evaluated based on the matched evaluation index and analysis algorithm. According to the method provided by the embodiment of the application, manual participation is not needed in the process of evaluating the event related image, so that the human resources required for evaluating the event related image can be greatly reduced. Moreover, the evaluation process does not need manual participation, so that the efficiency of evaluating the event-related image can be improved. In addition, because the evaluation process does not need manual participation, the objectivity of the quality result of the target image can be improved, and the accuracy of evaluating the event-related image is improved.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present application, and the present application embodiment is not described in detail again.
Fig. 5 is a schematic structural diagram of an apparatus for evaluating an image according to an embodiment of the present application, and as shown in fig. 5, the apparatus 500 for evaluating an image may include the following modules.
An obtaining module 501, configured to obtain a target image to be evaluated; for example, in response to a selection operation for an image analysis control on a display interface, a target image to be evaluated and a target event associated with the target image are acquired;
a determining module 502, configured to determine a target evaluation index according to an event element of a target event associated with a target image;
the determining module 502 is further configured to determine a quality result of the target image according to an analysis result of the target image by an analysis algorithm corresponding to the target evaluation index.
Alternatively, in a case where the event element of the target event includes an event occurrence scene, the target evaluation index includes an evaluation index for evaluating whether or not the shooting area in the image is the event occurrence scene; and/or
In the case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period; and/or
In the case that the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object is marked in the image, and/or for evaluating the definition of the event attention object in the image, wherein the event attention object refers to any object needing attention for analyzing the event; and/or
In the case where the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether or not the event audio corresponds to the image, and/or for evaluating the clarity of the event audio corresponding to the image.
Optionally, the obtaining module is further configured to:
acquiring a target comprehensive quality evaluation model matched with a target image;
the determination module is to:
and determining the quality result of the target image based on the analysis result corresponding to each target evaluation index and the logical relationship of each analysis result configured in the target comprehensive quality evaluation model.
Optionally, the obtaining module is further configured to obtain a target grading model matched with the target image;
the determination module is further configured to:
and determining the quality gear of the target image based on the corresponding relation between the quality result and the quality gear configured in the target grading model.
In summary, in the embodiment of the present application, when the target image is evaluated, the evaluation index and the analysis algorithm may be adaptively matched based on the event element of the target event associated with the target image, so that the target image is evaluated based on the matched evaluation index and analysis algorithm. According to the method provided by the embodiment of the application, manual participation is not needed in the process of evaluating the event related image, so that the human resources required for evaluating the event related image can be greatly reduced. Moreover, the evaluation process does not need manual participation, so that the efficiency of evaluating the event-related image can be improved. In addition, because the evaluation process does not need manual participation, the objectivity of the quality result of the target image can be improved, and the accuracy of evaluating the event-related image is improved.
It should be noted that: the apparatus for evaluating an image according to the above embodiment is only illustrated by dividing the functional modules when evaluating an image, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image evaluation device and the image evaluation method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments, and are not described herein again.
Fig. 6 is a block diagram of a terminal 600 according to an embodiment of the present disclosure. Any one of the modules of the image quality evaluation system can be implemented by the terminal shown in fig. 6. The terminal 600 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion video Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion video Experts compression standard Audio Layer 4), a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the method of evaluating an image provided by the method embodiments herein.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a display 605, a camera assembly 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, it also has the ability to capture touch signals on or over its surface. Such a touch signal may be input to the processor 601 as a control signal for processing. In this case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display disposed on a curved or folded surface of the terminal 600. The display screen 605 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 601 for processing, or to the radio frequency circuit 604 to realize voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect motion data of a game or of the user.
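Purely as an illustration of the landscape/portrait decision described above, the sketch below picks a view from two gravity components; the axis convention and units are assumptions, not part of this application.

    # Choose landscape or portrait from gravity components on two axes
    # (m/s^2, axis convention assumed); the dominant axis indicates orientation.
    def choose_orientation(gx, gy):
        return "landscape" if abs(gx) > abs(gy) else "portrait"

    print(choose_orientation(9.6, 0.8))  # landscape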
The gyro sensor 612 may detect the body orientation and rotation angle of the terminal 600, and may cooperate with the acceleration sensor 611 to capture the user's 3D motion on the terminal 600. Based on the data collected by the gyro sensor 612, the processor 601 may implement functions such as motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side bezel of the terminal 600 and/or on an underlying layer of the display screen 605. When the pressure sensor 613 is disposed on the side bezel of the terminal 600, the user's grip signal on the terminal 600 can be detected, and the processor 601 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed on the underlying layer of the display screen 605, the processor 601 controls operability controls on the UI according to the user's pressure operations on the display screen 605. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 itself identifies the user according to the collected fingerprint. Upon identifying the user as trusted, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or a vendor logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or the vendor logo.
The optical sensor 615 is used to collect ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the display screen 605 based on the ambient light intensity collected by the optical sensor 615: when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
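A minimal sketch of the brightness control described above follows; the lux range and the linear mapping are assumptions for illustration only.

    # Map ambient light intensity (lux) to a display brightness in [0, 1].
    def display_brightness(ambient_lux, lo=10, hi=1000):
        clamped = min(max(ambient_lux, lo), hi)
        return (clamped - lo) / (hi - lo)

    print(round(display_brightness(300), 2))  # 0.29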
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to measure the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 is gradually decreasing, the processor 601 controls the display screen 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 is gradually increasing, the processor 601 controls the display screen 605 to switch from the screen-off state to the screen-on state.
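The sketch below illustrates this switching with a small hysteresis band; the distance thresholds are assumptions, not values from this application.

    # Switch the screen state as the user approaches or leaves the front
    # face; hysteresis avoids flickering near a single threshold.
    def screen_state(distance_cm, current="on"):
        if current == "on" and distance_cm < 3.0:
            return "off"  # user close to the front face: screen off
        if current == "off" and distance_cm > 5.0:
            return "on"   # user moved away: screen back on
        return current

    print(screen_state(2.0))          # off
    print(screen_state(6.0, "off"))   # on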
Those skilled in the art will appreciate that the configuration shown in Fig. 6 does not limit the terminal 600, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Embodiments of the present application also provide a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the method for evaluating an image provided in the above embodiments.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a terminal, cause the terminal to perform the method for evaluating an image provided by the above embodiments.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application. Any one of the modules of the image quality evaluation system may be implemented by the server shown in Fig. 7. The server may be a server in a background server cluster. Specifically:
the server 700 includes a Central Processing Unit (CPU)701, a system memory 704 including a Random Access Memory (RAM)702 and a Read Only Memory (ROM)703, and a system bus 705 connecting the system memory 704 and the central processing unit 701. The server 700 also includes a basic input/output system (I/O system) 706, which facilitates transfer of information between devices within the computer, and a mass storage device 707 for storing an operating system 713, application programs 714, and other program modules 715.
The basic input/output system 706 includes a display 708 for displaying information and an input device 709, such as a mouse or keyboard, for a user to input information. The display 708 and the input device 709 are both connected to the central processing unit 701 through an input/output controller 710 connected to the system bus 705. The basic input/output system 706 may also include the input/output controller 710 for receiving and processing input from a number of other devices, such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 710 may also provide output to a display screen, a printer, or another type of output device.
The mass storage device 707 is connected to the central processing unit 701 through a mass storage controller (not shown) connected to the system bus 705. The mass storage device 707 and its associated computer-readable media provide non-volatile storage for the server 700. That is, the mass storage device 707 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
Without loss of generality, computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 704 and the mass storage device 707 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 700 may also be operated as a remote computer connected through a network, such as the Internet. That is, the server 700 may be connected to the network 712 through a network interface unit 711 connected to the system bus 705, or the network interface unit 711 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, which are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing the method of evaluating an image provided by the embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer-readable storage medium, where instructions of the storage medium, when executed by a processor of a server, enable the server to perform the method for evaluating an image provided in the foregoing embodiments.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a server, cause the server to execute the method for evaluating an image provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of evaluating an image, the method comprising:
acquiring a target image to be evaluated;
determining a target evaluation index according to the event elements of the target event associated with the target image;
and determining the quality result of the target image according to the analysis result of the target image by the analysis algorithm corresponding to the target evaluation index.
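For illustration only, a minimal runnable sketch of this three-step flow is given below; the event-element names, evaluation indices, and analysis algorithms are hypothetical stand-ins, not the claimed implementation.

    def scene_matches(image):
        # Stand-in analysis algorithm for a scene-related evaluation index.
        return image.get("scene") == "intersection"

    def within_time_period(image):
        # Stand-in analysis algorithm for a time-related evaluation index.
        return 8 <= image.get("capture_hour", -1) <= 18

    # Hypothetical mapping: event element -> evaluation index -> analysis algorithm.
    ELEMENT_TO_INDEX = {
        "event_occurrence_scene": "scene_index",
        "event_occurrence_time_period": "time_index",
    }
    ANALYZERS = {
        "scene_index": scene_matches,
        "time_index": within_time_period,
    }

    def evaluate_image(target_image, event_elements):
        # Determine the target evaluation indices from the event elements
        # of the target event associated with the image.
        indices = [ELEMENT_TO_INDEX[e] for e in event_elements if e in ELEMENT_TO_INDEX]
        # Run the analysis algorithm corresponding to each index; the
        # per-index results make up the quality result of the target image.
        return {idx: ANALYZERS[idx](target_image) for idx in indices}

    # Example: an acquired target image, stubbed as a plain dictionary.
    image = {"scene": "intersection", "capture_hour": 14}
    print(evaluate_image(image, ["event_occurrence_scene", "event_occurrence_time_period"]))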
2. The method of claim 1,
in the case where the event element of the target event includes an event occurrence scene, the target evaluation index includes an evaluation index for evaluating whether or not a shooting area in the image is the event occurrence scene; and/or
in the case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period; and/or
in the case where the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object has been labeled in the image, and/or for evaluating the definition of the event attention object in the image, and the event attention object refers to any object that needs to be focused on when the event element of the target event is analyzed; and/or
in the case where the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether the event audio corresponds to the image and/or for evaluating the definition of the event audio corresponding to the image.
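Purely as an illustration of the element-to-index correspondence enumerated above, the catalog below uses hypothetical index names; it is a sketch, not the claimed configuration.

    # Hypothetical catalog: each event element contributes one or more
    # target evaluation indices (names are illustrative only).
    INDEX_CATALOG = {
        "event_occurrence_scene": ["shooting_area_is_event_scene"],
        "event_occurrence_time_period": ["capture_time_vs_event_period"],
        "event_attention_object": [
            "attention_object_present",
            "attention_object_labeled",
            "attention_object_definition",
        ],
        "event_audio": ["audio_corresponds_to_image", "audio_definition"],
    }

    def indices_for(event_elements):
        # Collect the target evaluation indices for the elements present.
        return [idx for e in event_elements for idx in INDEX_CATALOG.get(e, [])]

    print(indices_for(["event_occurrence_scene", "event_audio"]))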
3. The method of claim 1, wherein the method further comprises:
acquiring a target comprehensive quality evaluation model matched with the target image;
determining a quality result of the target image according to an analysis result of the target image by the analysis algorithm corresponding to the target evaluation index, including:
and determining the quality result of the target image based on the analysis result corresponding to each target evaluation index and the logic relation of each analysis result configured in the target comprehensive quality evaluation model.
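As a sketch of how per-index analysis results might be combined under a configured logical relationship, the nested-tuple expression format below is an assumption made for illustration, not the model format of this application.

    # Evaluate a configured logical relation over per-index results.
    # A relation is ("and" | "or", operand, ...), where an operand is an
    # index name or a nested relation tuple.
    def combine(results, relation):
        op, operands = relation[0], relation[1:]
        values = [combine(results, r) if isinstance(r, tuple) else results[r]
                  for r in operands]
        return all(values) if op == "and" else any(values)

    results = {"scene_ok": True, "time_ok": False, "object_sharp": True}
    relation = ("or", ("and", "scene_ok", "object_sharp"), "time_ok")
    print(combine(results, relation))  # True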
4. The method of claim 1,
the method further comprises the following steps:
acquiring a target grading model matched with the target image;
and determining the quality gear of the target image based on the corresponding relation between the quality result and the quality gear configured in the target grading model.
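A minimal sketch of such a grading model follows; the thresholds and gear labels are assumptions for illustration, not values configured by this application.

    # Hypothetical grading model: ordered (threshold, gear) pairs mapping a
    # numeric quality result to a quality gear.
    GRADING_MODEL = [(0.9, "A"), (0.7, "B"), (0.5, "C")]

    def quality_gear(quality_result):
        for threshold, gear in GRADING_MODEL:
            if quality_result >= threshold:
                return gear
        return "D"  # fallback gear for low-quality images

    print(quality_gear(0.82))  # "B"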
5. An apparatus for evaluating an image, the apparatus comprising:
the acquisition module is used for acquiring a target image to be evaluated;
the determining module is used for determining a target evaluation index according to the event element of the target event related to the target image;
the determining module is further configured to determine a quality result of the target image according to an analysis result of the target image by an analysis algorithm corresponding to the target evaluation index.
6. The apparatus of claim 5,
in the case where the event element of the target event includes an event occurrence scene, the target evaluation index includes an evaluation index for evaluating whether or not a shooting area in the image is the event occurrence scene; and/or
in the case where the event element of the target event includes an event occurrence time period, the target evaluation index includes an evaluation index for evaluating a relationship between the acquisition time of the image and the event occurrence time period; and/or
in the case where the event element of the target event includes an event attention object, the target evaluation index includes an evaluation index for evaluating whether the event attention object exists in the image, and/or for evaluating whether the event attention object has been labeled in the image, and/or for evaluating the definition of the event attention object in the image, and the event attention object refers to any object that needs to be focused on when the event element of the target event is analyzed; and/or
in the case where the event element of the target event includes an event audio, the target evaluation index includes an evaluation index for evaluating whether the event audio corresponds to the image and/or for evaluating the definition of the event audio corresponding to the image.
7. The apparatus of claim 5, wherein the acquisition module is further configured to:
acquiring a target comprehensive quality evaluation model matched with the target image;
the determining module is configured to:
and determining the quality result of the target image based on the analysis result corresponding to each target evaluation index and the logic relation of each analysis result configured in the target comprehensive quality evaluation model.
8. The apparatus of claim 5,
the acquisition module is also used for acquiring a target grading model matched with the target image;
the determining module is further configured to:
and determining the quality gear of the target image based on the corresponding relation between the quality result and the quality gear configured in the target grading model.
9. An apparatus for evaluating an image, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 4.
10. A computer-readable storage medium having stored thereon instructions which, when executed by a processor, carry out the steps of the method of any of claims 1 to 4.
CN202011459266.4A 2020-12-11 2020-12-11 Method and device for evaluating image and computer storage medium Active CN112529871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011459266.4A CN112529871B (en) 2020-12-11 2020-12-11 Method and device for evaluating image and computer storage medium

Publications (2)

Publication Number Publication Date
CN112529871A true CN112529871A (en) 2021-03-19
CN112529871B CN112529871B (en) 2024-02-23

Family

ID=74999182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011459266.4A Active CN112529871B (en) 2020-12-11 2020-12-11 Method and device for evaluating image and computer storage medium

Country Status (1)

Country Link
CN (1) CN112529871B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098088A (en) * 2011-07-13 2013-05-08 松下电器产业株式会社 Image evaluation device, image evaluation method, program, and integrated circuit
US20200104320A1 (en) * 2017-12-29 2020-04-02 Guangzhou Kugou Computer Technology Co., Ltd. Method, apparatus and computer device for searching audio, and storage medium
CN110246110A (en) * 2018-03-01 2019-09-17 腾讯科技(深圳)有限公司 Image evaluation method, device and storage medium
CN111475613A (en) * 2020-03-06 2020-07-31 深圳壹账通智能科技有限公司 Case classification method and device, computer equipment and storage medium
CN111612657A (en) * 2020-05-22 2020-09-01 创新奇智(重庆)科技有限公司 Client type identification method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923444A (en) * 2021-10-08 2022-01-11 广州辰达精密光电科技有限公司 Zoom lens quality evaluation method and device
CN113923444B (en) * 2021-10-08 2024-04-30 广州辰达精密光电科技有限公司 Zoom lens quality evaluation method and device
CN117370602A (en) * 2023-04-24 2024-01-09 深圳云视智景科技有限公司 Video processing method, device, equipment and computer storage medium

Also Published As

Publication number Publication date
CN112529871B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN110262947B (en) Threshold warning method and device, computer equipment and storage medium
CN113204298B (en) Method and device for displaying release progress, electronic equipment and storage medium
CN111338910B (en) Log data processing method, log data display method, log data processing device, log data display device, log data processing equipment and log data storage medium
CN111290948B (en) Test data acquisition method and device, computer equipment and readable storage medium
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN110147503B (en) Information issuing method and device, computer equipment and storage medium
CN111078521A (en) Abnormal event analysis method, device, equipment, system and storage medium
CN111327953A (en) Live broadcast voting method and device and storage medium
CN112529871B (en) Method and device for evaluating image and computer storage medium
CN110765182B (en) Data statistical method and device, electronic equipment and storage medium
CN113506086A (en) Task issuing method and device, computer equipment and medium
CN110990728B (en) Method, device, equipment and storage medium for managing interest point information
CN114827651B (en) Information processing method, information processing device, electronic equipment and storage medium
CN111294253B (en) Test data processing method and device, computer equipment and storage medium
CN114238859A (en) Data processing system, method, electronic device, and storage medium
CN114153963A (en) Document recommendation method and device, computer equipment and medium
CN113407774A (en) Cover determining method and device, computer equipment and storage medium
CN112132472A (en) Resource management method and device, electronic equipment and computer readable storage medium
CN112101297A (en) Training data set determination method, behavior analysis method, device, system and medium
CN111539794A (en) Voucher information acquisition method and device, electronic equipment and storage medium
CN112237743A (en) User data statistical method, device, computer equipment and storage medium
CN111212299B (en) Method and device for acquiring live video tutorial, server and storage medium
CN112804481B (en) Method and device for determining position of monitoring point and computer storage medium
CN114071119B (en) Resource testing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant