CN112767318B - Image processing effect evaluation method, device, storage medium and equipment - Google Patents


Info

Publication number
CN112767318B
CN112767318B (application CN202011642571.7A)
Authority
CN
China
Prior art keywords
target image
character
evaluation index
text
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011642571.7A
Other languages
Chinese (zh)
Other versions
CN112767318A (en)
Inventor
苏雷
胡金水
谢名亮
韩球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202011642571.7A priority Critical patent/CN112767318B/en
Publication of CN112767318A publication Critical patent/CN112767318A/en
Application granted granted Critical
Publication of CN112767318B publication Critical patent/CN112767318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, apparatus, storage medium, and device for evaluating an image processing effect. The method comprises: first, obtaining a target image processing effect to be evaluated, where the target image processing effect comprises a text smear image of a text line in a target image, a text region mask image of the text line, and a color prediction result of the text line; then, generating evaluation indexes of the target image processing effect, the evaluation indexes comprising at least one of a first, a second, a third, a fourth, and a fifth evaluation index; and finally, determining the processing quality of the target image processing effect according to the evaluation indexes.

Description

Image processing effect evaluation method, device, storage medium and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a storage medium, and a device for evaluating an image processing effect.
Background
With the popularization of intelligent terminal devices such as smartphones and tablet computers, image text detection and recognition technology is being applied ever more widely in people's daily life and work. For example, in dictionary, input-method, reader, and even chat applications (apps), people can conveniently use this technology to extract key text information from images, for recording, translation, or communication scenarios.
To meet people's growing demand for image text recognition, not only must recognition accuracy be improved, but the operation, interaction, and result display of the text recognition function must also be continuously polished to improve user goodwill and stickiness. Among the result display modes of the text recognition function, a common one is to separate a foreground (text) image and a background image from the text lines on an image, cover the positions of the original text lines with the background image to achieve a text erasure effect, and then restore the recognized text, as the original text or its translation, by pasting it back to the original text positions in the image. In this display mode, the processing effects of text position detection, text content recognition, and text erasure and restoration are all critical: the quality of the text erasure and restoration effect influences the user's experience no less than the accuracy of text detection and recognition. Therefore, whether processing effects such as text erasure and restoration can be evaluated accurately is very important for improving user experience.
At present, processing effects such as text erasure and restoration are generally evaluated manually. However, manual evaluation has no uniform, strict evaluation criteria, and its results are strongly affected by the subjective impressions of the evaluators; it is therefore not persuasive enough and difficult to quantify, its efficiency is low, and it consumes a large amount of human resources.
Disclosure of Invention
The main object of the embodiments of the present application is to provide a method, an apparatus, a storage medium, and a device for evaluating an image processing effect, which can evaluate the image processing effect more accurately.
The embodiment of the application provides an evaluation method of image processing effects, which comprises the following steps:
obtaining a target image processing effect to be evaluated, wherein the target image processing effect comprises a character smearing image of a character line in a target image, a character region mask image of the character line in the target image and a character color prediction result of the character line in the target image;
generating an evaluation index of the target image processing effect, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index and a fifth evaluation index;
wherein the first evaluation index is generated according to the difference between the text smear image of the text line in the target image and the text content of the text line; the second evaluation index is generated according to the text smear image of the text line in the target image; the third evaluation index is generated according to the text region mask image of the text line in the target image; the fourth evaluation index is generated according to the text smear image of the text line in the target image; and the fifth evaluation index is generated according to a color distribution histogram of the text smear image of the text line in the target image;
and determining the processing quality of the target image processing effect according to the evaluation index.
In a possible implementation manner, the generating the first evaluation index of the target image processing effect includes:
filling a text region mask image of the text line in the target image by using a color prediction result of the text line in the target image to obtain a filled text line image;
calculating a text editing distance between a text recognition result of the filled text line image and text contents of the text line;
calculating the identification score of the text line according to the text editing distance;
and calculating the average value of the identification scores of all text lines in the target image, and taking the average value as a first evaluation index of the target image processing effect.
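The four steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the normalized scoring formula `1 - distance / max(len)` and the function names are assumptions, and a real pipeline would obtain the recognized strings from the recognition engine rather than receive them as arguments.

```python
def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def recognition_score(recognized: str, ground_truth: str) -> float:
    # Hypothetical per-line score: 1 minus the normalized edit distance.
    if not recognized and not ground_truth:
        return 1.0
    dist = edit_distance(recognized, ground_truth)
    return max(0.0, 1.0 - dist / max(len(recognized), len(ground_truth)))

def first_index(recognized_lines, ground_truth_lines) -> float:
    # First evaluation index: average recognition score over all text lines.
    scores = [recognition_score(r, g)
              for r, g in zip(recognized_lines, ground_truth_lines)]
    return sum(scores) / len(scores)
```

With this formula a perfectly restored line scores 1.0 and a completely different line scores 0.0, so the average is directly comparable across images.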
In a possible implementation manner, the generating the second evaluation index of the target image processing effect includes:
calculating the average value of gradients of the character smearing images of all the character lines in the target image;
and generating a second evaluation index of the target image processing effect by using a preset gradient threshold value and an average value of gradients of the character smearing images of all the text lines in the target image.
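A minimal sketch of this index, assuming grayscale images as NumPy arrays, a simple finite-difference gradient, and a hypothetical linear scoring rule (the patent specifies neither the gradient operator nor the exact score formula):

```python
import numpy as np

def mean_gradient(img: np.ndarray) -> float:
    """Mean magnitude of finite-difference gradients of a grayscale image.
    A smoothly smeared background has a small mean gradient."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def second_index(erase_images, grad_threshold: float = 10.0) -> float:
    """Second evaluation index: average gradient of all text smear images,
    scored against a preset threshold. The linear mapping to [0, 1] is an
    illustrative assumption."""
    avg = np.mean([mean_gradient(im) for im in erase_images])
    return float(max(0.0, 1.0 - avg / grad_threshold))
```

A perfectly flat smeared background yields a mean gradient of 0 and hence the maximum score of 1.0.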
In a possible implementation manner, the generating the third evaluation index of the target image processing effect includes:
performing mask operation on the original text line image by using the text region mask image of the text line in the target image to obtain a text line image without text;
calculating the color distance between the text region mask image of the text line and the text line image which does not contain text;
and calculating the score of the color distance as a third evaluation index of the target image processing effect.
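One way the mask operation and color distance might be realized is sketched below; the Euclidean mean-color distance and the `1 / (1 + distance)` score are illustrative assumptions, since the patent does not fix the exact formulas.

```python
import numpy as np

def third_index(line_img: np.ndarray, erase_img: np.ndarray,
                mask: np.ndarray) -> float:
    """Sketch of the third index: compare the color painted into the erased
    text region against the surrounding true background color.
    mask: H x W array (nonzero marks text); images: H x W x 3 arrays."""
    text_region = mask > 0
    # Mean color of the smeared pixels where text used to be.
    smear_color = erase_img[text_region].astype(float).mean(axis=0)
    # Mean color of the genuine background (original pixels outside the text).
    bg_color = line_img[~text_region].astype(float).mean(axis=0)
    dist = float(np.linalg.norm(smear_color - bg_color))  # Euclidean RGB distance
    return 1.0 / (1.0 + dist)  # hypothetical score: 1.0 when colors match
```

When the erased region is painted with exactly the background color, the distance is 0 and the score is 1.0.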
In a possible implementation manner, the generating the fourth evaluation index of the target image processing effect includes:
calculating the brightness distance between the text smearing image of the text line and the text line image which does not contain the text;
and calculating the score of the brightness distance as a fourth evaluation index of the target image processing effect.
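A sketch of the brightness comparison, using the ITU-R BT.601 luma formula for brightness; the mean-luminance distance and the `1 / (1 + distance)` score are assumptions, since the patent does not give the exact computation.

```python
import numpy as np

def luminance(img: np.ndarray) -> np.ndarray:
    """Per-pixel luma of an H x W x 3 RGB image (ITU-R BT.601 weights)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def fourth_index(erase_img: np.ndarray, line_img: np.ndarray,
                 mask: np.ndarray) -> float:
    """Sketch of the fourth index: mean-luminance distance between the
    smeared (formerly text) region and the surrounding true background."""
    text_region = mask > 0
    lum_smear = luminance(erase_img.astype(float))[text_region].mean()
    lum_bg = luminance(line_img.astype(float))[~text_region].mean()
    dist = abs(float(lum_smear) - float(lum_bg))
    return 1.0 / (1.0 + dist)  # hypothetical score: 1.0 when brightness matches
```

Matching brightness between the smeared patch and its surroundings gives the maximum score of 1.0.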
In a possible implementation manner, the generating a fifth evaluation index of the target image processing effect includes:
generating a color distribution histogram of a character smearing image of the character rows in the target image; generating a color distribution histogram of the text line image which does not contain characters;
and calculating the Pearson correlation coefficient score between the color distribution histogram of the text smear image of the text line in the target image and the color distribution histogram of the text line image that does not contain text, and taking this score as a fifth evaluation index of the target image processing effect.
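The histogram comparison can be sketched as follows; the bin count and the per-channel histogram layout are assumptions for illustration.

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel color histograms of an H x W x 3 image."""
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)

def fifth_index(erase_img: np.ndarray, bg_img: np.ndarray) -> float:
    """Sketch of the fifth index: Pearson correlation coefficient between
    the color distribution histograms of the text smear image and the
    text line image that does not contain text."""
    h1, h2 = color_histogram(erase_img), color_histogram(bg_img)
    return float(np.corrcoef(h1, h2)[0, 1])
```

Identical color distributions give a correlation of 1.0; the less the smeared background resembles the true background, the lower the coefficient.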
In a possible implementation manner, the determining the processing quality of the target image processing effect according to the evaluation index includes:
weighting each evaluation index included in the evaluation index of the target image processing effect;
And determining the processing quality of the target image processing effect according to the weighted processing result.
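The weighting step above can be sketched as a simple weighted average over whichever indexes were generated; the equal default weights are an illustrative assumption, since the patent leaves the weights configurable.

```python
def overall_quality(indices, weights=None) -> float:
    """Combine the generated evaluation indexes into one processing-quality
    score via a normalized weighted sum. Weights are illustrative."""
    if weights is None:
        weights = [1.0] * len(indices)  # equal weighting by default
    assert len(weights) == len(indices)
    total = sum(w * x for w, x in zip(weights, indices))
    return total / sum(weights)
```

For example, combining a first index of 1.0 and a second index of 0.5 with equal weights yields an overall quality of 0.75; raising the weight of one index pulls the result toward it.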
The embodiment of the application also provides an evaluation device of the image processing effect, which comprises:
the acquisition unit is used for acquiring a target image processing effect to be evaluated; the target image processing effect comprises a character smearing image of the character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image;
a generation unit configured to generate an evaluation index of a target image processing effect, the evaluation index including at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index, and a fifth evaluation index;
wherein the first evaluation index is generated according to the difference between the text smear image of the text line in the target image and the text content of the text line; the second evaluation index is generated according to the text smear image of the text line in the target image; the third evaluation index is generated according to the text region mask image of the text line in the target image; the fourth evaluation index is generated according to the text smear image of the text line in the target image; and the fifth evaluation index is generated according to a color distribution histogram of the text smear image of the text line in the target image;
And the determining unit is used for determining the processing quality of the target image processing effect according to the evaluation index.
In a possible implementation manner, the generating unit includes:
a filling subunit, configured to fill a text region mask image of a text line in the target image by using a color prediction result of the text line in the target image, so as to obtain a filled text line image;
a first calculating subunit, configured to calculate a text editing distance between a text recognition result of the filled text line image and text content of the text line;
a second calculating subunit, configured to calculate an identification score of the text line according to the text editing distance;
and the third computing subunit is used for computing the average value of the recognition scores of all text lines in the target image as a first evaluation index of the target image processing effect.
In a possible implementation manner, the generating unit includes:
a fourth calculating subunit, configured to calculate an average value of gradients of the text smear images of all text lines in the target image;
the first generation subunit is used for generating a second evaluation index of the target image processing effect by using a preset gradient threshold value and an average value of gradients of the character smearing images of all text lines in the target image.
In a possible implementation manner, the generating unit includes:
the obtaining subunit is used for carrying out mask operation on the original image of the text line by utilizing the mask image of the text region of the text line in the target image to obtain a text line image without text;
a fifth calculating subunit, configured to calculate a color distance between the text region mask image of the text line and the text line image that does not include text;
and a sixth calculation subunit for calculating the score of the color distance as a third evaluation index of the target image processing effect.
In a possible implementation manner, the generating unit includes:
a seventh calculating subunit, configured to calculate a brightness distance between the text smear image of the text line and the text line image that does not include text;
an eighth calculation subunit for calculating a score of the luminance distance as a fourth evaluation index of the target image processing effect.
In a possible implementation manner, the generating unit includes:
a second generation subunit, configured to generate a color distribution histogram of a text smear image of a text line in the target image; generating a color distribution histogram of the text line image which does not contain characters;
a ninth calculating subunit, configured to calculate, as a fifth evaluation index of the target image processing effect, the Pearson correlation coefficient score between the color distribution histogram of the text smear image of the text line in the target image and the color distribution histogram of the text line image that does not contain text.
In a possible implementation manner, the determining unit includes:
a weighting subunit, configured to perform weighting processing on each evaluation index included in the evaluation index of the target image processing effect;
and the determining subunit is used for determining the processing quality of the target image processing effect according to the weighted processing result.
The embodiment of the application also provides an evaluation device of the image processing effect, which comprises: a processor, memory, system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform any one of the implementations of the method of evaluating image processing effects described above.
The embodiment of the application also provides a computer readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to execute any implementation of the method of evaluating an image processing effect described above.
The embodiment of the application also provides a computer program product which, when run on a terminal device, causes the terminal device to execute any implementation of the method of evaluating an image processing effect described above.
The embodiment of the application provides an evaluation method and device for image processing effect, firstly, a target image processing effect to be evaluated is obtained; the target image processing effect comprises a character smearing image of a character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image, and then an evaluation index of the target image processing effect is generated, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index and a fifth evaluation index, and the first evaluation index is generated according to the difference between the character smearing image of the character line in the target image and the character content of the character line; the second evaluation index is generated according to the character smearing image of the character row in the target image; the third evaluation index is generated according to the character area mask image of the character line in the target image; the fourth evaluation index is generated according to the character smearing image of the character row in the target image; the fifth evaluation index is generated according to the color distribution histogram of the character smear image of the character line in the target image, and further the processing quality of the target image processing effect can be determined according to the evaluation index.
Therefore, in the embodiment of the application, when the processing quality of the target image processing effect is evaluated, the association relationship among the character smearing image of the character line, the character region mask image of the character line and the color prediction result of the character line in the target image is considered, so that the processing quality of the image processing effect can be automatically and accurately evaluated by utilizing each evaluation index.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of an evaluation method of an image processing effect according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a target image provided by an embodiment of the present application;
FIG. 3 is an exemplary diagram of an original text line image in a target image provided in an embodiment of the present application;
FIG. 4 is an exemplary diagram of a text smear image of a text line in a target image provided by an embodiment of the present application;
FIG. 5 is an exemplary diagram of a text region mask image of a text line in a target image provided in an embodiment of the present application;
fig. 6 is a schematic diagram of the composition of an evaluation device for image processing effect according to an embodiment of the present application.
Detailed Description
In some image processing effect evaluation methods, a large amount of manpower is usually required to evaluate image processing quality manually, and it is difficult to evaluate it automatically. Specifically, when manually evaluating the image text erasure and restoration effect, an evaluator first uses a first image text smearing and restoration engine to perform text erasure and restoration on a batch test set A, obtaining a result set D1; a second image text smearing and restoration engine is then used to perform text erasure and restoration on the same test set A, obtaining a result set D2. Next, the evaluator compares the corresponding images d1 and d2 in D1 and D2 one by one and, referring to the corresponding original image a1 in test set A, assigns a subjective score to each of the two images; the scoring criteria are the degree of similarity between the erased text background and the original background, and the degree of similarity between the restored text and its color and the original text and its color. Finally, all scores are aggregated to obtain an effect analysis report for the two image text smearing and restoration engines and a comparison between them.
However, this manual evaluation method has no uniform, strict evaluation criteria, and its results are strongly affected by the subjective impressions of the evaluators; it is therefore not persuasive enough and difficult to quantify, and not only is the evaluation efficiency low, but a large amount of human resources must also be spent.
In order to solve the above-mentioned drawbacks, the embodiment of the present application provides an evaluation method of image processing effects, first, obtaining a target image processing effect to be evaluated; the target image processing effect comprises a character smearing image of a character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image, and then an evaluation index of the target image processing effect is generated, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index and a fifth evaluation index, and the first evaluation index is generated according to the difference between the character smearing image of the character line in the target image and the character content of the character line; the second evaluation index is generated according to the character smearing image of the character row in the target image; the third evaluation index is generated according to the character area mask image of the character line in the target image; the fourth evaluation index is generated according to the character smearing image of the character row in the target image; the fifth evaluation index is generated according to the color distribution histogram of the character smear image of the character line in the target image, and further the processing quality of the target image processing effect can be determined according to the evaluation index.
Therefore, in the embodiment of the application, when the processing quality of the target image processing effect is evaluated, the association relationship among the character smearing image of the character line, the character region mask image of the character line and the color prediction result of the character line in the target image is considered, so that the processing quality of the image processing effect can be automatically and accurately evaluated by utilizing each evaluation index.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
First embodiment
Referring to fig. 1, a flowchart of a method for evaluating an image processing effect according to the present embodiment is provided, and the method includes the following steps:
s101: and obtaining a target image processing effect to be evaluated, wherein the target image processing effect comprises a character smearing image of a character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image.
In this embodiment, any image processing effect whose processing quality is to be evaluated using this embodiment is defined as the target image processing effect. The target image processing effect may be the effect obtained by processing a target image with image processing techniques such as text detection, text recognition, and text erasure and restoration. Any existing or future image text processing engine may be used to process the target image to obtain the corresponding image processing effect.
In the following, the application is described taking as the target image processing effect the image text smearing and restoration effect obtained by processing the target image with image processing techniques such as text detection, text recognition, and text erasure and restoration.
In this embodiment, the target image may be input into an existing image text smearing and restoration engine, ENGINE_E&R, for processing to obtain the target image processing effect to be evaluated, which is then used in the subsequent step S102.
The core technology of ENGINE_E&R is a deep learning network, UNet: convolution layers extract deep features of the image, and several deconvolution layers then produce the predicted text background image and the detected text region image. The engine then counts the color distribution inside the text region of the text line image, according to the text region output by the network, and computes the color of the text on the image.
Specifically, the input of ENGINE_E&R may be a target image containing text (defined herein as Im_Source; see the example in fig. 2) and a JSON-format file (defined herein as Json_S) obtained by recognizing the target image with the text recognition engine ENGINE_Reg. Json_S contains, for each of the N text lines on the target image (N being a positive integer greater than 0; see the example in fig. 3), the vertex position coordinates of the text line (defined herein as Line_Coor, from which the text line image, defined herein as Im_Line, can be cropped out of Im_Source) and its text content (Line_Cont).
The output of ENGINE_E&R (i.e., the target image processing effect) comprises: the text smear images of the N text lines on the target image (defined herein as Im_Erase; these can be regarded as the background images of the text lines separated by the smearing and restoration engine, as in the example shown in fig. 4); the text region mask (Mask) images of the N text lines (defined herein as Im_Mask; these can be regarded as the foreground images of the text lines separated by the smearing and restoration engine, as in the example shown in fig. 5); and the text color prediction results of the N text lines (defined herein as Color_Rgb, in RGB format, each of the three channel values being a floating-point value in the range 0 to 255). The three images Im_Line, Im_Erase, and Im_Mask have the same size, with Pixel_Num pixels each.
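To make the data flow above concrete, the per-line input and output of the engine can be grouped into one record. This is purely an illustrative sketch: the field names and the dataclass itself are assumptions for exposition, not part of the patent.

```python
from dataclasses import dataclass
from typing import Tuple
import numpy as np

@dataclass
class LineResult:
    """One text line's share of the ENGINE_E&R input/output described above.
    Field names are illustrative, not taken from the patent."""
    im_line: np.ndarray   # cropped original text line image (H x W x 3)
    im_erase: np.ndarray  # text smear (background) image, same size
    im_mask: np.ndarray   # binary text region mask image, same size
    color_rgb: Tuple[float, float, float]  # predicted text color, 0-255 floats
    line_cont: str        # recognized text content from Json_S
```

The three image fields are required to share one size, mirroring the constraint that Im_Line, Im_Erase, and Im_Mask all contain Pixel_Num pixels.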
S102: and generating an evaluation index of the target image processing effect, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index and a fifth evaluation index.
In this embodiment, in order to accurately evaluate the processing quality of the target image processing effect, it is first necessary to generate an evaluation index of the target image processing effect, which may include at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index, and a fifth evaluation index, and then, the processing quality of the target image processing effect may be evaluated by executing the subsequent step S103 using the evaluation index.
The first evaluation index, the second evaluation index, the third evaluation index, the fourth evaluation index, and the fifth evaluation index are described in order.
(1) In this embodiment, the first evaluation index refers to an evaluation index generated from the difference between the text smear image of a text line in the target image and the text content of that text line; it is used to characterize the text restoration effect.
Specifically, an alternative implementation manner is that the implementation process of generating the first evaluation index of the target image processing effect may include the following steps A1-A4:
Step A1: and filling the text region mask image of the text line in the target image by using the color prediction result of the text line in the target image to obtain a filled text line image.
In the present implementation, the text region mask image Im_Mask of each text line obtained by the image text smearing and restoration engine ENGINE_E&R is a binary image in which the detected text region is white (255, 255, 255) and the other regions are black (0, 0, 0). Therefore, the text region in the Im_Mask of each line can be filled with the color prediction result Color_Rgb of the corresponding text line in the target image, to obtain a filled text line image, defined herein as Im_Fill.
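The filling operation of step A1 can be sketched as follows; this is a minimal illustration assuming NumPy arrays, with the function name being a hypothetical choice rather than the patent's.

```python
import numpy as np

def fill_mask(im_mask: np.ndarray, color_rgb) -> np.ndarray:
    """Produce Im_Fill: paint the white (text) region of the binary mask
    image with the predicted text color, leaving the black region black."""
    # White pixels (255, 255, 255) mark the detected text region.
    text_region = np.all(im_mask == 255, axis=-1)
    im_fill = np.zeros_like(im_mask)
    im_fill[text_region] = np.asarray(color_rgb, dtype=im_mask.dtype)
    return im_fill
```

The result is the colored-text-on-black image that is fed back into the recognition engine in step A2.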
Step A2: and calculating the text editing distance between the text recognition result of the filled text line image and the text content of the text line.
After obtaining the filled text Line image im_fill in step A1, im_fill may be further input into a text recognition ENGINE engine_reg to obtain a text result recognized by the ENGINE, which is defined herein as new_line_cont.
Since the same recognition engine Engine_Reg is used here as in step S101, in the ideal case the recognition of Im_Fill should yield the same content as Line_Cont of the same text line. Therefore, the subsequent step A3 can be performed after calculating the text edit distance between New_Line_Cont (N1 characters in total) of each text line and Line_Cont (N0 characters in total) of the same text line on the target image.
The text edit distance is used to compare the similarity of two character strings, and specifically refers to the minimum number of editing operations required to convert one character string into the other. The larger the text edit distance between two strings, the more different they are. The permitted editing operations include replacing one character with another character, inserting one character, and deleting one character.
For example, the steps of calculating the text edit distance between the two strings kitttin and sitting may include:
1) kitttin -> sitttin: one replacement operation is performed, replacing k with s; the replacement count is recorded as 1.
2) sitttin -> sittin: one deletion operation is performed, deleting one t; the deletion count is recorded as 1.
3) sittin -> sitting: one insertion operation is performed, adding g; the insertion count is recorded as 1.
Through the above editing operations, the text edit distance between the two strings kitttin and sitting is obtained as: number of replacements 1, number of deletions 1, and number of insertions 1.
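The edit-distance computation described above can be sketched with the standard dynamic-programming algorithm; this is an illustrative implementation, not code from the patent.

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of replace/insert/delete operations turning a into b."""
    m, n = len(a), len(b)
    # d[i][j] = edit distance between the prefixes a[:i] and b[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all i characters
    for j in range(n + 1):
        d[0][j] = j          # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # replace (or match)
    return d[m][n]

print(edit_distance("kitttin", "sitting"))  # 3: one replace, one delete, one insert
```

The example above (one replacement, one deletion, one insertion) is thus confirmed to be a minimal edit sequence.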
Step A3: and calculating the identification score of the text line according to the text editing distance.
After the text edit distance between New_Line_Cont (N1 characters in total) of each text line and Line_Cont (N0 characters in total) of the same text line on the target image is obtained through step A2, the recognition score of each text line can be further calculated. The specific calculation formula is as follows:

Score_Reg_i = 1 - (DELETE + REPLACE + INSERT) / N0    (1)

Where Score_Reg_i represents the recognition Score of the ith text line; N0 represents the total number of characters of the ith text line; DELETE represents the number of deletions contained in the text edit distance; REPLACE represents the number of replacements contained in the text edit distance; INSERT represents the number of insertions contained in the text edit distance.
Step A4: and calculating the average value of the identification scores of all text lines in the target image, and taking the average value as a first evaluation index of the target image processing effect.
After the recognition scores of all text lines in the target image are calculated using formula (1), they can be further averaged, and the average used as the first evaluation index of the target image processing effect. This score characterizes the character restoration effect: the larger the score, the better the character content and color are restored in the target image processing effect obtained after the Engine_E&R processing. The specific calculation formula is as follows:

Score_Reg = (Score_Reg_1 + Score_Reg_2 + ... + Score_Reg_N) / N    (2)

Where Score_Reg represents the average value of the recognition scores of all text lines in the target image, namely the first evaluation index of the target image processing effect; Score_Reg_i represents the recognition Score of the ith text line; N represents the total number of text lines.
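A minimal sketch of the per-line recognition score and its average over all lines, assuming Score_Reg_i = 1 - (DELETE + REPLACE + INSERT) / N0, which is consistent with the stated property that a larger score means better restoration; the function names are illustrative, not taken from the patent.

```python
def line_score(n0: int, delete: int, replace: int, insert: int) -> float:
    """Recognition score of one text line from its edit-operation counts."""
    return 1.0 - (delete + replace + insert) / n0

def score_reg(lines):
    """lines: list of (n0, delete, replace, insert) tuples, one per text line."""
    return sum(line_score(*ops) for ops in lines) / len(lines)

# Two text lines: one perfectly restored, one with a single substitution in 10 chars.
print(score_reg([(7, 0, 0, 0), (10, 0, 1, 0)]))  # 0.95
```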
(2) In this embodiment, the second evaluation index is generated from the character-smeared images of the text lines in the target image, and is used to characterize the gradient effect of the character smearing.
Specifically, an alternative implementation manner is that the implementation process of generating the second evaluation index of the target image processing effect may include the following steps B1-B2:
step B1: and calculating the average value of the gradients of the character smearing images of all the character lines in the target image.
Step B2: and generating a second evaluation index of the target image processing effect by using a preset gradient threshold value and an average value of gradients of the character smearing images of all the text lines in the target image.
In this implementation, a piece of prior knowledge can first be obtained by observing the backgrounds of text lines in images: if the background of a text line is smooth, with no abrupt color transitions and no obvious edges, its gradient should not be too large. A gradient threshold, defined herein as Grad_Thresh, can therefore be preset: when the gradient of the character-smeared image of a text line in the target image exceeds this threshold, the background image separated by Engine_E&R can be considered poor, and the further above the threshold the gradient is, the worse the separation effect.
For the smeared image Im_Erase of each text line on the target image, the Sobel operator can be used to calculate its gradient value (defined herein as Grad_E). The Sobel filter kernels used to calculate the values in the two directions x and y are as follows:

Sobel_x = [-1 0 1; -2 0 2; -1 0 1],    Sobel_y = [-1 -2 -1; 0 0 0; 1 2 1]

Then, convolution operations are performed on the image in the x and y directions with the Sobel filter kernels, giving the gradients of the image in the x and y directions:

Gx = Sobel_x * Im_Erase,    Gy = Sobel_y * Im_Erase

The gradient value Grad_E of the image is then calculated as follows:

Grad_E = sqrt(Gx^2 + Gy^2)
The average value of the gradients of the character-smeared images of all text lines in the target image can then be calculated. The specific calculation formula is as follows:

avg_Grad_E = (Grad_E_1 + Grad_E_2 + ... + Grad_E_N) / N

Where avg_Grad_E represents the average value of the gradients of the character-smeared images of all text lines in the target image; Grad_E_i represents the gradient value of the smeared image of the ith text line; N represents the total number of text lines.
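The Sobel-based gradient described above can be sketched as follows; the 3x3 kernels and the magnitude sqrt(Gx^2 + Gy^2) are the standard Sobel operator, while the implementation details (interior pixels only, pure NumPy) are illustrative.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # x direction
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)   # y direction

def sobel_mean_gradient(img: np.ndarray) -> float:
    """Mean Sobel gradient magnitude over the interior of a grayscale image."""
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate with each kernel by shifting the image, one tap at a time.
    for di in range(3):
        for dj in range(3):
            patch = img[di:di + h - 2, dj:dj + w - 2]
            gx += KX[di, dj] * patch
            gy += KY[di, dj] * patch
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

flat = np.full((8, 8), 128)                                  # smooth background
edge = np.hstack([np.zeros((8, 4)), np.full((8, 4), 255)])   # hard vertical edge
print(sobel_mean_gradient(flat))        # 0.0 — smooth background, gradient zero
print(sobel_mean_gradient(edge) > 0)    # True — an edge raises the gradient
```

A smooth, well-separated background thus yields a small avg_Grad_E, while residual character edges push it above Grad_Thresh.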
Further, a gradient score of the character-smeared images can be calculated from the preset gradient threshold and the gradients of the character-smeared images of all text lines in the target image, and used as the second evaluation index of the target image processing effect. The smaller this score, the smoother the background image obtained by separating the text line images in the target image processing effect obtained after the Engine_E&R processing. The specific calculation formula is as follows:

Score_Grad = (max(Grad_E_1 - Grad_Thresh, 0) + ... + max(Grad_E_N - Grad_Thresh, 0)) / N

Where Score_Grad represents the gradient score of the character-smeared images of all text lines in the target image, namely the second evaluation index of the target image processing effect; the smaller the gradient score, the smoother the background image obtained by Engine_E&R separating the text line images; Grad_E_i represents the gradient value of the smeared image of the ith text line; Grad_Thresh represents the preset gradient threshold; N represents the total number of text lines.
(3) In this embodiment, the third evaluation index is generated from the text region mask images of the text lines in the target image, and is used to characterize the color effect of the character smearing.
Specifically, an alternative implementation manner is that the implementation process of generating the third evaluation index of the target image processing effect may include the following steps C1 to C3:
step C1: and performing mask operation on the original text line image by using the text region mask image of the text line in the target image to obtain a text line image without text.
In this implementation, the Im_line image is obtained by slicing from the target image Im_Source according to the text line vertex coordinate information Line_Coor in Json_S. Therefore, when performing a masking operation on the original text line image using the text region mask image of the text line in the target image to obtain a text line image containing no text (defined herein as Im_line_No_Text), Im_line_No_Text may first be initialized to Im_line; the Im_line image is identical in size to Im_Mask, Im_Erase, and Im_line_No_Text.
Then, a background Color Array array_back_color may be created, which is used to store Color values of pixels determined to be a background area. Traversing each pixel point (x, y) on the im_line image, wherein the Color value of the pixel point is Color0; meanwhile, checking whether the color value of the same position on the im_mask is (255 ), if so, judging the point as a text point by the engine_e & R, and not processing the text point; if not, meaning that the point is judged by engine_e & R as the background point of the target image, color0 may be added to the array_back_color. By analogy, when each pixel in im_line has been processed in the manner described above, the Color average of the background area can be calculated from the array_back_color, which is defined herein as avg_back_color.
Then, traversing each pixel point (x, y) on the im_line image, wherein the Color value of each pixel point is Color0; meanwhile, whether the Color value at the same position on the im_mask is (255 ) is checked, if yes, the point is judged to be a Text point by the engine_e & R, and the Color value at the same position on the im_line_no_text can be set to avg_back_color; if not, it means that the point is judged by engine_e & R as the background point of the target image, and the Color value at the same position on im_line_no_text is set to Color0, that is, the original pixel value is kept unchanged. By analogy, after each point on im_line has been processed in the above manner, the gradient magnitude Grad_No_text of im_line_No_text may be calculated according to the method in step (2), and if Grad_No_text is not greater than Grad_Thresh, it may be determined that the Text on im_line_No_text has been removed, and it may not be processed any more; if the number is greater than Grad_Thresh, it can be determined that a part of Text edges remain on the Im_line_No_text, and then the Im_line_No_text needs to be restored back to the Im_line again, then after the Im_mask is expanded to obtain a new Im_mask, the above-mentioned processing procedure is repeatedly executed until the Grad_No_text is not greater than Grad_Thresh.
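A NumPy sketch of one pass of step C1: the average background color is estimated from the pixels the mask marks as background, and the masked text pixels are overwritten with it (the iterative mask-dilation refinement described above is omitted). Variable names mirror the description, but the implementation itself is illustrative.

```python
import numpy as np

def remove_text(im_line: np.ndarray, im_mask: np.ndarray) -> np.ndarray:
    """im_line: HxWx3 uint8 crop; im_mask: HxWx3 binary mask, text pixels white."""
    is_text = np.all(im_mask == 255, axis=-1)        # True where Engine_E&R saw text
    avg_back_color = im_line[~is_text].mean(axis=0)  # mean rgb of background pixels
    im_line_no_text = im_line.copy()
    im_line_no_text[is_text] = avg_back_color.astype(np.uint8)
    return im_line_no_text

line = np.full((4, 4, 3), 200, dtype=np.uint8)   # uniform background ...
line[1:3, 1:3] = 0                               # ... with a dark "character"
mask = np.zeros_like(line)
mask[1:3, 1:3] = 255                             # mask flags exactly those pixels
print(np.unique(remove_text(line, mask)))        # [200]: text replaced by background
```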
Step C2: the color distance between the text line text region mask image and the text line image that does not contain text is calculated.
After the text line image Im_line_No_Text containing no text is obtained through step C1, the color distance (defined herein as Dist_Color_xy) between each point (x, y) on the character-smeared image Im_Erase and the point at the same position on Im_line_No_Text can be calculated. Let the color of point (x, y) on Im_Erase be Color_Erase, with rgb values r1, g1, b1, and let the color of point (x, y) on Im_line_No_Text be Color_Text, with rgb values r2, g2, b2. The color distance is calculated as follows:

Dist_Color_xy = sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)
step C3: a score of the color distance is calculated as a third evaluation index of the target image processing effect.
After the color distances are calculated in step C2, the score of the color distance can be further calculated and used as the third evaluation index of the target image processing effect. The smaller this score, the closer the color of the background image obtained by separating the text line images in the target image processing effect obtained after the Engine_E&R processing is to the color of the original image background. The specific calculation formula is as follows:

Score_Color = (Σ_(x,y) Dist_Color_xy) / Pixel_Num

Where Score_Color represents the score of the color distance between the character-smeared image of the text lines in the target image and the text line image containing no text, namely the third evaluation index of the target image processing effect; Dist_Color_xy represents the color distance between the points (x, y) at the same position on Im_Erase and Im_line_No_Text; Pixel_Num represents the total number of pixels.
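Steps C2 and C3 can be sketched as below, assuming Dist_Color_xy is the Euclidean rgb distance and Score_Color is its average over all pixels (consistent with "the smaller the score, the closer the colors"); the function name is illustrative.

```python
import numpy as np

def score_color(im_erase: np.ndarray, im_no_text: np.ndarray) -> float:
    """Average per-pixel Euclidean rgb distance between two HxWx3 images."""
    diff = im_erase.astype(float) - im_no_text.astype(float)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel Dist_Color_xy
    return float(dist.mean())                  # average over Pixel_Num pixels

a = np.full((2, 2, 3), 100, dtype=np.uint8)
b = np.full((2, 2, 3), 103, dtype=np.uint8)    # each channel off by 3
print(score_color(a, b))                       # sqrt(27) ≈ 5.196
```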
(4) In this embodiment, the fourth evaluation index is generated from the character-smeared images of the text lines in the target image, and is used to characterize the brightness effect of the character smearing.
Specifically, an alternative implementation manner is that the implementation process of generating the fourth evaluation index of the target image processing effect may include the following steps D1-D2:
step D1: the luminance distance between the text line image and the text line image containing no text is calculated.
Step D2: a score of the luminance distance is calculated as a fourth evaluation index of the target image processing effect.
In this implementation, the color model HSV is a representation of the points of the RGB color model in a cylindrical coordinate system, where H represents hue, S represents saturation, and V represents brightness. Let (r, g, b) be the red, green, and blue coordinates of a point, each a real number between 0 and 1; let max be the maximum of r, g, and b, and let min be the minimum of these values. The point represented by (r, g, b) is then converted into the (h, s, v) values of the HSV space as follows:

h = 0°, if max = min; h = 60° × (g − b)/(max − min) + 0°, if max = r and g ≥ b; h = 60° × (g − b)/(max − min) + 360°, if max = r and g < b; h = 60° × (b − r)/(max − min) + 120°, if max = g; h = 60° × (r − g)/(max − min) + 240°, if max = b    (7)

s = 0, if max = 0; s = (max − min)/max = 1 − min/max, otherwise    (8)

v = max    (9)
The specific implementation process is consistent with the existing mode, and is not repeated here.
Thus, by the above formulas (8) and (9), the brightness value in the HSV color value of each point (x, y) on the character-smeared image Im_Erase and on the text line image Im_line_No_Text containing no text can be calculated respectively; these are defined herein as V_Erase_xy and V_No_Text_xy.
Further, the average brightness of Im_Erase and Im_line_No_Text may be calculated as follows:

Avg_V_Erase = (Σ_(x,y) V_Erase_xy) / Pixel_Num    (10)

Avg_V_No_Text = (Σ_(x,y) V_No_Text_xy) / Pixel_Num    (11)

Where Avg_V_Erase represents the average brightness of Im_Erase; Avg_V_No_Text represents the average brightness of Im_line_No_Text; Pixel_Num represents the number of pixels in the corresponding image.
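The brightness computation of steps D1-D2 reduces to the V channel of HSV (v = max of the scaled r, g, b values) averaged over an image; a pure-Python sketch with illustrative names:

```python
def v_channel(pixel):
    """HSV brightness of one rgb pixel with 0-255 channels."""
    r, g, b = (c / 255.0 for c in pixel)   # scale to [0, 1] as in the formula
    return max(r, g, b)                    # v = max  (formula (9))

def average_brightness(pixels):
    """Average V over an iterable of rgb pixels, e.g. one flattened image."""
    return sum(v_channel(p) for p in pixels) / len(pixels)

erase = [(255, 0, 0), (0, 128, 0)]          # V = 1.0 and 128/255 ≈ 0.502
print(round(average_brightness(erase), 3))  # 0.751
```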
Still further, the score of the brightness distance may be calculated and used as the fourth evaluation index of the target image processing effect. The smaller this score, the closer the brightness of the background image obtained by separating the text line images in the target image processing effect obtained after the Engine_E&R processing is to the brightness of the original image background. The specific calculation formula is as follows:

Score_Bright = (|Avg_V_Erase_1 − Avg_V_No_Text_1| + ... + |Avg_V_Erase_N − Avg_V_No_Text_N|) / N    (12)

Where Score_Bright represents the score of the brightness distance between the character-smeared images of the text lines in the target image and the text line images containing no text, namely the fourth evaluation index of the target image processing effect; Avg_V_Erase_i and Avg_V_No_Text_i represent the average brightness of the smeared image and of the text-free image of the ith text line, respectively; N represents the total number of text lines.
(5) In this embodiment, the fifth evaluation index is generated from the color distribution histograms of the character-smeared images of the text lines in the target image, and is used to characterize the color distribution effect of the character smearing.
Specifically, an alternative implementation manner is that the implementation process of generating the fifth evaluation index of the target image processing effect may include the following steps E1-E2:
step E1: generating a color distribution histogram of the character smear image of the character row in the target image; and generating a color distribution histogram of the text line image that does not contain text.
Step E2: and calculating the Pearson correlation coefficient score of the color distribution histogram of the character smearing image of the character row in the target image and the color distribution histogram of the text row image without characters as a fifth evaluation index of the target image processing effect.
Specifically, in the present implementation, the colors of Im_Erase and Im_line_No_Text are first uniformly divided into a preset number of bins (defined as Num, generally 64), and a histogram calculation method is then used to obtain Hist_Erase and Hist_No_Text, the histograms representing the color distributions of the two images. The closer the color distributions of the two images are, the better Im_Erase restores the background of Im_line. The present application employs the Pearson correlation coefficient to measure the closeness of the color distributions. Specifically, the formula for calculating the Pearson correlation coefficient of Hist_Erase and Hist_No_Text is as follows:

Score_Pearson = (Num·Σ(Hist_Erase_i·Hist_No_Text_i) − ΣHist_Erase_i·ΣHist_No_Text_i) / (sqrt(Num·ΣHist_Erase_i² − (ΣHist_Erase_i)²) · sqrt(Num·ΣHist_No_Text_i² − (ΣHist_No_Text_i)²))    (13)

where the sums run over the bins i = 1, ..., Num.
Here, score_pearson represents a Pearson correlation coefficient Score of a color distribution histogram of a text smear image of a text line in the target image and a color distribution histogram of a text line image containing no text, that is, a fifth evaluation index representing the target image processing effect, and the greater the Score, the closer the color distribution of a background image obtained by separating the text line image in the target image processing effect obtained after the engine_e & R processing is to the color distribution of the background of the original image.
An example of the parameter values in the above formula (13) is as follows:
Let im_erase and im_line_no_text be gray images (i.e. the pixel values are single-channel values, each in the range 0-255), let im_erase have 2500 pixels (size 50 x 50), and let im_line_no_text have 3000 pixels (size 50 x 60). The range 0-255 is divided into 3 bins, namely 0-84, 85-169, and 170-255; each bin is a range, and the number of pixels falling in each of the three ranges is counted on each image.
Assuming that 1100 pixel values fall within 0-84, 600 pixel values fall within 85-169, and 900 pixel values fall within 170-255 in im_erase. The color distribution histogram hist_erase of im_erase can be expressed by a vector of [1100,600,900 ].
Assuming 500 pixel values in im_line_no_text fall within 0-84, 1500 pixel values fall within 85-169, and 1000 pixel values fall within 170-255. The color distribution histogram hist_no_text of im_line_no_text may be represented by a vector of [500,1500,1000 ].
Then, corresponding to the above formula (13), the following values are obtained:

Num is the number of bins, i.e. 3;

Σ Hist_Erase_i = 1100 + 600 + 900 = 2600;

Σ Hist_No_Text_i = 500 + 1500 + 1000 = 3000;

Σ Hist_Erase_i · Hist_No_Text_i = 1100 × 500 + 600 × 1500 + 900 × 1000 = 2350000;

Σ Hist_Erase_i² = 1100² + 600² + 900² = 2380000;

Σ Hist_No_Text_i² = 500² + 1500² + 1000² = 3500000.
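The worked example can be checked numerically; the sketch below computes the Pearson correlation coefficient of the two example histograms using the standard mean-centred definition, which is algebraically equivalent to formula (13).

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))

hist_erase = [1100, 600, 900]
hist_no_text = [500, 1500, 1000]
r = pearson(hist_erase, hist_no_text)
print(round(r, 3))   # -0.993: these two color distributions are far apart
```

A score near +1 would indicate that the smeared image closely restores the background's color distribution; the strongly negative value here signals a poor restoration.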
S103: determining the processing quality of the target image processing effect according to the evaluation index.
In this embodiment, after generating the evaluation index of the target image processing effect in step S102, that is, generating at least one of the first evaluation index score_reg, the second evaluation index score_grad, the third evaluation index score_color, the fourth evaluation index score_bright, and the fifth evaluation index score_pearson, the data processing may be further performed on the evaluation index, and the processing quality of the target image processing effect may be determined according to the processing result.
In one implementation manner of this embodiment, S103 may specifically include: and carrying out weighting processing on each evaluation index included in the evaluation indexes of the target image processing effect, and determining the processing quality of the target image processing effect according to the weighting processing result.
Specifically, when the evaluation index of the target image processing effect includes all five evaluation indexes Score_Reg, Score_Grad, Score_Color, Score_Bright, and Score_Pearson, the evaluation indexes can be weighted. The specific calculation formula is as follows:

Score_Total = a0·Score_Reg − a1·Score_Grad − a2·Score_Color − a3·Score_Bright + a4·Score_Pearson    (14)

The indexes for which a smaller value indicates a better effect (Score_Grad, Score_Color, Score_Bright) enter with a minus sign, so that a larger Score_Total always indicates better processing quality.
wherein score_total represents an evaluation Score of the processing quality of the target image processing effect, and the larger the score_total value is, the higher the processing quality of the target image processing effect is, the better the effect is, and vice versa; a0 is the weight value of score_reg, a1 is the weight value of score_grad, a2 is the weight value of score_color, a3 is the weight value of score_bright, a4 is the weight value of score_pearson, and specific values of a0, a1, a2, a3 and a4 can be determined according to practical situations and empirical values, and the embodiment of the present application is not limited herein, and the value range is generally (0, 1).
In this way, Score_Reg, Score_Grad, Score_Color, Score_Bright, and Score_Pearson can each be used to evaluate one aspect of the target image processing effect, namely the character restoration effect and the gradient, color, brightness, and color distribution effects of the character smearing, while Score_Total evaluates the overall effect.
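A hedged sketch of combining the five indices into an overall Score_Total: the patent's exact weighting formula and weight values are not reproduced here, so the code assumes a weighted combination in which the "smaller is better" indices enter with a minus sign, and the weights a0-a4 are illustrative values in (0, 1).

```python
def score_total(reg, grad, color, bright, pearson,
                a0=0.4, a1=0.1, a2=0.2, a3=0.1, a4=0.2):
    """Weighted overall quality score; larger means better processing quality."""
    return a0 * reg - a1 * grad - a2 * color - a3 * bright + a4 * pearson

# 0.4*0.95 - 0.1*0.02 - 0.2*0.05 - 0.1*0.01 + 0.2*0.9 = 0.547
print(round(score_total(reg=0.95, grad=0.02, color=0.05,
                        bright=0.01, pearson=0.9), 3))  # 0.547
```

In practice the weights would be tuned on labelled examples, as the description notes that their values are determined empirically.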
In summary, in the method for evaluating an image processing effect provided in the present embodiment, first, a target image processing effect to be evaluated is obtained; the target image processing effect comprises a character smearing image of a character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image, and then an evaluation index of the target image processing effect is generated, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index and a fifth evaluation index, and the first evaluation index is generated according to the difference between the character smearing image of the character line in the target image and the character content of the character line; the second evaluation index is generated according to the character smearing image of the character row in the target image; the third evaluation index is generated according to the character area mask image of the character line in the target image; the fourth evaluation index is generated according to the character smearing image of the character row in the target image; the fifth evaluation index is generated according to the color distribution histogram of the character smear image of the character line in the target image, and further the processing quality of the target image processing effect can be determined according to the evaluation index.
Therefore, in the embodiment of the application, when the processing quality of the target image processing effect is evaluated, the association relationship among the character smearing image of the character line, the character region mask image of the character line and the color prediction result of the character line in the target image is considered, so that the processing quality of the image processing effect can be automatically and accurately evaluated by utilizing each evaluation index.
Second embodiment
The present embodiment will be described with respect to an apparatus for evaluating an image processing effect, and reference will be made to the above-described method embodiments.
Referring to fig. 6, a schematic diagram of the composition of an apparatus for evaluating an image processing effect according to the present embodiment is provided, and the apparatus 600 includes:
an acquiring unit 601, configured to acquire a target image processing effect to be evaluated; the target image processing effect comprises a character smearing image of the character line in the target image, a character area mask image of the character line in the target image and a color prediction result of the character line in the target image;
a generating unit 602, configured to generate an evaluation index of the target image processing effect, where the evaluation index includes at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index, and a fifth evaluation index;
Wherein the first evaluation index is generated according to the difference between the character smearing image of the character row and the character content of the character row in the target image; the second evaluation index is generated according to the character smearing image of the character row in the target image; the third evaluation index is generated according to the character area mask image of the character row in the target image; the fourth evaluation index is generated according to the character smearing image of the character row in the target image; the fifth evaluation index is generated according to a color distribution histogram of the character smearing image of the character row in the target image;
a determining unit 603 for determining a processing quality of the target image processing effect according to the evaluation index.
In one implementation of the present embodiment, the generating unit 602 includes:
a filling subunit, configured to fill a text region mask image of a text line in the target image by using a color prediction result of the text line in the target image, so as to obtain a filled text line image;
a first calculating subunit, configured to calculate a text editing distance between a text recognition result of the filled text line image and text content of the text line;
A second calculating subunit, configured to calculate an identification score of the text line according to the text editing distance;
and the third computing subunit is used for computing the average value of the recognition scores of all text lines in the target image as a first evaluation index of the target image processing effect.
In one implementation of the present embodiment, the generating unit 602 includes:
a fourth calculating subunit, configured to calculate an average value of gradients of the text smear images of all text lines in the target image;
the first generation subunit is used for generating a second evaluation index of the target image processing effect by using a preset gradient threshold value and an average value of gradients of the character smearing images of all text lines in the target image.
In one implementation of the present embodiment, the generating unit 602 includes:
the obtaining subunit is used for carrying out mask operation on the original image of the text line by utilizing the mask image of the text region of the text line in the target image to obtain a text line image without text;
a fifth calculating subunit, configured to calculate a color distance between the text region mask image of the text line and the text line image that does not include text;
And a sixth calculation subunit for calculating the score of the color distance as a third evaluation index of the target image processing effect.
In one implementation of the present embodiment, the generating unit 602 includes:
a seventh calculating subunit, configured to calculate a brightness distance between the text smear image of the text line and the text line image that does not include text;
an eighth calculation subunit for calculating a score of the luminance distance as a fourth evaluation index of the target image processing effect.
In one implementation of the present embodiment, the generating unit 602 includes:
a second generation subunit, configured to generate a color distribution histogram of a text smear image of a text line in the target image; generating a color distribution histogram of the text line image which does not contain characters;
a ninth calculating subunit, configured to calculate, as a fifth evaluation index of the target image processing effect, a pearson correlation coefficient score of a color distribution histogram of the text smear image of the text line in the target image and a color distribution histogram of the text line image that does not include the text.
In one implementation of the present embodiment, the determining unit 603 includes:
A weighting subunit, configured to perform weighting processing on each evaluation index included in the evaluation index of the target image processing effect;
and the determining subunit is used for determining the processing quality of the target image processing effect according to the weighted processing result.
Further, an embodiment of the present application further provides an evaluation device for an image processing effect, including: a processor, memory, system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform any one of the implementations of the method of evaluating image processing effects described above.
Further, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores instructions, and when the instructions run on a terminal device, the terminal device is caused to execute any implementation method of the evaluation method of the image processing effect.
Further, the embodiment of the application also provides a computer program product, which when being run on a terminal device, causes the terminal device to execute any implementation method of the evaluation method of the image processing effect.
From the above description of embodiments, it will be apparent to those skilled in the art that all or part of the steps of the above described example methods may be implemented in software plus necessary general purpose hardware platforms. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present description, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different manner from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An evaluation method for an image processing effect, comprising:
obtaining a target image processing effect to be evaluated, wherein the target image processing effect comprises a text smearing image of a text line in a target image, a text region mask image of the text line in the target image, and a text color prediction result of the text line in the target image;
generating an evaluation index of the target image processing effect, wherein the evaluation index comprises at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index, and a fifth evaluation index;
wherein the first evaluation index is generated according to the difference between the text smearing image of the text line in the target image and the text content of the text line; the second evaluation index is generated according to the text smearing image of the text line in the target image; the third evaluation index is generated according to the text region mask image of the text line in the target image; the fourth evaluation index is generated according to the text smearing image of the text line in the target image; the fifth evaluation index is generated according to a color distribution histogram of the text smearing image of the text line in the target image;
and determining the processing quality of the target image processing effect according to the evaluation index.
2. The method of claim 1, wherein generating the first evaluation index of the target image processing effect comprises:
filling the text region mask image of the text line in the target image with the text color prediction result of the text line in the target image to obtain a filled text line image;
calculating a text edit distance between a text recognition result of the filled text line image and the text content of the text line;
calculating a recognition score of the text line according to the text edit distance;
and calculating the average of the recognition scores of all text lines in the target image as the first evaluation index of the target image processing effect.
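Claim 2 leaves the mapping from edit distance to a recognition score unspecified. A minimal sketch, assuming a normalized-edit-distance score in [0, 1] (the normalization and the `recognition_score` name are illustrative choices, not fixed by the claim):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # deletion
                        dp[j - 1] + 1,        # insertion
                        prev + (a[i - 1] != b[j - 1]))  # substitution
            prev = cur
    return dp[n]

def recognition_score(recognized: str, ground_truth: str) -> float:
    # Hypothetical normalization: 1.0 means a perfect recognition match.
    if not ground_truth:
        return 1.0 if not recognized else 0.0
    d = edit_distance(recognized, ground_truth)
    return max(0.0, 1.0 - d / len(ground_truth))

def first_evaluation_index(pairs) -> float:
    # pairs: (OCR result of a filled text-line image, ground-truth text).
    scores = [recognition_score(r, t) for r, t in pairs]
    return sum(scores) / len(scores)
```

Averaging per-line scores rather than pooling all characters keeps short and long text lines equally weighted, which matches the claim's "average of the recognition scores of all text lines".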
3. The method of claim 1, wherein generating the second evaluation index of the target image processing effect comprises:
calculating the average gradient of the text smearing images of all the text lines in the target image;
and generating the second evaluation index of the target image processing effect from a preset gradient threshold and the average gradient of the text smearing images of all the text lines in the target image.
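The intuition behind claim 3 is that a well-smeared text region is smooth, so its average gradient is low. A sketch of one way to combine the average gradient with the threshold (the linear mapping to [0, 1] is an assumption; the claim only says the two quantities are used together):

```python
import numpy as np

def mean_gradient(img: np.ndarray) -> float:
    # Average gradient magnitude of a grayscale image (H x W, float).
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def second_evaluation_index(smeared_lines, grad_threshold: float) -> float:
    # Hypothetical scoring: a smooth (low-gradient) smeared region scores
    # close to 1, a textured one close to 0.
    g = np.mean([mean_gradient(im) for im in smeared_lines])
    return float(max(0.0, 1.0 - g / grad_threshold))
```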
4. The method of claim 1, wherein generating the third evaluation index of the target image processing effect comprises:
performing a mask operation on the original text line image using the text region mask image of the text line in the target image to obtain a text line image that does not contain text;
calculating the color distance between the text region mask image of the text line and the text line image that does not contain text;
and calculating a score from the color distance as the third evaluation index of the target image processing effect.
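A sketch of the masking and color-distance steps in claim 4. The Euclidean distance between mean RGB colors and the linear score mapping are illustrative choices; the claim does not fix a particular color metric:

```python
import numpy as np

def remove_text(line_img: np.ndarray, text_mask: np.ndarray) -> np.ndarray:
    # Zero out text pixels; mask is 1 on text, 0 on background.
    out = line_img.copy()
    out[text_mask.astype(bool)] = 0
    return out

def color_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Euclidean distance between the mean RGB colors of two images
    # (one possible metric among several).
    return float(np.linalg.norm(a.reshape(-1, 3).mean(0) -
                                b.reshape(-1, 3).mean(0)))

def third_evaluation_index(dist: float, scale: float = 255.0) -> float:
    # Map a color distance to a [0, 1] score: smaller distance, higher score.
    return max(0.0, 1.0 - dist / scale)
```

The brightness distance of claim 5 can be computed the same way by first converting both images to a single luminance channel.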
5. The method of claim 4, wherein generating the fourth evaluation index of the target image processing effect comprises:
calculating the brightness distance between the text smearing image of the text line and the text line image that does not contain text;
and calculating a score from the brightness distance as the fourth evaluation index of the target image processing effect.
6. The method of claim 4 or 5, wherein generating the fifth evaluation index of the target image processing effect comprises:
generating a color distribution histogram of the text smearing image of the text line in the target image, and a color distribution histogram of the text line image that does not contain text;
and calculating the Pearson correlation coefficient score between the two color distribution histograms as the fifth evaluation index of the target image processing effect.
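A sketch of claim 6's histogram comparison. The per-channel bin count and concatenation scheme are assumptions; the claim only requires a color distribution histogram for each image and their Pearson correlation:

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    # Per-channel histogram of an RGB image, concatenated and normalized.
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(np.float64)
    return h / h.sum()

def pearson_score(h1: np.ndarray, h2: np.ndarray) -> float:
    # Pearson correlation coefficient between two histograms, in [-1, 1];
    # values near 1 mean the smeared region's color distribution closely
    # matches the text-free background.
    return float(np.corrcoef(h1, h2)[0, 1])
```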
7. The method of any one of claims 1 to 6, wherein determining the processing quality of the target image processing effect according to the evaluation index comprises:
weighting each evaluation index included in the evaluation index of the target image processing effect;
and determining the processing quality of the target image processing effect according to the weighted result.
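The weighting in claim 7 can be sketched as a normalized weighted sum over whichever of the five indices were actually computed. The weight values themselves are design choices not fixed by the claim:

```python
def overall_quality(indices: dict, weights: dict) -> float:
    # indices: the evaluation indices that were computed, e.g.
    # {"first": 0.9, "third": 0.7}; weights: a weight per index name.
    # Normalizing by the weights of the available indices keeps the
    # result in the same range whether one index or all five are used.
    total_w = sum(weights[k] for k in indices)
    return sum(indices[k] * weights[k] for k in indices) / total_w
```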
8. An evaluation device for an image processing effect, comprising:
an acquisition unit, configured to acquire a target image processing effect to be evaluated, wherein the target image processing effect comprises a text smearing image of a text line in a target image, a text region mask image of the text line in the target image, and a text color prediction result of the text line in the target image;
a generation unit, configured to generate an evaluation index of the target image processing effect, the evaluation index comprising at least one of a first evaluation index, a second evaluation index, a third evaluation index, a fourth evaluation index, and a fifth evaluation index;
wherein the first evaluation index is generated according to the difference between the text smearing image of the text line in the target image and the text content of the text line; the second evaluation index is generated according to the text smearing image of the text line in the target image; the third evaluation index is generated according to the text region mask image of the text line in the target image; the fourth evaluation index is generated according to the text smearing image of the text line in the target image; the fifth evaluation index is generated according to a color distribution histogram of the text smearing image of the text line in the target image;
and a determining unit, configured to determine the processing quality of the target image processing effect according to the evaluation index.
9. An evaluation apparatus for an image processing effect, characterized by comprising: a processor, a memory, and a system bus;
wherein the processor and the memory are connected through the system bus;
and the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has instructions stored therein which, when run on a terminal device, cause the terminal device to perform the method of any one of claims 1 to 7.
CN202011642571.7A 2020-12-31 2020-12-31 Image processing effect evaluation method, device, storage medium and equipment Active CN112767318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642571.7A CN112767318B (en) 2020-12-31 2020-12-31 Image processing effect evaluation method, device, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN112767318A CN112767318A (en) 2021-05-07
CN112767318B true CN112767318B (en) 2023-07-25

Family

ID=75698717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011642571.7A Active CN112767318B (en) 2020-12-31 2020-12-31 Image processing effect evaluation method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112767318B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009151371A (en) * 2007-12-18 2009-07-09 Dainippon Screen Mfg Co Ltd Image processing apparatus, image processing method, and program
WO2019057067A1 (en) * 2017-09-20 2019-03-28 众安信息技术服务有限公司 Image quality evaluation method and apparatus
CN110428368A (en) * 2019-07-31 2019-11-08 北京金山云网络技术有限公司 Algorithm evaluation method and device, electronic equipment and readable storage medium
CN111062884A (en) * 2019-12-05 2020-04-24 Oppo广东移动通信有限公司 Image enhancement method and device, storage medium and terminal equipment
CN112102309A (en) * 2020-09-27 2020-12-18 中国建设银行股份有限公司 Method, device and equipment for determining image quality evaluation result

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Text localization in web images using MSER and local binarization; Liu Meihua; Fu Caiming; Liang Kaijian; Zhou Xifeng; Optoelectronics · Laser (06); full text *

Also Published As

Publication number Publication date
CN112767318A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
US9483835B2 (en) Depth value restoration method and system
US8233725B2 (en) Adaptive sampling region for a region editing tool
CN108764358B (en) Terahertz image identification method, device and equipment and readable storage medium
CN109800817B (en) Image classification method based on fusion semantic neural network
CN110390643B (en) License plate enhancement method and device and electronic equipment
CN111860369A (en) Fraud identification method and device and storage medium
US8081196B2 (en) Assisted adaptive region editing tool
CN108647696B (en) Picture color value determining method and device, electronic equipment and storage medium
CN113705579A (en) Automatic image annotation method driven by visual saliency
CN110990617B (en) Picture marking method, device, equipment and storage medium
CN113989127A (en) Image contrast adjusting method, system, equipment and computer storage medium
CN113158977B (en) Image character editing method for improving FANnet generation network
CN112767318B (en) Image processing effect evaluation method, device, storage medium and equipment
CN111091580B (en) Stumpage image segmentation method based on improved ResNet-UNet network
CN113361530A (en) Image semantic accurate segmentation and optimization method using interaction means
US7826668B1 (en) Adaptive region editing tool
CN111368838A (en) Method and device for identifying reported screenshot
CN114841974A (en) Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium
Fateh et al. Color reduction in hand-drawn Persian carpet cartoons before discretization using image segmentation and finding edgy regions
CN112907605B (en) Data enhancement method for instance segmentation
CN112084953B (en) Face attribute identification method, system, equipment and readable storage medium
CN113112515B (en) Evaluation method for pattern image segmentation algorithm
CN116912918B (en) Face recognition method, device, equipment and computer readable storage medium
CN112528996B (en) Picture processing method, apparatus, device, storage medium and computer program product
Suleiman Image Enhancement for Scanned Historical Documents in the Presence of Multiple Degradations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant