CN111881959A - Method and device for identifying image difference - Google Patents

Method and device for identifying image difference

Info

Publication number
CN111881959A
Authority
CN
China
Prior art keywords
image
line
cell
path
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010689863.XA
Other languages
Chinese (zh)
Other versions
CN111881959B (en)
Inventor
靳海亮
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202010689863.XA
Publication of CN111881959A
Application granted
Publication of CN111881959B
Active legal status
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

The specification discloses a method and a device for identifying image differences. For each line of the two images to be compared, dimension reduction processing is performed on the pixels of the line according to a preset dimension, and a line information code corresponding to the line is determined according to the dimension reduction result. A comparison state table is then established from the line information codes of the two images, and a sub-path is set on the diagonal of each cell whose two line information codes have a similarity greater than a preset value. Finally, taking passing through the fewest cell edges as the optimization target, a path between the head cell and the tail cell of the table is determined, and the image differences are identified according to the path. Because no pixel-value comparison is performed, robustness is better; and because the differences between the two images are identified by establishing a comparison table and computing the minimum-cost path, what is identified is the difference between similar lines rather than individual differing pixels or line images, which reflects the differences of a UI interface more clearly.

Description

Method and device for identifying image difference
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for identifying image differences.
Background
Currently, in the development of a User Interface (UI), when the interface layout or interface style is modified, it is necessary to determine whether the expected change has occurred in the modified UI. The conventional approach is for a user to acquire images of the UI before and after the modification and manually check the differences between the two images.
In the prior art, to reduce manual effort and avoid the errors and omissions of manual inspection, an image comparison method is used to identify image differences. Specifically, the two images to be compared have the same size. According to the coordinates of the pixel points in the images, the pixel values of the pixel points at the same coordinates in the two images are compared, and if the difference between the two pixel values is greater than a preset value, the pixel point at that coordinate is marked. After the pixel points of all coordinates in the two images have been traversed, the user can determine the differences between the two images from the marks on the images.
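As a rough illustration (not code from the patent), the prior-art pixel-by-pixel comparison described above can be sketched as follows; the function name and the threshold value are illustrative assumptions:

```python
import numpy as np

def pixelwise_diff_mask(img_a: np.ndarray, img_b: np.ndarray, preset: int = 10) -> np.ndarray:
    """Prior-art style comparison: the two images must have the same size;
    pixels at the same coordinates are compared, and a coordinate is marked
    when the difference between the two pixel values exceeds a preset value."""
    if img_a.shape != img_b.shape:
        raise ValueError("the two images to be compared must have the same size")
    diff = np.abs(img_a.astype(np.int64) - img_b.astype(np.int64))
    return diff > preset  # True marks an identified (differing) pixel
```

As the background notes, such a mask marks both the old and the new position of a moved element, which is why this specification replaces it with line-level comparison.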
This pixel-value comparison method determines the differences between pixel points in the two images. However, in UI testing or UI development, the interface elements in the UI usually do not change; only their positions or the layout change. The prior art marks both the original position and the changed position of an interface element, but what actually needs to be determined is whether a similar element has moved the expected distance, and the prior-art marks cannot clearly reflect this.
Disclosure of Invention
The method and device for identifying image differences provided by the embodiments of the present specification are used to partially solve the above problems in the prior art.
The embodiment of the specification adopts the following technical scheme:
the method for identifying image differences provided by the specification comprises the following steps:
determining a first image and a second image which need to be compared;
for each line of images in the two images, performing dimension reduction processing on the pixels of the line according to a preset dimension, and determining a line information code corresponding to the line according to the dimension reduction result;
establishing a comparison state table of the line information codes by respectively taking the line information codes corresponding to the first image and the line information codes corresponding to the second image as rows and columns, wherein each edge of a cell in the comparison state table is a sub-path;
for each cell in the comparison state table, if the similarity between the two line information codes corresponding to the cell is greater than a preset value, establishing a sub-path on the diagonal of the cell;
determining, according to each sub-path of the comparison state table, a path composed of sub-paths between the head cell and the tail cell of the comparison state table, with passing through the fewest cell edges as the optimization target;
and identifying the differences between the two images according to the line images corresponding to the cell edges that the determined path passes through.
Optionally, for each line of images in the two images, performing dimension reduction processing on pixels of the line of images according to a preset dimension, and before determining line information encoding corresponding to the line of images according to a result of the dimension reduction processing, the method further includes:
and respectively converting the two images into single-channel images.
Optionally, determining the line information code corresponding to the line image according to the result of the dimension reduction processing specifically includes:
determining a line vector corresponding to the line of image obtained after the dimension reduction processing;
for each row element in the row vector, judging whether the numerical value of the row element is larger than the average value of the row elements of the row vector, if so, setting the row element as a first specified value, and if not, setting the row element as a second specified value;
and determining the line information code corresponding to the line image.
Optionally, establishing a comparison state table of the row information codes by using the row information codes corresponding to the first image and the row information codes corresponding to the second image as rows and columns respectively, specifically including:
determining row information codes corresponding to each column of a comparison state table according to the sequence of the images of each row of the first image from top to bottom;
determining a row information code corresponding to each row of the comparison state table according to the sequence of the images of each row of the second image from top to bottom;
and establishing the comparison state table, wherein each cell in the comparison state table corresponds to one line of information code in the first image and one line of information code in the second image, and the cell is used for representing the similarity of the two line of information codes.
Optionally, if the similarity between two line information codes corresponding to the cell is greater than a preset value, establishing a sub-path on a diagonal line of the cell, specifically including:
determining the number of codes with the same code position and the same code value according to the two line information codes corresponding to the cell;
and if the ratio of the determined coding quantity to the preset dimension is larger than a preset value, establishing a sub-path on the diagonal line of the cell.
Optionally, determining, according to each sub-path of the comparison state table, a path between the head cell and the tail cell of the comparison state table with passing through the fewest cell edges as the optimization target specifically includes:
determining, through a difference algorithm and according to each sub-path of the comparison state table, the path from the head cell to the tail cell of the comparison state table that passes through the fewest cell edges.
Optionally, identifying the differences between the two images according to the line images corresponding to the cell edges that the determined path passes through specifically includes:
determining, according to the determined path and for each sub-path that goes downward along a cell edge, the corresponding line image and adding a line-increase identifier;
and determining, for each sub-path that goes rightward along a cell edge, the corresponding line image and adding a line-decrease identifier.
The device for identifying image difference provided by the specification comprises:
the determining module is used for determining a first image and a second image which need to be compared;
the encoding module is used for performing dimension reduction processing on pixels of the line of images according to a preset dimension aiming at each line of images in the two images and determining line information encoding corresponding to the line of images according to a dimension reduction processing result;
the table building module is used for establishing a comparison state table of the line information codes by respectively taking the line information codes corresponding to the first image and the line information codes corresponding to the second image as rows and columns, wherein each edge of a cell in the comparison state table is a sub-path;
the similarity calculation module is used for, for each cell in the comparison state table, establishing a sub-path on the diagonal of the cell if the similarity between the two line information codes corresponding to the cell is greater than a preset value;
the path planning module is used for determining, according to the sub-paths of the comparison state table, a path composed of sub-paths between the head cell and the tail cell of the comparison state table, with passing through the fewest cell edges as the optimization target;
and the difference identification module is used for identifying the differences between the two images according to the line images corresponding to the cell edges that the determined path passes through.
A computer-readable storage medium, storing a computer program which, when executed by a processor, implements any of the methods described above.
The electronic device provided by the present specification comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements any of the above methods when executing the program.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
For each line of the two images to be compared, dimension reduction processing is performed on the pixels of the line according to a preset dimension, and a line information code corresponding to the line is determined according to the dimension reduction result; a comparison state table of the line information codes of the two images is then established, and a sub-path is set on the diagonal of each cell whose two line information codes have a similarity greater than the preset value; finally, taking passing through the fewest cell edges as the optimization target, a path from the head cell to the tail cell of the table is determined, and the differences between the two images are identified according to the line images corresponding to the cell edges that the path passes through. Compared with the prior art, the method provided by the specification has better robustness because no pixel-value comparison is performed; and because the differences between the two images are identified by establishing a comparison table and computing the minimum-cost path, what is identified is the difference between similar lines rather than individual differing pixels or line images, which reflects the differences of a UI interface more clearly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a prior art identification difference;
FIG. 2 is a schematic flow chart illustrating an exemplary process for identifying image differences according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of line information encoding corresponding to an image provided in the present specification;
FIG. 4 is a schematic diagram of a comparison state table provided herein;
FIG. 5 is a schematic diagram of identification differences provided by embodiments of the present description;
fig. 6 is a schematic structural diagram of an apparatus for identifying image differences provided in an embodiment of the present specification;
fig. 7 is a schematic diagram of an electronic device implementing a method for identifying an image difference according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present application.
In the prior art, since pixel values are compared only at the same coordinate positions in the images, the identified difference content is too complex in most scenarios, making it difficult for the user to obtain the required information, as shown in fig. 1.
Fig. 1 is a schematic diagram of identifying differences in the prior art. The two images at the top are the images to be compared; both can be seen to include the characters ABC, but the positions of the characters in the two images differ. The content shown at the bottom is what the conventional comparison method determines: the differences are marked, and the positions where the characters overlap are not marked. However, differences marked in this way cannot clearly show how the positions of the elements differ between the two images.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a process for identifying image differences according to an embodiment of the present disclosure, where the process includes:
s100: a first image and a second image that need to be compared are determined.
Image comparison is generally performed by a terminal (e.g., a mobile phone, a tablet computer, or a personal computer) because the process of identifying differences in the images is simple, but it may also be performed by a server. Likewise, in this specification, either the terminal or the server may execute the flow for identifying image differences; this specification does not limit which one, and the terminal is used as the execution subject in the following description.
Specifically, when performing image comparison, the two images to be compared must first be determined, so the terminal can acquire the first image and the second image. The first image and the second image may be UI images captured during UI development and testing, e.g., before and after a developer changes the UI layout. For convenience of description, the first image is defined as the UI image before the change and the second image as the UI image after the change.
S102: and performing dimension reduction processing on pixels of the line of images according to a preset dimension aiming at each line of images in the two images, and determining a line information code corresponding to the line of images according to a dimension reduction processing result.
In this specification, after acquiring the two images, the terminal may, for each line of each image, perform dimension reduction processing on the line according to a preset dimension, so as to determine a line information code from the dimension reduction result. The dimension reduction mainly fuses pixel values, so that each element of the result is fused from the pixel values of several pixel points, which improves the robustness of the line information code in the subsequent similarity calculation. The fused values are then encoded to simplify the calculation, reduce the computational complexity, and improve the efficiency of identifying image differences.
Specifically, in this specification, the terminal may determine the values of a line vector by recombining the pixel values of a plurality of adjacent pixel points in each line of the image. For example, for an n × 1 line image, the pixel values may be convolved with an m × 1 convolution kernel, where m < n. Alternatively, the line vector corresponding to the line image may be determined with a reshape function. Assuming the preset dimension is K, a K × 1 line vector can then be determined for each n × 1 line image as the dimension reduction result.
Then, for each element of the line vector, the terminal judges whether the value of the element is greater than the average value of the elements of that line vector; if so, the element is set to a first specified value, and if not, to a second specified value. That is, the terminal determines, for each line vector, the average of its elements, and rewrites each element to the first specified value when it exceeds the average and to the second specified value otherwise. For convenience of calculation, the first specified value may be 1 and the second specified value 0, so that the line information code obtained from the line vector is a string composed of 0s and 1s.
Fig. 3 is a schematic diagram of the line information codes corresponding to an image provided in this specification. Assuming the preset dimension is 64, each line vector is 64 × 1 and the determined line information code is a 64-bit string of 0s and 1s. Further assuming the image has X lines in total, a 64 × X matrix can be determined, as shown in fig. 3.
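Under the assumptions above (preset dimension 64, first specified value 1, second specified value 0), the line information coding of step S102 can be sketched as below. The block-averaging reduction is one possible choice, since the specification also allows an m × 1 convolution or a reshape; function and parameter names are illustrative:

```python
import numpy as np

def row_info_code(row_pixels: np.ndarray, k: int = 64) -> np.ndarray:
    """Reduce one single-channel line of n pixels to a k-bit line information
    code: block-average the line down to k elements, then set each element to
    1 if it exceeds the mean of the line vector and to 0 otherwise."""
    n = row_pixels.size
    pad = (-n) % k  # pad so the line splits evenly into k blocks
    padded = np.pad(row_pixels.astype(float), (0, pad), mode="edge")
    reduced = padded.reshape(k, -1).mean(axis=1)  # k-dimensional line vector
    return (reduced > reduced.mean()).astype(np.uint8)  # first/second specified values
```

Applying this to every line of an image with X lines yields the 64 × X matrix of fig. 3.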
S104: and establishing a comparison state table of the row information codes by respectively taking the row information codes corresponding to the first image and the row information codes corresponding to the second image as rows and columns, wherein each edge of a cell in the comparison state table is a sub-path.
In this specification, after determining the row information codes corresponding to the respective row images included in the first image and the second image, the terminal may establish a comparison state table of the row information codes, and each edge of a cell in the comparison state table is a sub-path.
Specifically, the terminal may determine the line information code corresponding to each column of the comparison state table according to the top-to-bottom order of the lines of the first image. Similarly, the line information code corresponding to each row of the comparison state table is determined according to the top-to-bottom order of the lines of the second image, as shown in table 1, where the blank cells form the comparison state table.
[Table 1: comparison state table. The first column lists the line information codes of the first image, the first row lists the line information codes of the second image, and the interior cells are blank.]
TABLE 1
The first column in table 1 represents the line information codes of the first image, the first row represents the line information codes of the second image, and each blank cell represents the comparison between the corresponding line information code of the first image and the corresponding line information code of the second image.
And each cell in the comparison state table corresponds to one line of information code in the first image and one line of information code in the second image, and the cell is used for representing the similarity of the two line of information codes.
S106: and aiming at each cell in the comparison state table, if the similarity between two line information codes corresponding to the cell is greater than a preset value, establishing a sub-path on the diagonal line of the cell.
In this specification, since each cell represents the similarity of two line information codes, the terminal may, for each cell, calculate the similarity between the line information code of the first image and the line information code of the second image corresponding to the cell, and judge whether the similarity is greater than a preset value; if so, a sub-path is established on the diagonal of the cell, and otherwise no diagonal sub-path is established.
Specifically, since the similarity calculation is performed on two strings composed of 0s and 1s, the similarity of two line information codes can be determined by, for example, calculating the Euclidean distance or the cosine similarity of the corresponding vectors. This specification does not limit the specific way of calculating the similarity; any existing method for calculating the similarity between two strings may be used.
And then, if the similarity between two line information codes corresponding to the cell is greater than a preset value, establishing a sub-path on the diagonal line of the cell.
Or, the terminal can also determine the similarity of the two line information codes by adopting a simple alignment comparison mode. Specifically, the terminal can determine the number of codes with the same code position and the same code value according to two line information codes corresponding to the cell, and judge whether the ratio of the determined number of codes to the preset dimension is greater than a preset value, if so, a sub-path is established on a diagonal line of the cell, and if not, the sub-path is not established.
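The alignment-style similarity check just described can be sketched as follows; the threshold of 0.9 is an illustrative assumption for the preset value, and the function name is hypothetical:

```python
import numpy as np

def diagonal_subpath(code_a, code_b, preset: float = 0.9) -> bool:
    """Count the positions at which the two line information codes carry the
    same value, and establish a diagonal sub-path for the cell when the ratio
    of that count to the preset dimension exceeds the preset value."""
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    same = int(np.sum(code_a == code_b))
    return same / code_a.size > preset
```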
Fig. 4 is a schematic diagram of a comparison state table provided in the present specification. And establishing sub-paths on the diagonal lines of the plurality of cells according to the determined similarity.
S108: and according to each sub-path of the comparison state table, determining a path composed of each sub-path between a head table of the comparison state table and a tail table of the comparison state table by taking the edge passing through the least unit table as an optimization target.
In this specification, after step S106, the difference between the first image and the second image can be determined according to the comparison state table and identified in the subsequent steps.
Specifically, taking the comparison state table shown in table 1 as an example, the vertical edge of a cell indicates that the line image of the first image corresponding to the cell is not similar to the line image of the second image corresponding to the cell, and a line needs to be added to the first image in order to "find" a line image similar to that of the second image. For example, if line L1 of the first image is dissimilar to line L1 of the second image, then after proceeding along the sub-path on the vertical edge of the cell (first image L1, second image L1), the comparison continues by checking whether line L2 of the first image is similar to line L1 of the second image, and so on.
Similarly, the horizontal edge of a cell indicates that the line image of the first image corresponding to the cell is not similar to the line image of the second image corresponding to the cell, and the corresponding line needs to be removed from the second image in order to "find" a similar line image. The diagonal indicates, according to the operation of step S106, that the line image of the first image corresponding to the cell is similar to that of the second image. For example, if line L1 of the first image is dissimilar to line L1 of the second image, then after proceeding along the sub-path on the horizontal edge of the cell (first image L1, second image L1), the comparison continues by checking whether line L1 of the first image is similar to line L2 of the second image, and so on.
Since what is usually encountered in UI testing and development is a layout change, and two images with completely different contents are not compared, the process of proceeding along the sub-paths corresponding to cell edges can be regarded as a process of finding the lines in which the first image and the second image differ, and the path determined through this process can be used to determine the differences between the two images.
Further, since the sub-path is established on the diagonal line of the cell in the previous step S106, and the line images of the first image and the second image corresponding to the cell are similar, in the process of determining the path according to the sub-path of the comparison state table, if the two line images corresponding to the cell on the path are determined to be similar, the sub-path of the diagonal line can be followed.
Therefore, the terminal can determine a path between the head cell and the tail cell of the comparison state table, with passing through the fewest cell edges as the optimization target; this path indicates which lines must be added to or removed from the first image or the second image so that the two images have the same line images.
Specifically, the terminal may determine, through a difference algorithm and according to each sub-path of the comparison state table, the path from the head cell to the tail cell of the comparison state table that passes through the fewest cell edges.
Further, among the possible paths, several paths may pass through the same, minimal number of cell edges. To reduce the complexity of subsequently identifying the differences between the two images, the terminal may further select, from the paths that pass through the fewest cell edges, the path composed of the fewest sub-paths (counting both the sub-paths on cell edges and the sub-paths on cell diagonals) as the determined path.
Therefore, the process of step S108 may also include: determining, through a difference algorithm and according to each sub-path of the comparison state table, the paths from the head cell to the tail cell that pass through the fewest cell edges, and taking these as candidate paths; and, when at least two candidate paths are determined, selecting from them the candidate path composed of the fewest sub-paths as the finally determined path.
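The specification does not fix the "difference algorithm". One straightforward possibility, sketched below under assumed names, is a dynamic program over the comparison state table in which diagonal sub-paths cost nothing and each cell edge costs one, which realizes the fewest-cell-edges target (a Myers-style diff would serve equally well; the tie-break on fewest sub-paths from the previous paragraph is omitted for brevity):

```python
def shortest_diff_path(sim):
    """sim[i][j] is True when a diagonal sub-path exists in cell (i, j), i.e.
    when table row i and table column j hold similar line information codes
    (one possible orientation: rows from the second image, columns from the
    first). Returns the minimal number of cell edges on a head-to-tail path
    and the sub-path sequence: '=' diagonal, '+' downward edge, '-' rightward."""
    r = len(sim)
    c = len(sim[0]) if r else 0
    INF = float("inf")
    cost = [[INF] * (c + 1) for _ in range(r + 1)]
    move = [[None] * (c + 1) for _ in range(r + 1)]
    cost[0][0] = 0
    for i in range(r + 1):
        for j in range(c + 1):
            base = cost[i][j]
            if base == INF:
                continue
            if i < r and j < c and sim[i][j] and base < cost[i + 1][j + 1]:
                cost[i + 1][j + 1], move[i + 1][j + 1] = base, "="  # diagonal: free
            if i < r and base + 1 < cost[i + 1][j]:
                cost[i + 1][j], move[i + 1][j] = base + 1, "+"      # vertical edge
            if j < c and base + 1 < cost[i][j + 1]:
                cost[i][j + 1], move[i][j + 1] = base + 1, "-"      # horizontal edge
    # Walk back from the tail cell to recover the sequence of sub-paths.
    path, i, j = [], r, c
    while i or j:
        m = move[i][j]
        path.append(m)
        if m == "=":
            i, j = i - 1, j - 1
        elif m == "+":
            i -= 1
        else:
            j -= 1
    return cost[r][c], path[::-1]
```

For two images whose line codes differ by one inserted line, this finds a path with exactly one cell edge, matching the optimization target of step S108.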
S110: and identifying the difference of the two images according to the line image corresponding to the edge of the cell passing through the determined path.
Finally, the terminal can determine, according to the cell edges that the determined path passes through, where the first image and the second image differ in their line images, and represent the differences between the two images according to the line images corresponding to those cell edges.
Specifically, according to the explanation of step S108, the sub-paths of the path that lie on cell edges represent the differences between the first image and the second image, so which line images of the two images cause the content difference can be determined from the cell edges the path passes through. According to the determined path, the terminal may, for each sub-path that goes downward along a cell edge, determine the corresponding line image in the second image and add a line-increase identifier, and, for each sub-path that goes rightward along a cell edge, determine the corresponding line image and add a line-decrease identifier.
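Continuing the sketch of the difference algorithm (assumed names, not from the specification), the sub-path sequence of the determined path can be turned into line identifiers as described above; which image each index refers to depends on the chosen table orientation:

```python
def mark_differences(path):
    """Map the sub-path sequence of the determined path to identifiers:
    each downward edge ('+') yields a line-increase identifier and each
    rightward edge ('-') a line-decrease identifier; diagonal sub-paths
    ('=') mark similar lines and produce no identifier."""
    increased, decreased = [], []
    i = j = 0  # i: line index along the table rows, j: along the table columns
    for move in path:
        if move == "=":
            i += 1
            j += 1
        elif move == "+":
            increased.append(i)  # this line gets a line-increase identifier
            i += 1
        else:
            decreased.append(j)  # this line gets a line-decrease identifier
            j += 1
    return increased, decreased
```

For a path ["=", "+", "=", "="], this returns ([1], []), i.e. a single line-increase identifier on line 1.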
Taking the first image and the second image shown in fig. 1 as an example, with the method provided in this specification the differences identified in the second image can be as shown in fig. 5. Since the identified area is not the specific differing content but the area causing the difference, whether a UI adjustment has achieved the intended effect can be determined more intuitively during UI development or UI testing. Fig. 5 also shows the comparison state table and the corresponding path used to identify the differences: in top-to-bottom order, the upper lines of the two images are similar, so the determined path proceeds along the sub-paths on cell diagonals; when dissimilar lines appear in the middle, the path proceeds along sub-paths on vertical cell edges; and when the boundary of the comparison state table is reached, the path follows the boundary to the end point. For example, if an interface element in the UI is shifted down by 2 pixels, with the method provided in this specification whether the adjustment meets the expectation can be judged by whether the identified line images comprise 2 lines, which is more convenient than the prior art.
It should be noted that the comparison state table in fig. 5 is a schematic diagram in which the thick line is the determined path. After the comparison state table passes through step S106, diagonal lines indicating that line images are similar have been added to cells at various positions, and the path corresponding to the thick line in fig. 5 is the shortest path, i.e., the one passing through the fewest cell edges.
Further, in the example of fig. 5 above, after the boundary is reached, the path still proceeds along cell edges on the boundary to reach the end point, but the line images of the two images corresponding to those boundary cells are similar (e.g., both are blank lines in fig. 5). Therefore, to avoid identifying similar line images as differences, in this specification, before identifying differences according to the sub-paths passing along cell edges in the path, the terminal may determine whether the two line images corresponding to the cell to which each such sub-path belongs are similar. If they are similar, the terminal does not add a difference identifier; if they are not, the terminal determines that the line images are actually different and adds the difference identifier.
Continuing with the example of fig. 5, the line images corresponding to the cell of the last sub-path in the path are similar, so that sub-path is not identified as a difference in the image to the left of the comparison state table.
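The similarity check before adding identifiers can be sketched as below. `is_similar` is a hypothetical interface standing in for whatever line-image comparison the terminal performs, and the `(move, i, j)` path layout is an assumption for illustration.

```python
def mark_if_different(path, is_similar):
    # is_similar(i, j): True when line i of the first image and line j of
    # the second image are similar (e.g. both blank); such edge sub-paths
    # get no identifier even though they run along cell edges.
    added, removed = [], []
    for move, i, j in path:
        if move in ("down", "right") and is_similar(i, j):
            continue                      # similar lines: skip identification
        if move == "down":
            added.append(j)               # line added in the second image
        elif move == "right":
            removed.append(i)             # line removed from the first image
    return added, removed
```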
Of course, it should be noted that the image in which the differences are identified in fig. 5 displays the first image and the second image in an overlapping manner, that is, it contains two A as well as B and C; this is only for convenience of showing the difference between the first image and the second image. In other embodiments, since the second image is the image changed relative to the first image, the terminal may identify the differences in the second image only.
Based on the method for identifying image differences shown in fig. 2, for each line of images in the two images to be compared, dimension reduction processing is performed on the pixels of that line according to a preset dimension, and the line information code corresponding to the line is determined from the dimension reduction result. A comparison state table of the line information codes of the two images is then established, and a sub-path is set on the diagonal of each cell whose two line information codes have a similarity greater than the preset value. Finally, a path from the head cell to the tail cell of the table is determined with passing through the fewest cell edges as the optimization target, and the differences between the two images are identified according to the line images corresponding to the cell edges the path passes through. Compared with the prior art, the method provided in this specification does not compare pixel values directly and therefore has better robustness. By establishing a comparison table and computing a minimum-cost path, it identifies the differences between similar lines of the two images, so that what is marked is the area causing the difference rather than individual differing pixels or line images, reflecting UI differences more clearly.
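As a concrete illustration of the whole pipeline on toy data, the sketch below encodes each line of two "images" (lists of pixel rows) and then diffs the per-line codes. `difflib.SequenceMatcher` stands in for the comparison-state-table path search (both compute a minimum edit script), and the pooling/binarization scheme is one plausible reading of the dimension reduction step, not the patent's exact method.

```python
import difflib

def line_code(row, dim=4):
    # average-pool the pixel row down to `dim` buckets, then binarize
    # each bucket against the mean of the reduced vector
    n = len(row)
    reduced = [sum(row[k * n // dim:(k + 1) * n // dim]) / (n // dim)
               for k in range(dim)]
    mean = sum(reduced) / dim
    return tuple(1 if v > mean else 0 for v in reduced)

def identify_added_lines(img_a, img_b, dim=4):
    # diff the per-line codes; lines present only in img_b are "added"
    codes_a = [line_code(r, dim) for r in img_a]
    codes_b = [line_code(r, dim) for r in img_b]
    sm = difflib.SequenceMatcher(a=codes_a, b=codes_b, autojunk=False)
    return [j for tag, _, _, j1, j2 in sm.get_opcodes()
            if tag in ("insert", "replace") for j in range(j1, j2)]
```

For instance, inserting a duplicate line into the second image is reported as one added line index in that image.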
In addition, before step S102 in this specification, the terminal may further perform image processing on the first image and the second image to convert multi-channel images into single-channel images, for example converting a color image into a grayscale image. This reduces the complexity of the processing in step S102 and thereby further improves identification efficiency.
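A minimal pure-Python sketch of this channel reduction follows; the BT.601 luma weights are a standard choice assumed here, not specified by the patent.

```python
def to_single_channel(pixels):
    # pixels: one list of (R, G, B) tuples per image line; returns the
    # same layout with a single gray value per pixel (BT.601 weights)
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]
```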
Further, the process of establishing the comparison state table of line information codes described in step S104 ultimately determines what differences the second image has relative to the first image, that is, what changes occur in the second image on the basis of the first image.
Similarly, in step S104 the terminal may instead determine the line information code corresponding to each column of the comparison state table according to the top-to-bottom order of the lines of the second image, and determine the line information code corresponding to each row of the comparison state table according to the top-to-bottom order of the lines of the first image, so as to finally determine the changes of the first image on the basis of the second image.
Furthermore, in this specification, when the comparison state table is established from the line information codes corresponding to the lines of the first image and the second image, the line information codes corresponding to the columns and the line information codes corresponding to the rows of the comparison state table need to be determined in the same order. That is, if the line information codes corresponding to the columns are determined in top-to-bottom line order, the line information codes corresponding to the rows must also be determined in top-to-bottom line order; if the columns are determined in bottom-to-top line order, the rows must likewise be determined in bottom-to-top line order. Of course, if a subsequent step identifies the differences between the two images based on a comparison state table determined in bottom-to-top order, the positions at which the difference identifiers appear will differ accordingly.
Based on the process of identifying image differences shown in fig. 2, an embodiment of the present specification further provides a schematic structural diagram of an apparatus for identifying image differences, as shown in fig. 6.
Fig. 6 is a schematic structural diagram of an apparatus for identifying an image difference provided in an embodiment of the present specification, where the apparatus includes:
a determining module 200, configured to determine a first image and a second image to be compared;
the encoding module 202 is configured to perform, for each line of images in the two images, dimension reduction processing on the pixels of the line of images according to a preset dimension, and determine the line information code corresponding to the line of images according to the dimension reduction processing result;
the table building module 204 is configured to build a comparison state table of line information codes according to the line information codes corresponding to the line images of the two images, where each edge of a cell in the comparison state table is a sub-path;
a similarity calculation module 206, configured to, for each cell in the comparison state table, if the similarity between two line information codes corresponding to the cell is greater than a preset value, establish a sub-path on a diagonal line of the cell;
the path planning module 208 is configured to determine, according to the sub-paths of the comparison state table, a path between the head cell of the comparison state table and the tail cell of the comparison state table, with passing through the fewest cell edges as the optimization target;
and the identification difference module 210 is configured to identify the differences between the two images according to the line images corresponding to the cell edges the determined path passes through.
Optionally, the encoding module 202 is further configured to convert the two images into single-channel images before performing, for each line of images in the two images, the dimension reduction processing on the pixels of that line according to the preset dimension and determining the line information code corresponding to the line from the dimension reduction result.
Optionally, the encoding module 202 determines the line vector corresponding to the line image obtained after the dimension reduction processing, and for each element of the line vector, determines whether the value of the element is greater than the average of the elements of the line vector; if so, the element is set to a first specified value, and if not, to a second specified value, thereby determining the line information code corresponding to the line image.
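This mean-thresholding can be sketched as below, assuming a simple average-pooling step as the preceding dimension reduction (the patent does not fix the reduction method) and using 1 and 0 as the first and second specified values. The sketch assumes the line has at least `dim` pixels.

```python
def line_information_code(row_pixels, dim=8):
    # reduce the line to `dim` values by averaging equal slices, then
    # set each element to 1 if above the mean of the vector, else 0
    n = len(row_pixels)
    reduced = []
    for k in range(dim):
        lo, hi = k * n // dim, (k + 1) * n // dim
        bucket = row_pixels[lo:hi]
        reduced.append(sum(bucket) / len(bucket))
    mean = sum(reduced) / dim
    return tuple(1 if v > mean else 0 for v in reduced)
```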
Optionally, the table building module 204 determines the line information code corresponding to each column of the comparison state table according to the top-to-bottom order of the lines of the first image, determines the line information code corresponding to each row of the comparison state table according to the top-to-bottom order of the lines of the second image, and establishes the comparison state table, where each cell of the comparison state table corresponds to one line information code of the first image and one line information code of the second image and is used to represent the similarity between the two line information codes.
Optionally, the similarity calculation module 206 determines, from the two line information codes corresponding to a cell, the number of code positions at which the code values are the same, and establishes a sub-path on the diagonal of the cell if the ratio of that number to the preset dimension is greater than the preset value.
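The position-wise comparison can be sketched as follows; the 0.7 threshold is an illustrative preset value, not one from the patent.

```python
def has_diagonal(code_a, code_b, preset_dim=8, preset_value=0.7):
    # count code positions where both codes carry the same value; the
    # cell gets a diagonal sub-path when the agreeing fraction exceeds
    # the preset value
    same = sum(1 for a, b in zip(code_a, code_b) if a == b)
    return same / preset_dim > preset_value
```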
Optionally, the path planning module 208 determines, according to the sub-paths of the comparison state table, the path passing through the fewest cell edges from the head cell to the tail cell of the comparison state table by means of a difference (diff) algorithm.
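One way to realize "fewest cell edges" is a 0-1 shortest path over the table's corner lattice: diagonal sub-paths (similar lines) cost nothing, while sub-paths along cell edges cost one. The sketch below uses a 0-1 BFS; the patent only names "a difference algorithm", so this concrete search is an assumption (any shortest-edit-script algorithm, such as Myers's, yields the same minimum cost).

```python
from collections import deque

def fewest_cell_edges(similar):
    # similar[i][j]: True when cell (i, j) has a diagonal sub-path,
    # i.e. line i of the first image and line j of the second are similar.
    # Returns the minimum number of cell edges on a path from the head
    # corner (0, 0) to the tail corner (rows, cols) of the table.
    rows, cols = len(similar), len(similar[0])
    INF = float("inf")
    dist = [[INF] * (cols + 1) for _ in range(rows + 1)]
    dist[0][0] = 0
    dq = deque([(0, 0)])
    while dq:
        i, j = dq.popleft()
        d = dist[i][j]
        # diagonal move: crosses no cell edge (cost 0)
        if i < rows and j < cols and similar[i][j] and d < dist[i + 1][j + 1]:
            dist[i + 1][j + 1] = d
            dq.appendleft((i + 1, j + 1))
        # down / right moves: each crosses one cell edge (cost 1)
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni <= rows and nj <= cols and d + 1 < dist[ni][nj]:
                dist[ni][nj] = d + 1
                dq.append((ni, nj))
    return dist[rows][cols]
```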
Optionally, the identification difference module 210 determines, according to the determined path, for each sub-path in the path that goes downward along a cell edge, the line image corresponding to that sub-path in the second image and adds a line increase identifier, and for each sub-path in the path that goes rightward along a cell edge, determines the line image corresponding to that sub-path in the second image and adds a line decrease identifier.
The present specification also provides a computer readable storage medium, which stores a computer program, and the computer program can be used for executing any one of the above methods for identifying image differences.
Based on the process of identifying image differences provided in fig. 2, an embodiment of the present specification further proposes the electronic device shown in fig. 7. As shown in fig. 7, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory; the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement any one of the above methods for identifying image differences.
Of course, besides a software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the above processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It should also be clear to those skilled in the art that a hardware circuit implementing a given logical method flow can be readily obtained simply by describing the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component, or even as both software modules for implementing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method of identifying image differences, comprising:
determining a first image and a second image which need to be compared;
performing, for each line of images in the two images, dimension reduction processing on the pixels of the line of images according to a preset dimension, and determining a line information code corresponding to the line of images according to a dimension reduction processing result;
establishing a comparison state table of the row information codes by respectively taking the row information codes corresponding to the first image and the row information codes corresponding to the second image as rows and columns, wherein each edge of a cell in the comparison state table is a sub-path;
aiming at each cell in the comparison state table, if the similarity between two line information codes corresponding to the cell is greater than a preset value, establishing a sub-path on a diagonal line of the cell;
determining, according to the sub-paths of the comparison state table, a path composed of sub-paths between a head cell of the comparison state table and a tail cell of the comparison state table, by taking passing through the fewest cell edges as an optimization target;
and identifying the difference of the two images according to the line image corresponding to the edge of the cell passing through the determined path.
2. The method according to claim 1, wherein before performing, for each line of images in the two images, dimension reduction processing on the pixels of the line of images according to the preset dimension and determining the line information code corresponding to the line of images according to the dimension reduction processing result, the method further comprises:
and respectively converting the two images into single-channel images.
3. The method according to claim 1 or 2, wherein determining the line information code corresponding to the line image according to the dimension reduction processing result specifically includes:
determining a line vector corresponding to the line of image obtained after the dimension reduction processing;
for each row element in the row vector, judging whether the numerical value of the row element is larger than the average value of the row elements of the row vector, if so, setting the row element as a first specified value, and if not, setting the row element as a second specified value;
and determining the line information code corresponding to the line image.
4. The method of claim 1, wherein the step of establishing a comparison state table of row information codes by using the row information codes corresponding to the first image and the row information codes corresponding to the second image as rows and columns respectively comprises:
determining row information codes corresponding to each column of a comparison state table according to the sequence of the images of each row of the first image from top to bottom;
determining a row information code corresponding to each row of the comparison state table according to the sequence of the images of each row of the second image from top to bottom;
and establishing the comparison state table, wherein each cell in the comparison state table corresponds to one line of information code in the first image and one line of information code in the second image, and the cell is used for representing the similarity of the two line of information codes.
5. The method of claim 1, wherein if the similarity between two row information codes corresponding to the cell is greater than a predetermined value, establishing a sub-path on a diagonal of the cell, specifically comprising:
determining the number of codes with the same code position and the same code value according to the two line information codes corresponding to the cell;
and if the ratio of the determined coding quantity to the preset dimension is larger than a preset value, establishing a sub-path on the diagonal line of the cell.
6. The method according to claim 4, wherein determining, according to the sub-paths of the comparison state table, the path between the head cell of the comparison state table and the tail cell of the comparison state table by taking passing through the fewest cell edges as the optimization target specifically comprises:
determining, through a difference algorithm and according to the sub-paths of the comparison state table, the path passing through the fewest cell edges from the head cell to the tail cell of the comparison state table.
7. The method according to claim 4, wherein identifying the difference between the two images according to the line image corresponding to the edge of the cell passing through the determined path specifically comprises:
according to the determined path, determining a line image corresponding to each sub-path in the path along the downward edge of the cell, and adding a line increasing identifier;
and determining a line image corresponding to each sub-path to the right along the edge of the cell in the path, and adding a line reduction identifier.
8. An apparatus for identifying image differences, comprising:
the determining module is used for determining a first image and a second image which need to be compared;
the encoding module is used for performing dimension reduction processing on pixels of the line of images according to a preset dimension aiming at each line of images in the two images and determining line information encoding corresponding to the line of images according to a dimension reduction processing result;
the table building module is used for establishing a comparison state table of the line information codes by respectively taking the line information codes corresponding to the first image and the line information codes corresponding to the second image as rows and columns, wherein each edge of a cell in the comparison state table is a sub-path;
the similarity calculation module is used for establishing a sub-path on a diagonal line of each cell if the similarity between two line information codes corresponding to the cell is greater than a preset value for each cell in the comparison state table;
the path planning module is used for determining, according to the sub-paths of the comparison state table, a path composed of sub-paths between a head cell of the comparison state table and a tail cell of the comparison state table, by taking passing through the fewest cell edges as an optimization target;
and the identification difference module is used for identifying the differences between the two images according to the line images corresponding to the cell edges the determined path passes through.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-7 when executing the program.
CN202010689863.XA 2020-07-17 2020-07-17 Method and device for identifying image difference Active CN111881959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010689863.XA CN111881959B (en) 2020-07-17 2020-07-17 Method and device for identifying image difference

Publications (2)

Publication Number Publication Date
CN111881959A true CN111881959A (en) 2020-11-03
CN111881959B CN111881959B (en) 2024-03-08

Family

ID=73155594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010689863.XA Active CN111881959B (en) 2020-07-17 2020-07-17 Method and device for identifying image difference

Country Status (1)

Country Link
CN (1) CN111881959B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004208252A (en) * 2002-11-07 2004-07-22 Sangaku Renkei Kiko Kyushu:Kk Image coding apparatus and method therefor
CN105095903A (en) * 2015-07-16 2015-11-25 努比亚技术有限公司 Electronic equipment and image processing method
CN108984399A (en) * 2018-06-29 2018-12-11 上海连尚网络科技有限公司 Detect method, electronic equipment and the computer-readable medium of interface difference

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EYAL SAGI et al.: "What Difference Reveals About Similarity", Cognitive Science *
LI Zhiqing; SHI Zhiping; LI Zhixin; SHI Zhongzhi: "Sparse coding model based on structural similarity", Journal of Software, no. 10 *
LU Tianran; YU Fengqin; CHEN Ying: "A human action recognition method based on dimension reduction by linear sequence discrepancy analysis", Computer Engineering, no. 003 *

Also Published As

Publication number Publication date
CN111881959B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN109189682B (en) Script recording method and device
US10929628B2 (en) QR code positioning method and apparatus
CN107274442B (en) Image identification method and device
CN109034183B (en) Target detection method, device and equipment
CN107025480B (en) Image generation method and apparatus thereof
CN107622080B (en) Data processing method and equipment
CN110806847A (en) Distributed multi-screen display method, device, equipment and system
CN109978044B (en) Training data generation method and device, and model training method and device
CN111368902A (en) Data labeling method and device
CN114626437A (en) Model training method and device, storage medium and electronic equipment
WO2020181522A1 (en) Defective pixel detection method, image processing chip, and electronic device
CN111881959B (en) Method and device for identifying image difference
CN114863206A (en) Model training method, target detection method and device
CN112949642B (en) Character generation method and device, storage medium and electronic equipment
CN115130621A (en) Model training method and device, storage medium and electronic equipment
CN112560530B (en) Two-dimensional code processing method, device, medium and electronic device
CN107239270B (en) Code processing method and device
CN115018866A (en) Boundary determining method and device, storage medium and electronic equipment
CN110866478B (en) Method, device and equipment for identifying object in image
CN109325127B (en) Risk identification method and device
CN112381905A (en) Color superposition method and device and electronic equipment
CN111898615A (en) Feature extraction method, device, equipment and medium of object detection model
CN111899264A (en) Target image segmentation method, device and medium
CN111539962A (en) Target image classification method, device and medium
CN110503109B (en) Image feature extraction method and device, and image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant