CN108629786B - Image edge detection method and device - Google Patents

Image edge detection method and device

Info

Publication number: CN108629786B
Authority: CN (China)
Application number: CN201710180247.XA
Other languages: Chinese (zh)
Other versions: CN108629786A
Inventors: 陈欢, 彭晓峰, 朱洪波, 王微, 宋利伟
Assignee (original and current): Spreadtrum Communications Shanghai Co Ltd
Application filed by Spreadtrum Communications Shanghai Co Ltd
Legal status: Active (granted)

Landscapes

  • Image Analysis (AREA)

Abstract

An image edge detection method and device, the method comprising: acquiring an input original image; selecting a horizontal input matrix and a vertical input matrix from the original image; dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks, where adjacent pixel blocks among the plurality of pixel blocks partially overlap; and determining the edge area of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix. The scheme improves the accuracy of image edge detection and thereby the imaging quality of the mobile terminal.

Description

Image edge detection method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting an image edge.
Background
Edge detection has long been one of the classical research topics in image processing; it is used to find information about shape and about reflectance or transmittance in an image. It is a basic step in image processing, image analysis, pattern recognition, computer vision, and human vision. The correctness and reliability of edge detection results directly affect a machine vision system's understanding of the objective world.
As technology advances, mobile terminals have become increasingly widespread, and users' expectations of their capabilities, especially the imaging performance of the mobile terminal's camera, have risen accordingly.
However, existing image edge detection methods suffer from poor detection accuracy, which degrades the imaging quality of the mobile terminal.
Disclosure of Invention
Embodiments of the invention address the technical problem of improving the accuracy of image edge detection and thereby the imaging quality of the mobile terminal.
In order to solve the above problem, an embodiment of the present invention provides an image edge detection method, where the method includes: acquiring an input original image; selecting a horizontal input matrix and a vertical input matrix from the original image; dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks, where adjacent pixel blocks among the plurality of pixel blocks partially overlap; and determining the edge area of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix.
Optionally, determining the edge region of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix includes: calculating the similarity values of horizontally adjacent pixel blocks in the horizontal input matrix and of vertically adjacent pixel blocks in the vertical input matrix, to obtain a corresponding horizontal-direction similarity matrix and vertical-direction similarity matrix; comparing each similarity value in the calculated horizontal-direction and vertical-direction similarity matrices with a corresponding intensity threshold; generating a corresponding horizontal intensity matrix and vertical intensity matrix according to the comparison results; generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix; and extracting edge information in the corresponding directions from the image edge intensity matrix.
Optionally, the similarity value of horizontally adjacent pixel blocks in the horizontal input matrix and the similarity value of vertically adjacent pixel blocks in the vertical input matrix are each calculated with the following formula: D = ∑|v_block1[i] − v_block2[i]|; where i denotes the ith pixel point in a pixel block, D denotes the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 denote the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points of the adjacent pixel blocks are pixel points of the same channel, and v_block1[i] and v_block2[i] denote the pixel values of the ith pixel point in the respective adjacent pixel blocks.
Optionally, generating the corresponding horizontal intensity matrix and vertical intensity matrix according to the comparison results between each similarity value in the horizontal-direction and vertical-direction similarity matrices and the corresponding intensity thresholds includes: when the similarity value is less than or equal to a preset first intensity threshold, recording the corresponding intensity value in the corresponding intensity matrix as a first value; when the similarity value is greater than the first intensity threshold and less than or equal to a preset second intensity threshold, recording it as a second value, the second value being greater than the first value; when the similarity value is greater than the second intensity threshold and less than or equal to a preset third intensity threshold, recording it as a third value, the third value being greater than the second value; and when the similarity value is greater than the third intensity threshold, recording it as a fourth value, the fourth value being greater than the third value.
Optionally, the generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix includes:
[Formula (2): rendered as an image in the original publication and not reproduced here.]
where flag_str[i, j] denotes the edge intensity value in row i, column j of the image edge intensity matrix, h[i, j] denotes the intensity value in row i, column j of the horizontal intensity matrix, v[i, j] denotes the intensity value in row i, column j of the vertical intensity matrix, and N denotes the number of rows and columns of the horizontal and vertical similarity matrices.
Optionally, the extracting edge information in a corresponding direction from the image edge intensity matrix includes: acquiring edge intensity values in the corresponding direction in the image edge intensity matrix; when the number of the edge intensity values which are larger than the preset edge intensity threshold value in the corresponding direction is larger than the preset number threshold value, determining that the corresponding direction is an edge direction; calculating to obtain the sum of the edge strength values in the edge direction; and judging the strength of the edge based on the sum of the edge strength values in the edge direction.
Optionally, extracting edge information in a corresponding direction from the image edge intensity matrix further includes: when only one edge direction exists, or when two or more edge directions are determined that do not lie on the same straight line, determining the pixel center point of the horizontal input matrix and the vertical input matrix as a corner point.
An embodiment of the present invention further provides an image edge detection apparatus, including: an acquisition unit adapted to acquire an input original image; a selecting unit adapted to select a horizontal input matrix and a vertical input matrix from the original image; a dividing unit adapted to divide the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks, where adjacent pixel blocks among the plurality of pixel blocks partially overlap; and a detection unit adapted to determine the edge area of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix.
Optionally, the detecting unit is adapted to calculate the similarity values of horizontally adjacent pixel blocks in the horizontal input matrix and of vertically adjacent pixel blocks in the vertical input matrix, to obtain a corresponding horizontal-direction similarity matrix and vertical-direction similarity matrix; compare each similarity value in the calculated horizontal-direction and vertical-direction similarity matrices with a corresponding intensity threshold; generate a corresponding horizontal intensity matrix and vertical intensity matrix according to the comparison results; generate a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix; and extract edge information in the corresponding directions from the image edge intensity matrix.
Optionally, the detecting unit is adapted to calculate the similarity value of horizontally adjacent pixel blocks in the horizontal input matrix and the similarity value of vertically adjacent pixel blocks in the vertical input matrix with the following formula: D = ∑|v_block1[i] − v_block2[i]|; where i denotes the ith pixel point in a pixel block, D denotes the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 denote the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points of the adjacent pixel blocks are pixel points of the same channel, and v_block1[i] and v_block2[i] denote the pixel values of the ith pixel point in the respective adjacent pixel blocks.
Optionally, the detection unit is adapted to record a corresponding intensity value in the corresponding intensity matrix as a first value when the similarity value is less than or equal to a preset first intensity threshold; when the similarity value is larger than the first intensity threshold and smaller than or equal to a preset second intensity threshold, recording a corresponding intensity value in a corresponding intensity matrix as a second value; the second value is greater than the first value; when the similarity value is greater than the second intensity threshold and less than or equal to a preset third intensity threshold, recording a corresponding intensity value in the corresponding intensity matrix as a third value; the third value is greater than the second value; when the similarity value is larger than the third intensity threshold value, recording a corresponding intensity value in the corresponding intensity matrix as a fourth value; the fourth value is greater than the third value.
Optionally, the detecting unit is adapted to generate a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix in the following manner:
[Formula (2): rendered as an image in the original publication and not reproduced here.]
where flag_str[i, j] denotes the edge intensity value in row i, column j of the image edge intensity matrix, h[i, j] denotes the intensity value in row i, column j of the horizontal intensity matrix, v[i, j] denotes the intensity value in row i, column j of the vertical intensity matrix, and N denotes the number of rows and columns of the horizontal and vertical similarity matrices.
Optionally, the detecting unit is adapted to obtain an edge intensity value in a corresponding direction in the image edge intensity matrix; when the number of the edge intensity values which are larger than the preset edge intensity threshold value in the corresponding direction is larger than the preset number threshold value, determining that the corresponding direction is an edge direction; calculating to obtain the sum of the edge strength values in the edge direction; and judging the strength of the edge based on the sum of the edge strength values in the edge direction.
Optionally, the detecting unit is further adapted to determine the pixel center point of the horizontal input matrix and the vertical input matrix as a corner point when only a unique edge direction exists, or when two or more edge directions are determined that do not lie on the same straight line.
Compared with the prior art, the technical scheme of the invention has the following advantages:
In the above scheme, the corresponding image edges are determined from the similarity between adjacent pixel blocks in the horizontal and vertical input matrices selected from the original image, which reduces the computational complexity of image edge detection while preserving detection accuracy.
Further, when a direction is determined to be an edge direction, the sum of the edge strength values in that direction can be calculated, so that the edge strength in the corresponding direction is accurately distinguished and the quality of image processing is improved.
Furthermore, when only a unique edge direction exists, or when two or more determined edge directions do not lie on the same straight line, the corresponding pixel center point is determined as a corner point, so that corner information in the image is accurately identified and retained during image processing, improving the imaging quality.
Drawings
FIG. 1 is a flow chart of a method for detecting image edges in an embodiment of the present invention;
FIG. 2 is a flow chart of another method of image edge detection in an embodiment of the invention;
FIG. 3 is a schematic illustration of raw image data in an embodiment of the invention;
FIG. 4 is a schematic diagram of a horizontal input matrix and a position relationship between adjacent pixel blocks therein according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a vertical input matrix and a position relationship between adjacent pixel blocks therein according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of various directions in an image edge intensity matrix in an embodiment of the invention;
fig. 7 is a schematic structural diagram of an image edge detection apparatus in an embodiment of the present invention.
Detailed Description
As noted in the Background, prior-art image edge detection methods perform edge detection using the similarity between individual pixels, and suffer from low speed and low accuracy.
In order to solve the above problems, in the technical solution of the embodiments of the present invention, the corresponding image edges are determined from the similarity between adjacent pixel blocks in the horizontal and vertical input matrices selected from the original image, which reduces the computational complexity of image edge detection and improves its accuracy.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 shows a flowchart of an image edge detection method in an embodiment of the present invention. The image edge detection method shown in fig. 1 may include:
step S101: an input original image is acquired.
In a specific implementation, the size of the input original image may be set according to actual needs, for example, may be set to 9 × 9, 13 × 13, or 15 × 15, etc.
Step S102: a horizontal input matrix and a vertical input matrix are selected from the original image.
In a specific implementation, the sizes of the horizontal input matrix and the vertical input matrix selected from the original image can be determined according to the requirement of subsequent similarity calculation.
Step S103: and dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks respectively.
In a specific implementation, the horizontal input matrix and the vertical input matrix may be divided into a plurality of corresponding pixel blocks with equal size according to actual needs. Meanwhile, adjacent pixel blocks in the plurality of pixel blocks obtained by dividing the horizontal input matrix are partially overlapped, and adjacent pixel blocks in the plurality of pixel blocks obtained by dividing the vertical input matrix are partially overlapped. The overlapping part between the adjacent pixel blocks in the horizontal input matrix and the overlapping part between the adjacent pixel blocks in the vertical input matrix can be set as required. For example, there is a column of overlapping pixels between adjacent blocks of pixels in the horizontal input matrix and a row of overlapping pixels in the vertical input matrix.
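The overlapping division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 3 × 3 block size and one-pixel overlap (stride 2) are the example values from this embodiment, and the helper name is hypothetical.

```python
import numpy as np

def split_overlapping_blocks(matrix, block=3, stride=2):
    """Split a 2-D array into block x block tiles; neighboring tiles
    overlap by (block - stride) rows/columns (here: one pixel)."""
    rows, cols = matrix.shape
    blocks = []
    for r in range(0, rows - block + 1, stride):
        for c in range(0, cols - block + 1, stride):
            blocks.append(matrix[r:r + block, c:c + block])
    return blocks

m = np.arange(25).reshape(5, 5)          # toy 5x5 input matrix
tiles = split_overlapping_blocks(m)      # 2x2 grid of 3x3 tiles
# horizontally adjacent tiles share exactly one column of pixels
assert np.array_equal(tiles[0][:, 2], tiles[1][:, 0])
```

The same routine covers both input matrices; only the direction in which adjacent blocks are paired differs.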
Step S104: and determining the edge area of the original image based on the similarity information between the adjacent pixel blocks in the horizontal input matrix and the vertical input matrix.
In specific implementation, once the horizontal input matrix and the vertical input matrix in the original image have each been divided into a plurality of corresponding pixel blocks, the edge region of the original image can be determined using the similarity information between adjacent pixel blocks. Compared with edge detection based on the similarity between individual pixel points, pixel blocks provide greater discrimination in edge regions, so the accuracy of image edge detection can be improved while the amount of computation is reduced and the detection speed increased.
According to the scheme, the corresponding image edge can be determined through the similarity between the adjacent pixel blocks in the horizontal input matrix and the vertical input matrix selected from the original image, the operation complexity of image edge detection can be reduced, and the accuracy of edge detection can be improved.
The image edge detection method in the embodiment of the present invention will be described in further detail below.
Fig. 2 shows a flow chart of another image edge detection method in an embodiment of the invention. Referring to fig. 2, an image edge detection method in the embodiment of the present invention is suitable for detecting an edge of an input original image, and may specifically be implemented by the following operations:
step S201: an input original image is acquired.
In specific implementation, the pixel points in the input original image may each be represented by a single channel component, with the channels arranged in an interleaved pattern.
Referring to fig. 3, taking an input original image of size 13 × 13 as an example, the channel component of each pixel may be one of a G component, an R component, or a B component, and adjacent pixels take different channel components in both the vertical and horizontal directions.
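In such an interleaved color-filter layout, the channel of a pixel depends only on the parity of its row and column, so two blocks whose origins differ by two columns (or two rows) pair same-channel pixels at every position — which is why the similarity formula below may compare pixels index by index. A small sketch, assuming an RGGB arrangement purely for illustration (the patent does not fix the layout):

```python
def bayer_channel(r, c):
    """Channel of pixel (r, c) in an illustrative RGGB pattern."""
    return [["R", "G"], ["G", "B"]][r % 2][c % 2]

# shifting by two columns preserves the channel at every block index
assert all(
    bayer_channel(r, c) == bayer_channel(r, c + 2)
    for r in range(3) for c in range(3)
)
```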
Step S202: a horizontal input matrix and a vertical input matrix are selected from the original image.
In a specific implementation, when the horizontal input matrix and the vertical input matrix are selected from the original image, the selection may be performed according to the size of the similarity matrix that needs to be generated in the subsequent step.
Referring to fig. 4 and 5, again taking an input original image of size 13 × 13 as an example, when the similarity matrix to be generated is 5 × 5 in size, the corresponding horizontal input matrix is the portion inside frame a, i.e., rows 2 to 12 and columns 0 to 12; each pixel block is 3 × 3 in size, and horizontally adjacent pixel blocks overlap by one column of pixels. The corresponding vertical input matrix is the portion inside frame b, i.e., rows 0 to 12 and columns 2 to 12; each pixel block is 3 × 3 in size, and vertically adjacent pixel blocks overlap by one row of pixels.
Step S203: and dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks respectively.
In a specific implementation, the number of pixel blocks obtained by dividing the horizontal input matrix and the number of pixel blocks obtained by dividing the vertical input matrix, and the overlapping portion between adjacent pixel blocks in the horizontal input matrix and the overlapping portion between adjacent pixel blocks in the vertical input matrix are related to the size of a similarity matrix generated subsequently.
Continuing to refer to fig. 4 and 5, when the input original image is 13 × 13 and the similarity matrix to be generated is 5 × 5, five pairs of adjacent pixel blocks are needed in each of the horizontal and vertical directions. The pixel blocks in the horizontal and vertical input matrices may then be set to 3 × 3, with one column of overlapping pixels between adjacent blocks in the horizontal input matrix and one row of overlapping pixels between adjacent blocks in the vertical input matrix. The adjacent pixel blocks in the horizontal and vertical input matrices are shown by the dashed boxes and the corresponding filled areas in fig. 4 and 5, respectively.
Step S204: and respectively calculating the similarity value of the adjacent pixel block in the horizontal direction in the horizontal input matrix and the similarity value of the adjacent pixel block in the vertical direction in the vertical input matrix to obtain a corresponding horizontal direction similarity matrix and a corresponding vertical direction similarity matrix.
In an embodiment of the present invention, the following formulas are adopted to calculate the similarity value of the adjacent pixel block in the horizontal direction in the horizontal input matrix and the similarity value of the adjacent pixel block in the vertical direction in the vertical input matrix respectively:
D = ∑|v_block1[i] − v_block2[i]|    (1)
where i denotes the ith pixel point in a pixel block, D denotes the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 denote the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points of the adjacent pixel blocks are pixel points of the same channel, and v_block1[i] and v_block2[i] denote the pixel values of the ith pixel point in the respective adjacent pixel blocks.
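Formula (1) is a sum of absolute differences over same-channel pixel pairs. A minimal sketch (the function name is hypothetical; same-shape blocks with same-channel pixels at corresponding indices are assumed, as in the layout above):

```python
import numpy as np

def block_similarity(block1, block2):
    """D = sum over i of |v_block1[i] - v_block2[i]|  (formula (1))."""
    # cast to int first so unsigned pixel values cannot wrap around
    return int(np.abs(block1.astype(int) - block2.astype(int)).sum())

b1 = np.full((3, 3), 10)
b2 = b1.copy()
b2[1, 1] = 12       # differs by 2
b2[2, 2] = 13       # differs by 3
d = block_similarity(b1, b2)   # 2 + 3 = 5
```

A small D means the two blocks are similar (flat region); a large D indicates a discontinuity, i.e., a likely edge between them.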
Step S205: and comparing each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix obtained by calculation with a corresponding intensity threshold respectively, and generating a corresponding horizontal intensity matrix and a corresponding vertical intensity matrix respectively according to a comparison result.
In specific implementation, when the corresponding horizontal direction similarity matrix and the vertical direction similarity matrix are obtained through calculation, the corresponding horizontal intensity matrix and the corresponding vertical intensity matrix can be generated through each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix.
In an embodiment of the present invention, the corresponding horizontal intensity matrix and vertical intensity matrix are determined as follows:
when the similarity value is less than or equal to a preset first strength threshold str_thresh0, the corresponding strength value flag_strength[i, j] in the corresponding strength matrix is recorded as a first value, such as 0;
when the similarity value is greater than the first strength threshold and less than or equal to a preset second strength threshold str_thresh1, the corresponding strength value flag_strength[i, j] is recorded as a second value greater than the first value, such as 1;
when the similarity value is greater than the second strength threshold and less than or equal to a preset third strength threshold str_thresh2, the corresponding strength value flag_strength[i, j] is recorded as a third value greater than the second value, such as 2;
when the similarity value is greater than the third strength threshold str_thresh2, the corresponding strength value flag_strength[i, j] is recorded as a fourth value greater than the third value, such as 3.
The values of the first strength threshold str_thresh0, the second strength threshold str_thresh1, and the third strength threshold str_thresh2 may be selected according to actual needs.
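The four-level quantization above can be sketched as follows. The threshold values are illustrative placeholders — as the text notes, the patent leaves them to the implementer — and the function name is hypothetical:

```python
def quantize_similarity(d, thresholds=(8, 16, 32)):
    """Map a similarity value D to one of four intensity levels
    (str_thresh0, str_thresh1, str_thresh2 are example values)."""
    t0, t1, t2 = thresholds
    if d <= t0:
        return 0   # first value: essentially flat, no edge
    if d <= t1:
        return 1   # second value: weak edge
    if d <= t2:
        return 2   # third value: medium edge
    return 3       # fourth value: strong edge
```

Applying this to every entry of the horizontal-direction and vertical-direction similarity matrices yields the horizontal and vertical intensity matrices, respectively.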
Step S206: and generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix.
In an embodiment of the present invention, when generating the corresponding horizontal intensity matrix and the vertical intensity matrix respectively, based on the horizontal intensity matrix and the vertical intensity matrix, the corresponding image edge intensity matrix may be generated by adopting a method including:
[Formula (2): rendered as an image in the original publication and not reproduced here.]
where flag_str[i, j] denotes the edge intensity value in row i, column j of the image edge intensity matrix, h[i, j] denotes the intensity value in row i, column j of the horizontal intensity matrix, v[i, j] denotes the intensity value in row i, column j of the vertical intensity matrix, and N denotes the number of rows and columns of the horizontal and vertical similarity matrices.
Step S207: and extracting edge information in the corresponding direction from the image edge intensity matrix.
In a specific implementation, when the corresponding image edge intensity matrix is obtained, the edge information in the corresponding direction may be extracted according to the edge intensity value in the image edge intensity matrix.
Referring to fig. 6, the directions in the image edge intensity matrix include a 0 degree direction, a 45 degree direction, a 90 degree direction, a 135 degree direction, a 180 degree direction, a 225 degree direction, a 270 degree direction, and a 315 degree direction. And extracting edge information in the corresponding direction from the image edge intensity matrix, namely determining whether the corresponding direction is the edge of the image or not according to the edge intensity value in the corresponding direction of the image edge intensity matrix.
In an embodiment of the present invention, when determining whether a given direction is an edge direction, the edge intensity values in that direction of the image edge intensity matrix may first be obtained, and it may be determined whether the number num_D of those edge intensity values that exceed a preset edge intensity threshold is greater than a preset number threshold num_thresh. When num_D is greater than num_thresh, the corresponding direction can be determined to be an edge direction; otherwise, it can be determined that the corresponding direction is not an edge direction.
In a specific implementation, when it is determined that the corresponding direction is the edge direction, the sum of the edge strength values in the edge direction may be obtained through calculation, and the strength of the edge is determined based on the sum of the edge strength values in the edge direction.
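The per-direction decision and strength computation can be sketched as follows. The function name and the default values for the edge intensity threshold and num_thresh are illustrative assumptions, not values fixed by the patent:

```python
def direction_edge_strength(strengths, edge_thresh=2, num_thresh=3):
    """Given the edge intensity values along one direction of the
    image edge intensity matrix, return the summed edge strength if
    the direction qualifies as an edge direction, else None.
    A direction qualifies when the count num_D of values above
    edge_thresh exceeds num_thresh."""
    num_d = sum(1 for s in strengths if s > edge_thresh)
    if num_d <= num_thresh:
        return None              # not an edge direction
    return sum(strengths)        # edge strength along this direction

strong = direction_edge_strength([3, 3, 3, 3, 0])   # num_D = 4 > 3
weak = direction_edge_strength([0, 3, 0, 0, 0])     # num_D = 1, rejected
```

Running this over each of the eight directions of fig. 6 yields the set of edge directions, whose summed strengths rank the edges by prominence.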
When the edge directions determined by the above method are two and lie on the same straight line, they can be confirmed as an edge direction of the image. However, when only one edge direction is determined, or when two or more edge directions are determined that do not lie on the same straight line, the center pixel point of the previously determined horizontal and vertical input matrices can be determined as a corner point. Retaining the corner-point information as far as possible during subsequent image processing avoids incorrectly updating the pixel values at corner points and thus improves image processing quality.
The image edge detection method in the embodiment of the present invention is described in detail above, and an apparatus corresponding to the method will be described below.
Fig. 7 shows a structure of an image edge detection apparatus in an embodiment of the present invention. Referring to fig. 7, an image edge detection apparatus 700 in the embodiment of the present invention may include an obtaining unit 701, a selecting unit 702, a dividing unit 703, and a detecting unit 704, where:
the acquiring unit 701 is adapted to acquire an input original image.
The selecting unit 702 is adapted to select a horizontal input matrix and a vertical input matrix from the original image.
The dividing unit 703 is adapted to divide the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks respectively; and adjacent pixel blocks in the plurality of pixel blocks are partially overlapped.
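The overlapping division performed by the dividing unit can be sketched as follows. The block size and step are illustrative assumptions; the patent only requires that adjacent blocks partially overlap, which holds whenever the step is smaller than the block size.

```python
import numpy as np

def split_overlapping(mat, block, step):
    """Split a 2-D input matrix into block x block pixel blocks whose
    top-left corners advance by `step` pixels, so that adjacent blocks
    overlap by (block - step) pixels in each direction.
    """
    blocks = []
    for r in range(0, mat.shape[0] - block + 1, step):
        for c in range(0, mat.shape[1] - block + 1, step):
            blocks.append(mat[r:r + block, c:c + block])
    return blocks
```

For a 4x4 matrix with 2x2 blocks and step 1, this yields nine blocks, and each block shares a full column (or row) with its horizontal (or vertical) neighbour.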
The detecting unit 704 is adapted to determine an edge region of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix.
In an embodiment of the present invention, the detecting unit 704 is adapted to calculate a similarity value of an adjacent pixel block in a horizontal direction in the horizontal input matrix and a similarity value of an adjacent pixel block in a vertical direction in the vertical input matrix, respectively, to obtain a corresponding horizontal direction similarity matrix and a corresponding vertical direction similarity matrix; comparing each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix obtained by calculation with a corresponding strength threshold value respectively; respectively generating a corresponding horizontal intensity matrix and a corresponding vertical intensity matrix according to the comparison result of each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix with the corresponding intensity threshold; generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix; and extracting edge information in the corresponding direction from the image edge intensity matrix.
In an embodiment of the present invention, the detecting unit 704 is adapted to calculate the similarity value of adjacent pixel blocks in the horizontal direction in the horizontal input matrix and the similarity value of adjacent pixel blocks in the vertical direction in the vertical input matrix respectively by using the following formula: D = ∑ |v_block1[i] − v_block2[i]|; wherein i represents the ith pixel point in a pixel block, D represents the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 respectively represent the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points in the adjacent pixel blocks are pixel points in the same channel, and v_block1[i], v_block2[i] respectively represent the pixel values of the ith pixel point in the adjacent pixel blocks.
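The similarity formula above is a sum of absolute differences over co-located pixels, which can be written directly:

```python
import numpy as np

def block_similarity(block1, block2):
    """Similarity value D = sum_i |v_block1[i] - v_block2[i]| between
    co-located (same-channel) pixels of two adjacent pixel blocks.
    Cast to a signed type first so 8-bit pixel values do not wrap."""
    return int(np.sum(np.abs(block1.astype(np.int64) - block2.astype(np.int64))))
```

A smaller D means the two blocks are more similar; an edge between the blocks shows up as a large D.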
In an embodiment of the present invention, the detecting unit 704 is adapted to mark a corresponding intensity value in a corresponding intensity matrix as a first value when the similarity value is smaller than or equal to a preset first intensity threshold; when the similarity value is larger than the first intensity threshold and smaller than or equal to a preset second intensity threshold, recording a corresponding intensity value in a corresponding intensity matrix as a second value; the second value is greater than the first value; when the similarity value is greater than the second intensity threshold and less than or equal to a preset third intensity threshold, recording a corresponding intensity value in the corresponding intensity matrix as a third value; the third value is greater than the second value; when the similarity value is larger than the third intensity threshold value, recording a corresponding intensity value in the corresponding intensity matrix as a fourth value; the fourth value is greater than the third value.
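The four-level thresholding above maps each similarity value to one of four increasing intensity values. A minimal sketch, assuming concrete intensity values 0..3 for the four levels; the patent only requires first < second < third < fourth.

```python
def quantize_similarity(d, t1, t2, t3, values=(0, 1, 2, 3)):
    """Map a similarity value d to an intensity value using three preset
    thresholds t1 < t2 < t3 (first, second, third intensity thresholds)."""
    if d <= t1:
        return values[0]  # first value: blocks very similar
    if d <= t2:
        return values[1]  # second value
    if d <= t3:
        return values[2]  # third value
    return values[3]      # fourth value: blocks very dissimilar
```

Applying this to every entry of the horizontal and vertical similarity matrices yields the horizontal and vertical intensity matrices.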
In an embodiment of the present invention, the detecting unit 704 is adapted to generate a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix in the following manner:
[Formula image BDA0001252758730000131 in the original: flag_str_{i,j} is derived from h_{i,j} and v_{i,j} for 1 ≤ i, j ≤ N; the image is not reproduced here.]
wherein flag_str_{i,j} represents the edge intensity value of the ith row and jth column in the image edge intensity matrix, h_{i,j} represents the intensity value of the ith row and jth column in the horizontal intensity matrix, v_{i,j} represents the intensity value of the ith row and jth column in the vertical intensity matrix, and N represents the number of rows and columns of the horizontal similarity matrix and the vertical similarity matrix.
In an embodiment of the present invention, the detecting unit 704 is adapted to obtain edge intensity values in corresponding directions in the image edge intensity matrix; when the number of the edge intensity values which are larger than the preset edge intensity threshold value in the corresponding direction is larger than the preset number threshold value, determining that the corresponding direction is an edge direction; calculating to obtain the sum of the edge strength values in the edge direction; and judging the strength of the edge based on the sum of the edge strength values in the edge direction.
In an embodiment of the present invention, the detecting unit 704 is further adapted to determine the pixel center points of the horizontal input matrix and the vertical input matrix as corner points when only a single edge direction exists, or the determined edge directions are more than two and are not on the same straight line.
By adopting the scheme in the embodiment of the invention, the corresponding image edge can be determined through the similarity between the adjacent pixel blocks in the horizontal input matrix and the vertical input matrix selected in the original image, the operation complexity of image edge detection can be reduced, and the accuracy of edge detection can be improved.
Further, when a direction is determined to be an edge direction, the sum of the edge intensity values in that direction can be obtained by calculation, so that the edge strength in the corresponding direction can be accurately distinguished and the quality of image processing can be improved.
Furthermore, when only a unique edge direction exists, or the determined edge directions are more than two and are not located on the same straight line, the corresponding pixel center point is determined as the corner point, so that the corner point information in the image can be accurately determined, the information of the corner point is reserved during image processing, and the imaging quality of the image is improved.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, which may include: ROM, RAM, magnetic disks, optical disks, and the like.
The method and system of the embodiments of the present invention have been described in detail, but the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An image edge detection method, comprising:
acquiring an input original image;
selecting a horizontal input matrix and a vertical input matrix from the original image;
dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks respectively; partial overlap between adjacent pixel blocks of the plurality of pixel blocks;
determining an edge region of the original image based on similarity information between adjacent pixel blocks in the horizontal input matrix and the vertical input matrix, including: respectively calculating the similarity value of the adjacent pixel block in the horizontal direction in the horizontal input matrix and the similarity value of the adjacent pixel block in the vertical direction in the vertical input matrix to obtain a corresponding horizontal direction similarity matrix and a corresponding vertical direction similarity matrix; comparing each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix obtained by calculation with a corresponding strength threshold value respectively; respectively generating a corresponding horizontal intensity matrix and a corresponding vertical intensity matrix according to the comparison result of each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix with the corresponding intensity threshold; generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix; and extracting edge information in the corresponding direction from the image edge intensity matrix.
2. The image edge detection method of claim 1, wherein the similarity value of the adjacent pixel block in the horizontal direction in the horizontal input matrix and the similarity value of the adjacent pixel block in the vertical direction in the vertical input matrix are respectively calculated by the following formulas:
D = ∑ |v_block1[i] − v_block2[i]|;
wherein i represents the ith pixel point in a pixel block, D represents the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 respectively represent the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points in the adjacent pixel blocks are pixel points in the same channel, and v_block1[i], v_block2[i] respectively represent the pixel values of the ith pixel point in the adjacent pixel blocks.
3. The image edge detection method according to claim 1, wherein the generating the corresponding horizontal intensity matrix and vertical intensity matrix according to the comparison result of each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix with the corresponding intensity threshold respectively comprises:
when the similarity value is smaller than or equal to a preset first intensity threshold value, recording a corresponding intensity value in a corresponding intensity matrix as a first value;
when the similarity value is larger than the first intensity threshold and smaller than or equal to a preset second intensity threshold, recording a corresponding intensity value in a corresponding intensity matrix as a second value; the second value is greater than the first value;
when the similarity value is greater than the second intensity threshold and less than or equal to a preset third intensity threshold, recording a corresponding intensity value in the corresponding intensity matrix as a third value; the third value is greater than the second value;
when the similarity value is larger than the third intensity threshold value, recording a corresponding intensity value in the corresponding intensity matrix as a fourth value; the fourth value is greater than the third value.
4. The image edge detection method of claim 1, wherein generating the corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix comprises:
[Formula image FDA0002489707240000021 in the original: flag_str_{i,j} is derived from h_{i,j} and v_{i,j} for 1 ≤ i, j ≤ N; the image is not reproduced here.]
wherein flag_str_{i,j} represents the edge intensity value of the ith row and jth column in the image edge intensity matrix, h_{i,j} represents the intensity value of the ith row and jth column in the horizontal intensity matrix, v_{i,j} represents the intensity value of the ith row and jth column in the vertical intensity matrix, and N represents the number of rows and columns of the horizontal similarity matrix and the vertical similarity matrix.
5. The image edge detection method according to any one of claims 1 to 4, wherein the extracting edge information in a corresponding direction from the image edge intensity matrix comprises:
acquiring edge intensity values in the corresponding direction in the image edge intensity matrix;
when the number of the edge intensity values which are larger than the preset edge intensity threshold value in the corresponding direction is larger than the preset number threshold value, determining that the corresponding direction is an edge direction;
calculating to obtain the sum of the edge strength values in the edge direction;
and judging the strength of the edge based on the sum of the edge strength values in the edge direction.
6. The image edge detection method according to claim 5, wherein the extracting edge information in corresponding directions from the image edge strength matrix further comprises:
and when only one edge direction exists or the determined edge directions are more than two and are not on the same straight line, determining the pixel center points of the horizontal input matrix and the vertical input matrix as corner points.
7. An image edge detection apparatus, comprising:
an acquisition unit adapted to acquire an input original image;
the selecting unit is suitable for selecting a horizontal input matrix and a vertical input matrix from the original image;
the dividing unit is suitable for dividing the horizontal input matrix and the vertical input matrix into a plurality of corresponding pixel blocks respectively; partial overlap between adjacent pixel blocks of the plurality of pixel blocks;
the detection unit is suitable for determining the edge area of the original image based on the similarity information between the adjacent pixel blocks in the horizontal input matrix and the vertical input matrix, and comprises the following steps: respectively calculating the similarity value of the adjacent pixel block in the horizontal direction in the horizontal input matrix and the similarity value of the adjacent pixel block in the vertical direction in the vertical input matrix to obtain a corresponding horizontal direction similarity matrix and a corresponding vertical direction similarity matrix; comparing each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix obtained by calculation with a corresponding strength threshold value respectively; respectively generating a corresponding horizontal intensity matrix and a corresponding vertical intensity matrix according to the comparison result of each similarity value in the horizontal direction similarity matrix and the vertical direction similarity matrix with the corresponding intensity threshold; generating a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix; and extracting edge information in the corresponding direction from the image edge intensity matrix.
8. The image edge detection apparatus according to claim 7, wherein the detection unit is adapted to calculate the similarity value of the pixel block adjacent in the horizontal direction in the horizontal input matrix and the similarity value of the pixel block adjacent in the vertical direction in the vertical input matrix respectively by using the following formulas:
D = ∑ |v_block1[i] − v_block2[i]|;
wherein i represents the ith pixel point in a pixel block, D represents the similarity value between the adjacent pixel blocks in the horizontal or vertical direction, block1 and block2 respectively represent the adjacent pixel blocks in the horizontal or vertical direction, the ith pixel points in the adjacent pixel blocks are pixel points in the same channel, and v_block1[i], v_block2[i] respectively represent the pixel values of the ith pixel point in the adjacent pixel blocks.
9. The image edge detection device according to claim 7, wherein the detection unit is adapted to record a corresponding intensity value in the corresponding intensity matrix as a first value when the similarity value is smaller than or equal to a preset first intensity threshold value; when the similarity value is larger than the first intensity threshold and smaller than or equal to a preset second intensity threshold, recording a corresponding intensity value in a corresponding intensity matrix as a second value; the second value is greater than the first value; when the similarity value is greater than the second intensity threshold and less than or equal to a preset third intensity threshold, recording a corresponding intensity value in the corresponding intensity matrix as a third value; the third value is greater than the second value; when the similarity value is larger than the third intensity threshold value, recording a corresponding intensity value in the corresponding intensity matrix as a fourth value; the fourth value is greater than the third value.
10. The image edge detection apparatus according to claim 7, wherein the detection unit is adapted to generate a corresponding image edge intensity matrix based on the horizontal intensity matrix and the vertical intensity matrix in the following manner:
[Formula image FDA0002489707240000041 in the original: flag_str_{i,j} is derived from h_{i,j} and v_{i,j} for 1 ≤ i, j ≤ N; the image is not reproduced here.]
wherein flag_str_{i,j} represents the edge intensity value of the ith row and jth column in the image edge intensity matrix, h_{i,j} represents the intensity value of the ith row and jth column in the horizontal intensity matrix, v_{i,j} represents the intensity value of the ith row and jth column in the vertical intensity matrix, and N represents the number of rows and columns of the horizontal similarity matrix and the vertical similarity matrix.
11. The image edge detection apparatus according to any one of claims 7-10, wherein the detection unit is adapted to obtain edge intensity values in corresponding directions in the image edge intensity matrix; when the number of the edge intensity values which are larger than the preset edge intensity threshold value in the corresponding direction is larger than the preset number threshold value, determining that the corresponding direction is an edge direction; calculating to obtain the sum of the edge strength values in the edge direction; and judging the strength of the edge based on the sum of the edge strength values in the edge direction.
12. The image edge detection device according to claim 11, wherein the detection unit is further adapted to determine pixel center points of the horizontal input matrix and the vertical input matrix as corner points when only a single edge direction exists, or the determined edge directions are more than two and are not on the same straight line.
CN201710180247.XA 2017-03-23 2017-03-23 Image edge detection method and device Active CN108629786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710180247.XA CN108629786B (en) 2017-03-23 2017-03-23 Image edge detection method and device


Publications (2)

Publication Number Publication Date
CN108629786A CN108629786A (en) 2018-10-09
CN108629786B true CN108629786B (en) 2020-07-21

Family

ID=63707480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710180247.XA Active CN108629786B (en) 2017-03-23 2017-03-23 Image edge detection method and device

Country Status (1)

Country Link
CN (1) CN108629786B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766028B (en) * 2019-10-23 2023-02-21 紫光展讯通信(惠州)有限公司 Pixel type determination method and device
CN113870297B (en) * 2021-12-02 2022-02-22 暨南大学 Image edge detection method and device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354783A (en) * 2008-08-21 2009-01-28 华为技术有限公司 Method and apparatus for detecting edge
CN101430789A (en) * 2008-11-19 2009-05-13 西安电子科技大学 Image edge detection method based on Fast Slant Stack transformation
CN102044071A (en) * 2010-12-28 2011-05-04 上海大学 Single-pixel margin detection method based on FPGA
CN104200442A (en) * 2014-09-19 2014-12-10 西安电子科技大学 Improved canny edge detection based non-local means MRI (magnetic resonance image) denoising method
CN104867160A (en) * 2015-06-17 2015-08-26 合肥工业大学 Directional calibration target for camera inner and outer parameter calibration
CN105303566A (en) * 2015-10-15 2016-02-03 电子科技大学 Target contour clipping-based SAR image target azimuth estimation method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant