CN117853510A - Canny edge detection method based on bilateral filtering and self-adaptive threshold - Google Patents

Canny edge detection method based on bilateral filtering and self-adaptive threshold

Info

Publication number
CN117853510A
Authority
CN
China
Prior art keywords
pixel point
gradient
edge
image
edge pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311722076.0A
Other languages
Chinese (zh)
Inventor
刘畅
姜佳利
安兴起
张欣然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Aerospace Control Devices
Original Assignee
Beijing Institute of Aerospace Control Devices
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Aerospace Control Devices filed Critical Beijing Institute of Aerospace Control Devices
Priority to CN202311722076.0A priority Critical patent/CN117853510A/en
Publication of CN117853510A publication Critical patent/CN117853510A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

A Canny edge detection method based on bilateral filtering and a self-adaptive threshold comprises the following steps: removing image noise with a bilateral filtering algorithm; calculating the image gradient with the Sobel edge detection operator; performing non-maximum suppression on each pixel point according to the detected image gradient to obtain edge pixel points; taking the threshold calculated by the OTSU algorithm as the high threshold and one third of the high threshold as the low threshold, screening the edge pixel points with the high and low thresholds, removing non-edge pixel points to obtain strong edge pixel points, and retaining weak edge pixel points for further processing; and screening the weak edge pixel points, suppressing and removing spurious weak edge pixel points, and connecting the real edge pixel points. Compared with the traditional Canny algorithm, the method replaces the Gaussian filtering algorithm with bilateral filtering, which removes noise interference while retaining the edge features of the image. Meanwhile, the OTSU algorithm is used for self-adaptive threshold selection, which improves the stability of the edge detection algorithm and yields a finer and more accurate edge curve.

Description

Canny edge detection method based on bilateral filtering and self-adaptive threshold
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a Canny edge detection method based on bilateral filtering and a self-adaptive threshold value.
Background
In the field of computer vision, image segmentation is an image processing technique necessary for object detection and localization. Common image segmentation methods include binarization, edge detection, clustering and the like, among which gradient detection operators such as Sobel, Roberts and LoG are widely used for image gradient detection. The Canny edge detection algorithm performs gradient detection with the Sobel operator and is currently the most widely used edge detection algorithm. The traditional Canny edge detection algorithm mainly comprises five parts: Gaussian filtering, Sobel gradient detection, non-maximum suppression, hysteresis thresholding, and strong-weak edge connection.
To ensure the accuracy of edge detection, the traditional Canny edge detection algorithm was designed around three evaluation criteria: a low error rate, accurate edge localization, and a single response to each edge. As a result, the edge detection precision of the Canny algorithm is higher than that of edge detection operators such as the Prewitt, Roberts and Laplacian operators. However, the traditional Canny edge detection algorithm smooths the image with Gaussian filtering. Gaussian filtering suppresses the interference of Gaussian noise effectively, but it not only removes the Gaussian noise in the image, it also blurs the image edges to some extent, which affects the localization accuracy of edge detection. In addition, when the traditional Canny edge detection algorithm detects and connects the target edges, the double threshold is selected manually. Multiple attempts to find a suitable threshold waste computing resources, and the detected target edges are often not ideal.
In short, the traditional Canny edge detection algorithm performs noise reduction with Gaussian filtering, which can effectively remove Gaussian noise in the image but blurs the image edges and may lose important edge information. When thresholds are set to screen edge pixel points, the conventional method relies on experience to set them manually, and a lot of time is required to find suitable values.
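For reference, the traditional pipeline is available in OpenCV, where the double threshold must be supplied by hand; the following is a minimal Python sketch, in which the file name and the threshold values are illustrative assumptions rather than values from this application:

```python
import cv2

# Traditional Canny: Gaussian smoothing followed by cv2.Canny with manually chosen thresholds.
gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)   # Gaussian filtering blurs edges as well as noise
edges = cv2.Canny(blurred, 50, 150)             # low/high thresholds picked by trial and error
cv2.imwrite("edges_traditional.png", edges)
```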
Disclosure of Invention
The technical problem solved by this application is: a Canny edge detection method based on bilateral filtering and a self-adaptive threshold is provided, which addresses the problem that the detection precision of the Canny algorithm cannot meet requirements, improves the accuracy of edge detection, and improves the efficiency of image edge detection.
In order to improve the positioning accuracy of the Canny algorithm and save computing resources, a Canny edge detection method based on bilateral filtering and an adaptive threshold is provided: bilateral filtering replaces Gaussian filtering to preprocess the image, and the OTSU algorithm selects the adaptive threshold, which improves the edge detection quality and the edge detection accuracy.
The technical scheme provided by the application is as follows:
a Canny edge detection method based on bilateral filtering and self-adaptive threshold value comprises the following steps:
S1: establishing a plane coordinate system based on the original image, and removing gray image noise from the original image with a bilateral filtering algorithm to obtain a smooth image;
S2: calculating the image gradient of the smooth image with a gradient operator to obtain gradient matrices in the X-axis and Y-axis directions of the smooth image, and obtaining the gradient intensity matrix G_xy and the gradient direction θ of the smooth image from the gradient matrices in the X-axis and Y-axis directions;
S3: obtaining the gray gradient intensity at each pixel point of the smooth image from the gradient intensity matrix G_xy, and performing non-maximum suppression on the gray gradient intensity at each pixel point of the smooth image to obtain a plurality of pre-edge pixel points;
S4: performing adaptive high and low threshold calculation with the OTSU algorithm, the calculated threshold being used as the high threshold and 1/5-1/2 of the high threshold as the low threshold; comparing and screening the retained pre-edge pixel points against the high and low thresholds to obtain strong edge pixel points and weak edge pixel points;
S5: screening the weak edge pixel points and suppressing and removing spurious weak edge pixel points; the strong edge pixel points and the retained weak edge pixel points are the real edge pixel points, and the real edge pixel points are connected to obtain the target edge curve.
In the step S1, removing the gray image noise of the original image with the bilateral filtering algorithm to obtain the smooth image comprises:
f(x, y) = (1/ω_p) Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j)·I(i, j)
ω_s(i, j) = exp(−((i − x)² + (j − y)²)/(2σ_s²))
ω_r(i, j) = exp(−(I(i, j) − I(x, y))²/(2σ_r²))
ω_p = Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j)
wherein f(x, y) is the pixel point in the smoothed image, ω_s is the spatial-similarity weight, ω_r is the gray-similarity weight, ω_p is the normalization weight, Ω is the filter window, I(i, j) is the pixel point in the original image, I(x, y) is the original image, σ_s is the weight coefficient of the spatial-similarity weight ω_s, and σ_r is the weight coefficient of the gray-similarity weight ω_r.
In the step S2, calculating the image gradient of the smooth image with a gradient operator to obtain the gradient matrices in the X-axis and Y-axis directions of the smooth image comprises:
the matrix of the Sobel operator in the x direction is S_x and the matrix in the y direction is S_y; the gradient matrix of the image in the x direction is denoted G_x and the gradient matrix in the y direction is denoted G_y,
G_x = S_x * I
G_y = S_y * I
wherein I is the gray matrix of the smooth image and * denotes the cross-correlation operation; the direction along the x-axis is defined as positive from left to right, and the direction along the y-axis is defined as positive from top to bottom.
In the step S2, obtaining the gradient intensity matrix G_xy and the gradient direction θ of the smooth image from the gradient matrices in the X-axis and Y-axis directions of the smooth image comprises: the gradient intensity matrix G_xy and the gradient direction θ of the smooth image are
G_xy(i, j) = sqrt(G_x(i, j)² + G_y(i, j)²)
θ(i, j) = arctan(G_y(i, j)/G_x(i, j))
wherein G_x(i, j) is the gradient of the pixel point (i, j) in the X-axis direction and G_y(i, j) is the gradient of the pixel point (i, j) in the Y-axis direction.
In the step S3, performing non-maximum suppression on the gray gradient intensity at each pixel point of the smooth image to obtain a plurality of pre-edge pixel points comprises: taking one pixel point of the smooth image as the current pixel point, and comparing the gray gradient intensity of the current pixel point with the gray gradient intensities of the adjacent pixel points along the positive and negative gradient directions of the current pixel point; if the gray gradient intensity of the current pixel point is lower than that of an adjacent pixel point, the current pixel point is discarded, and if it is higher than the gray gradient intensities of the adjacent pixel points, the current pixel point is retained as a pre-edge pixel point; each pixel point of the smooth image is traversed to obtain a plurality of pre-edge pixel points.
In the step S3, performing non-maximum suppression on the gray gradient intensity at each pixel point of the smooth image to obtain a plurality of pre-edge pixel points uses linear interpolation for the comparison between the two adjacent pixel points distributed along the gradient direction, which specifically comprises:
dividing the region formed by the current pixel point and the adjacent pixel points distributed along the gradient direction into 8 parts by the gradient direction, the inverse gradient direction, the x direction and the y direction of the current pixel point; assuming that the gray gradient intensity of the current pixel point along the x direction is G_x(i, j), the gray gradient intensity along the y direction is G_y(i, j), and the combined gradient intensity is G_xy(i, j); judging the region in which the gradient direction of the current pixel point lies from the signs of G_x(i, j) and G_y(i, j); performing linear interpolation between the current pixel point and the adjacent pixel points in the positive and negative gradient directions of the combined gradient intensity to obtain the comparison gradient intensities G_up(i, j) and G_down(i, j) in the positive and negative directions,
G_up(i, j) = (1 − t)·G_xy(i, j+1) + t·G_xy(i−1, j+1)
G_down(i, j) = (1 − t)·G_xy(i, j−1) + t·G_xy(i+1, j−1)
where t is the tangent value of the gray gradient direction of the current pixel point, G_xy(i, j+1) is the gray gradient intensity at pixel point (i, j+1), G_xy(i−1, j+1) is the gray gradient intensity at pixel point (i−1, j+1), G_xy(i, j−1) is the gray gradient intensity at pixel point (i, j−1), and G_xy(i+1, j−1) is the gray gradient intensity at pixel point (i+1, j−1);
when G_up(i, j) = G_down(i, j) = 0, the pixel point has no gray gradient and the current pixel point is discarded.
The tangent value t is determined by the ratio of the gray gradient intensities G_y(i, j) and G_x(i, j) of the current pixel point.
In the step S4, the retained pre-edge pixel points are compared and screened according to the high threshold value and the low threshold value to obtain the strong edge pixel points and the weak edge pixel points, including, if the gray gradient intensity of the pre-edge pixel points is lower than the low threshold value, removing the pre-edge pixel points; if the gray gradient intensity of the pre-edge pixel point is higher than the high threshold value, reserving the pre-edge pixel point as a strong edge pixel point; if the gray gradient intensity of the pre-edge pixel point is higher than the low threshold value and lower than the high threshold value, marking the pre-edge pixel point as a weak edge pixel point.
In the step S5, suppressing and removing spurious weak edge pixel points comprises: comparing the weak edge pixel point with the adjacent pixel points in the 8 regions around it to judge whether the weak edge pixel point can be used as an edge pixel point; if at least one adjacent pixel point in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is retained; if none of the adjacent pixel points in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is not retained.
Compared with the traditional Canny edge detection algorithm, the method adopts the bilateral filtering algorithm to replace the Gaussian filtering algorithm, so that the edge characteristics in the image can be well reserved while noise interference is removed. Meanwhile, the OTSU algorithm is utilized for self-adaptive threshold selection, so that the stability of the edge detection method is improved, the generation of broken edges is reduced, the edge profile accuracy is improved, and the obtained edge curve is finer and more accurate.
In summary, the present application at least includes the following beneficial technical effects:
in order to improve the positioning accuracy of the Canny algorithm and save calculation resources, bilateral filtering is adopted to replace Gaussian filtering to preprocess the image, and OTSU algorithm is adopted to perform self-adaptive threshold selection. The edge curve detected by the improved algorithm has finer outline, fewer weak edge pixels and cleaner non-edge pixels, and the edge detection quality and the detection efficiency are improved.
Compared with the traditional Canny edge detection algorithm, the method replaces the Gaussian filtering algorithm with a bilateral filtering algorithm, so that the edge features in the image are well preserved while noise interference is removed. Meanwhile, the OTSU algorithm is used for self-adaptive threshold selection, which improves the stability of the edge detection algorithm, reduces broken edges, and improves the accuracy of the edge contour, so that the obtained edge curve is finer and more accurate. This lays a foundation for subsequent target detection and positioning work.
Drawings
FIG. 1 is a schematic diagram of linear interpolation of pixel gradient directions;
FIG. 2 is a flow chart of an algorithm of the present invention;
in fig. 3, (a) is an original test image 1, (b) is a result image detected by a conventional Canny edge detection algorithm, and (c) is a result image detected by a modified Canny edge detection algorithm of the present invention;
in fig. 4, (a) is an original test image 2, (b) is a result image detected by a conventional Canny edge detection algorithm, and (c) is a result image detected by a modified Canny edge detection algorithm of the present invention;
in fig. 5, (a) is an original test image 3, (b) is a result image detected by a conventional Canny edge detection algorithm, and (c) is a result image detected by a modified Canny edge detection algorithm of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the drawings and specific examples.
As shown in fig. 2, the algorithm flow of the present invention mainly includes: bilateral filtering, sobel pixel gradient detection, non-maximum value suppression, double-threshold processing and strong and weak edge connection.
The method comprises the following specific steps:
(1) Removing gray image noise from the original image with a bilateral filtering algorithm to obtain a smooth image.
The image is preprocessed with bilateral filtering, which removes noise interference in the image while retaining its edge information.
The formula for bilateral filtering is defined as follows:
f(x, y) = (1/ω_p) Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j)·I(i, j) (1)
ω_s(i, j) = exp(−((i − x)² + (j − y)²)/(2σ_s²)) (2)
ω_r(i, j) = exp(−(I(i, j) − I(x, y))²/(2σ_r²)) (3)
ω_p = Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j) (4)
where f(x, y) is the image after filtering, ω_s is the spatial-similarity weight, ω_r is the gray-similarity weight, ω_p is the normalization weight, Ω is the filter window centred on (x, y), and I(i, j) is the original image.
Bilateral filtering not only performs a weighted average of the pixel gray values in the pixel (gray) domain, but also takes into account the Euclidean distance between pixels in the spatial domain. To balance the influence of these two factors on image quality, a weight is introduced for each of them. In the spatial domain the weight ω_s is introduced: if its coefficient σ_s is larger, image noise is removed more thoroughly but the image becomes more blurred; if σ_s is smaller, the image remains sharper but noise removal is less effective. In the pixel domain the weight ω_r with coefficient σ_r is introduced, which controls how strictly the gray similarity to the center pixel is required. Pixels closer to the center pixel are assigned a higher spatial weight ω_s and pixels farther away a lower one, so pixels near an edge influence the edge more and pixels far from it influence it less. Therefore, processing the image with bilateral filtering not only protects the edges of the target to be detected but also improves the accuracy of edge detection.
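As an illustration of formulas (1)-(4), the following is a minimal NumPy sketch of the bilateral filter; the window radius and the values of σ_s and σ_r are illustrative assumptions, and in practice an optimized routine such as cv2.bilateralFilter can be used instead:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Plain bilateral filter following formulas (1)-(4); img is a grayscale image."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    # Spatial weight w_s depends only on the offset from the window centre (formula (2)).
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    w_s = np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Gray-similarity weight w_r (formula (3)) and normalisation weight w_p (formula (4)).
            w_r = np.exp(-(window - img[y, x]) ** 2 / (2.0 * sigma_r ** 2))
            weights = w_s * w_r
            out[y, x] = np.sum(weights * window) / np.sum(weights)  # formula (1)
    return out
```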
(2) Calculating the image gradient of the smooth image with the Sobel operator to obtain the gradient matrices of the smooth image in the X-axis and Y-axis directions.
The gradient matrices of the smooth image in the x and y directions are calculated with the Sobel gradient operator. The matrix of the Sobel operator in the x direction is denoted S_x and the matrix in the y direction is denoted S_y; both are 3x3 matrices:
S_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] (5)
S_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]] (6)
The gradient matrix of the smooth image in the x direction is denoted G_x and the gradient matrix in the y direction is denoted G_y:
G_x = S_x * I (7)
G_y = S_y * I (8)
where I is the gray matrix of the smooth image and * denotes the cross-correlation operation. The direction along the x-axis is defined as positive from left to right, and the direction along the y-axis is defined as positive from top to bottom. The gradient intensity matrix of the smooth image is G_xy and the gradient direction is θ:
G_xy(i, j) = sqrt(G_x(i, j)² + G_y(i, j)²) (9)
θ(i, j) = arctan(G_y(i, j)/G_x(i, j)) (10)
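A minimal Python sketch of this step is given below; it assumes the smoothed grayscale image is available as a float array and uses cv2.filter2D (which performs cross-correlation) to apply the kernels S_x and S_y of formulas (5)-(6):

```python
import cv2
import numpy as np

def sobel_gradients(smooth):
    """Gradient matrices G_x, G_y (formulas (7)-(8)) and magnitude/direction (formulas (9)-(10))."""
    S_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    S_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
    G_x = cv2.filter2D(smooth, cv2.CV_64F, S_x)   # cross-correlation with S_x
    G_y = cv2.filter2D(smooth, cv2.CV_64F, S_y)   # cross-correlation with S_y
    G_xy = np.hypot(G_x, G_y)                     # gradient intensity matrix
    theta = np.arctan2(G_y, G_x)                  # gradient direction
    return G_x, G_y, G_xy, theta
```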
(3) Performing non-maximum suppression on the gray gradient intensity at each detected pixel point.
The method specifically comprises: obtaining the gray gradient intensity at each pixel point from the gradient intensity matrix G_xy; comparing the gray gradient intensity of the current pixel point with the gray gradient intensities of the adjacent pixel points distributed along the positive and negative gradient directions; when the gray gradient intensity of the pixel point is the maximum, the pixel point is retained as a pre-edge pixel point, and when it is not the maximum, the pixel point is suppressed and is not taken as a pre-edge pixel point.
In order to locate the maximum point more accurately, linear interpolation is typically used for the comparison between the two adjacent pixel points distributed along the gradient direction. As shown in fig. 1, the region formed by the current pixel point G_xy(i, j) and the adjacent pixel points distributed along the gradient direction is divided into eight parts by the gradient direction, the inverse gradient direction, the x direction and the y direction. Let the gray gradient intensity of the current pixel point along the x direction be G_x(i, j), the gray gradient intensity along the y direction be G_y(i, j), and the combined gradient intensity be G_xy(i, j). The region in which the gradient direction of the current pixel point lies can be judged from the signs of G_x(i, j) and G_y(i, j). Then linear interpolation is performed between the current pixel point and the adjacent pixel points in the positive and negative gradient directions (the positive and negative directions of the combined gradient intensity) to obtain the comparison gradient intensities G_up(i, j) and G_down(i, j) in the positive and negative directions:
G_up(i, j) = (1 − t)·G_xy(i, j+1) + t·G_xy(i−1, j+1) (11)
G_down(i, j) = (1 − t)·G_xy(i, j−1) + t·G_xy(i+1, j−1) (12)
When G_x(i, j) = G_y(i, j) = 0, the pixel point has no gray gradient and is not an edge pixel point.
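A minimal Python sketch of interpolated non-maximum suppression is shown below. The row index i is taken to increase downward and the column index j to the right, matching the axis definitions above; the explicit handling of all eight regions is an assumption spelled out here for illustration, since formulas (11)-(12) write out the interpolation for only one of them:

```python
import numpy as np

def non_max_suppression(G_xy, G_x, G_y):
    """Keep a pixel only if its gradient intensity is a maximum along the gradient direction,
    comparing against values linearly interpolated between the two bracketing neighbours."""
    h, w = G_xy.shape
    out = np.zeros_like(G_xy)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx, gy = G_x[i, j], G_y[i, j]
            if gx == 0 and gy == 0:
                continue  # no gray gradient: discard the pixel
            if abs(gy) > abs(gx):        # gradient closer to the vertical (y) axis
                t = abs(gx) / abs(gy)
                if gx * gy > 0:
                    up = (1 - t) * G_xy[i + 1, j] + t * G_xy[i + 1, j + 1]
                    down = (1 - t) * G_xy[i - 1, j] + t * G_xy[i - 1, j - 1]
                else:
                    up = (1 - t) * G_xy[i + 1, j] + t * G_xy[i + 1, j - 1]
                    down = (1 - t) * G_xy[i - 1, j] + t * G_xy[i - 1, j + 1]
            else:                        # gradient closer to the horizontal (x) axis
                t = abs(gy) / abs(gx)
                if gx * gy > 0:
                    up = (1 - t) * G_xy[i, j + 1] + t * G_xy[i + 1, j + 1]
                    down = (1 - t) * G_xy[i, j - 1] + t * G_xy[i - 1, j - 1]
                else:
                    up = (1 - t) * G_xy[i, j + 1] + t * G_xy[i - 1, j + 1]
                    down = (1 - t) * G_xy[i, j - 1] + t * G_xy[i + 1, j - 1]
            if G_xy[i, j] >= up and G_xy[i, j] >= down:
                out[i, j] = G_xy[i, j]   # retained as a pre-edge pixel point
    return out
```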
(4) Performing adaptive high and low threshold calculation with the OTSU algorithm; the calculated threshold is taken as the high threshold, and one third of the high threshold as the low threshold.
Screening the reserved pre-edge pixel points through a high threshold value and a low threshold value, and removing the current pre-edge pixel points if the gray gradient strength of the current pre-edge pixel points is lower than the low threshold value; if the gradient of the current pre-edge pixel point is higher than the high threshold value, reserving the current pre-edge pixel point as a strong edge pixel point; if the gradient of the current pre-edge pixel point is higher than the low threshold and lower than the high threshold, marking the current pre-edge pixel point as a weak edge pixel point, and waiting for the next step of processing.
The OTSU algorithm is calculated as follows:
In a digital image of M×N pixels, let {0, 1, 2, …, L−1} be the L different gray levels and n_i the number of pixels with gray level i.
The total number of pixels in the digital image is therefore:
n = n_0 + n_1 + … + n_(L-1) (13)
p_i denotes the probability that a pixel point of the digital image has gray level i:
p_i = n_i / n (14)
and the p_i satisfy the condition:
Σ_(i=0)^(L-1) p_i = 1, p_i ≥ 0 (15)
the global threshold is set to T (k) =k, 0 < k < L-1. Segmentation of an image into foreground images C by a threshold T (k) 1 And background image C 2 . Wherein the gray value of the pixel point in the area where the foreground image is located is distributed in [0, k ]]Within this range, the gray values of the pixel points in the area of the background image are distributed in [ k+1, L-1 ]]Within this range.
Each pixel point is thenTo the foreground image region C 1 Probability p of (b) 1 (k) The method comprises the following steps:
each pixel point is divided into a background image area C 2 Probability p of (b) 2 (k) The method comprises the following steps:
The average gray value m_1(k) of the pixel points assigned to the foreground region C_1 is:
m_1(k) = (1/P_1(k))·Σ_(i=0)^(k) i·p_i (18)
The average gray value m_2(k) of the pixel points assigned to the background region C_2 is:
m_2(k) = (1/P_2(k))·Σ_(i=k+1)^(L-1) i·p_i (19)
The average gray value m_k of the pixel points whose gray values lie in the range [0, k] is:
m_k = Σ_(i=0)^(k) i·p_i (20)
The average gray value m_G of the whole digital image is:
m_G = Σ_(i=0)^(L-1) i·p_i (21)
From the above formulas:
P_1(k)·m_1(k) + P_2(k)·m_2(k) = m_G (22)
P_1(k) + P_2(k) = 1 (23)
The inter-class variance σ_B²(k) of the foreground image and the background image is then:
σ_B²(k) = P_1(k)·(m_1(k) − m_G)² + P_2(k)·(m_2(k) − m_G)² = (m_G·P_1(k) − m_k)² / (P_1(k)·(1 − P_1(k))) (24)
The global variance σ_G² of the entire digital image is:
σ_G² = Σ_(i=0)^(L-1) (i − m_G)²·p_i (25)
Let the separability measure be η = σ_B²(k)/σ_G², where 0 ≤ η ≤ 1.
From the above it can be seen that the larger the difference between m_1(k) and m_2(k), the larger the inter-class variance σ_B²(k). Since the global variance σ_G² is a constant, finding the optimal segmentation threshold k* amounts to maximizing the inter-class variance σ_B²(k), namely:
k* = argmax_(0≤k≤L−1) σ_B²(k) (26)
The high threshold is then the global threshold T(k*) = k*, and the low threshold is 0.33·T(k*) = 0.33·k*.
(5) Screening the weak edge pixel points, suppressing and removing spurious weak edge pixel points, determining the real edge pixel points, and connecting the real edge pixel points to obtain the target edge curve.
For a weak edge pixel point, whether it can be used as an edge pixel point is judged by examining the adjacent pixel points in the 8 regions around it. If at least one adjacent pixel point in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is retained as an edge pixel point; if none of the adjacent pixel points in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is not retained and is not taken as a real edge pixel point. The strong edge pixel points are real edge pixel points. The complete image edge is finally obtained.
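The following is a minimal Python sketch of this connection step; it performs a single pass over the weak edge pixel points, whereas an iterative or stack-based traversal would additionally propagate along chains of weak pixels (an implementation choice not fixed by the text above):

```python
import numpy as np

def connect_edges(strong, weak):
    """Keep a weak edge pixel only if at least one of its 8 neighbours is a strong edge pixel."""
    h, w = strong.shape
    edges = strong.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                edges[i, j] = True       # weak pixel adjacent to a strong pixel: real edge
    return edges
```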
In order to verify the practical effect of the invention, three test images were used for comparison experiments. The original test images in fig. 3(a) and fig. 4(a) are face images downloaded from the internet, with complex texture and a large number of detectable edge pixels. Fig. 3(b) and fig. 4(b) are the result images detected by the traditional Canny edge detection algorithm. Fig. 3(c) and fig. 4(c) are the result images detected by the improved Canny edge detection method. Fig. 5(a) is an image of an optical fiber pair interface, fig. 5(b) is the result image detected by the traditional Canny edge detection algorithm, and fig. 5(c) is the result image detected by the improved Canny edge detection method.
According to the comparison of the result images, the edge curve contour detected by the improved algorithm is finer, the number of weak edge pixels is smaller, the non-edge pixels are removed more cleanly, and the edge detection quality and the detection efficiency are improved.
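For completeness, the following sketch shows how the steps above can be chained end to end, using the illustrative functions sketched earlier; the file name and the bilateral filter parameters are assumptions:

```python
import cv2
import numpy as np

# End-to-end run of the improved pipeline sketched above.
gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)
smooth = cv2.bilateralFilter(gray, 9, 50, 50).astype(np.float64)   # bilateral filtering
G_x, G_y, G_xy, theta = sobel_gradients(smooth)                    # Sobel gradient detection
nms = non_max_suppression(G_xy, G_x, G_y)                          # non-maximum suppression
strong, weak = double_threshold(nms)                               # OTSU-based double threshold
edges = connect_edges(strong, weak)                                # strong-weak edge connection
cv2.imwrite("edges_improved.png", edges.astype(np.uint8) * 255)
```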
The foregoing detailed description has been provided for the purposes of illustration in connection with specific embodiments and exemplary examples, but such description is not to be construed as limiting the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications and improvements may be made to the technical solution of the present application and its embodiments without departing from the spirit and scope of the present application, and these all fall within the scope of the present application. The scope of the application is defined by the appended claims.
What is not described in detail in the present specification is a well known technology to those skilled in the art.

Claims (9)

1. The Canny edge detection method based on bilateral filtering and self-adaptive threshold is characterized by comprising the following steps:
s1, establishing a plane coordinate system based on an original image, and removing gray image noise of the original image by adopting a bilateral filtering algorithm to obtain a smooth image;
S2, calculating the image gradient of the smooth image with a gradient operator to obtain gradient matrices in the X-axis and Y-axis directions of the smooth image, and obtaining the gradient intensity matrix G_xy and the gradient direction θ of the smooth image from the gradient matrices in the X-axis and Y-axis directions of the smooth image;
S3, obtaining the gray gradient intensity at each pixel point of the smooth image from the gradient intensity matrix G_xy, and performing non-maximum suppression on the gray gradient intensity at each pixel point of the smooth image to obtain a plurality of pre-edge pixel points;
s4, performing self-adaptive high and low threshold calculation by using an OTSU algorithm, wherein the calculated threshold is used as a high threshold, and 1/5-1/2 of the high threshold is used as a low threshold;
comparing and screening reserved pre-edge pixel points according to a high threshold value and a low threshold value to obtain strong edge pixel points and weak edge pixel points;
and S5, screening the weak edge pixel points and suppressing and removing spurious weak edge pixel points, wherein the strong edge pixel points and the retained weak edge pixel points are the real edge pixel points, and the real edge pixel points are connected to obtain the target edge curve.
2. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 1, wherein in the step S1, removing the gray image noise of the original image with the bilateral filtering algorithm to obtain the smooth image comprises:
f(x, y) = (1/ω_p) Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j)·I(i, j)
ω_s(i, j) = exp(−((i − x)² + (j − y)²)/(2σ_s²))
ω_r(i, j) = exp(−(I(i, j) − I(x, y))²/(2σ_r²))
ω_p = Σ_(i,j)∈Ω ω_s(i, j)·ω_r(i, j)
wherein f(x, y) is the pixel point in the smoothed image, ω_s is the spatial-similarity weight, ω_r is the gray-similarity weight, ω_p is the normalization weight, Ω is the filter window, I(i, j) is the pixel point in the original image, I(x, y) is the original image, σ_s is the weight coefficient of the spatial-similarity weight ω_s, and σ_r is the weight coefficient of the gray-similarity weight ω_r.
3. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 1, wherein in the step S2, calculating the image gradient of the smooth image with a gradient operator to obtain the gradient matrices in the X-axis and Y-axis directions of the smooth image comprises:
the matrix of the Sobel operator in the x direction is S_x and the matrix in the y direction is S_y; the gradient matrix of the image in the x direction is denoted G_x and the gradient matrix in the y direction is denoted G_y,
G_x = S_x * I
G_y = S_y * I
wherein I is the gray matrix of the smooth image and * denotes the cross-correlation operation; the direction along the x-axis is defined as positive from left to right, and the direction along the y-axis is defined as positive from top to bottom.
4. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 3, characterized in that: in the step S2, obtaining the gradient intensity matrix G_xy and the gradient direction θ of the smooth image from the gradient matrices in the X-axis and Y-axis directions of the smooth image comprises: the gradient intensity matrix G_xy and the gradient direction θ of the smooth image are
G_xy(i, j) = sqrt(G_x(i, j)² + G_y(i, j)²)
θ(i, j) = arctan(G_y(i, j)/G_x(i, j))
wherein G_x(i, j) is the gradient of the pixel point (i, j) in the X-axis direction and G_y(i, j) is the gradient of the pixel point (i, j) in the Y-axis direction.
5. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 1, characterized in that: in the step S3, performing non-maximum suppression on the gray gradient intensity at each pixel point of the smooth image to obtain a plurality of pre-edge pixel points comprises: taking one pixel point of the smooth image as the current pixel point, and comparing the gray gradient intensity of the current pixel point with the gray gradient intensities of the adjacent pixel points along the positive and negative gradient directions of the current pixel point; if the gray gradient intensity of the current pixel point is lower than that of an adjacent pixel point, the current pixel point is discarded, and if it is higher than the gray gradient intensities of the adjacent pixel points, the current pixel point is retained as a pre-edge pixel point; each pixel point of the smooth image is traversed to obtain a plurality of pre-edge pixel points.
6. The Canny edge detection method based on bilateral filtering and adaptive thresholding according to claim 5, wherein in step S3, non-maximum suppression is performed on the gray gradient intensity at each pixel point of the smoothed image, a plurality of pre-edge pixel points are obtained, and the comparison is performed by using a linear interpolation method between two adjacent pixel points distributed along the gradient direction, specifically including:
dividing the region formed by the current pixel point and the adjacent pixel points distributed along the gradient direction into 8 parts by the gradient direction, the inverse gradient direction, the x direction and the y direction of the current pixel point; assuming that the gray gradient intensity of the current pixel point along the x direction is G_x(i, j), the gray gradient intensity along the y direction is G_y(i, j), and the combined gradient intensity is G_xy(i, j); judging the region in which the gradient direction of the current pixel point lies from the signs of G_x(i, j) and G_y(i, j); performing linear interpolation between the current pixel point and the adjacent pixel points in the positive and negative gradient directions of the combined gradient intensity to obtain the comparison gradient intensities G_up(i, j) and G_down(i, j) in the positive and negative directions,
G_up(i, j) = (1 − t)·G_xy(i, j+1) + t·G_xy(i−1, j+1)
G_down(i, j) = (1 − t)·G_xy(i, j−1) + t·G_xy(i+1, j−1)
where t is the tangent value of the gray gradient direction of the current pixel point (i, j), G_xy(i, j+1) is the gray gradient intensity at pixel point (i, j+1), G_xy(i−1, j+1) is the gray gradient intensity at pixel point (i−1, j+1), G_xy(i, j−1) is the gray gradient intensity at pixel point (i, j−1), and G_xy(i+1, j−1) is the gray gradient intensity at pixel point (i+1, j−1);
when G_up(i, j) = G_down(i, j) = 0, the pixel point has no gray gradient and the current pixel point is discarded.
7. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 6, characterized in that: the tangent value t is determined by the ratio of the gray gradient intensities G_y(i, j) and G_x(i, j) of the current pixel point.
8. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 1, wherein: in the step S4, the retained pre-edge pixel points are compared and screened according to the high threshold value and the low threshold value to obtain the strong edge pixel points and the weak edge pixel points, including, if the gray gradient intensity of the pre-edge pixel points is lower than the low threshold value, removing the pre-edge pixel points; if the gray gradient intensity of the pre-edge pixel point is higher than the high threshold value, reserving the pre-edge pixel point as a strong edge pixel point; if the gray gradient intensity of the pre-edge pixel point is higher than the low threshold value and lower than the high threshold value, marking the pre-edge pixel point as a weak edge pixel point.
9. The Canny edge detection method based on bilateral filtering and adaptive threshold according to claim 1, characterized in that: in the step S5, suppressing and removing spurious weak edge pixel points comprises comparing the weak edge pixel point with the adjacent pixel points in the 8 regions around it to judge whether the weak edge pixel point can be used as an edge pixel point; if at least one adjacent pixel point in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is retained; if none of the adjacent pixel points in the 8 regions around the weak edge pixel point is a strong edge pixel point, the weak edge pixel point is not retained.
CN202311722076.0A 2023-12-14 2023-12-14 Canny edge detection method based on bilateral filtering and self-adaptive threshold Pending CN117853510A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311722076.0A CN117853510A (en) 2023-12-14 2023-12-14 Canny edge detection method based on bilateral filtering and self-adaptive threshold

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311722076.0A CN117853510A (en) 2023-12-14 2023-12-14 Canny edge detection method based on bilateral filtering and self-adaptive threshold

Publications (1)

Publication Number Publication Date
CN117853510A true CN117853510A (en) 2024-04-09

Family

ID=90537466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311722076.0A Pending CN117853510A (en) 2023-12-14 2023-12-14 Canny edge detection method based on bilateral filtering and self-adaptive threshold

Country Status (1)

Country Link
CN (1) CN117853510A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118038280A (en) * 2024-04-15 2024-05-14 山东亿昌装配式建筑科技有限公司 Building construction progress monitoring and early warning method based on aerial image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination